00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v23.11" build number 937 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3599 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.133 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.135 The recommended git tool is: git 00:00:00.135 using credential 00000000-0000-0000-0000-000000000002 00:00:00.137 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.180 Fetching changes from the remote Git repository 00:00:00.182 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.216 Using shallow fetch with depth 1 00:00:00.216 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.216 > git --version # timeout=10 00:00:00.241 > git --version # 'git version 2.39.2' 00:00:00.241 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.261 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.261 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.691 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.702 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.713 Checking out Revision 44e7d6069a399ee2647233b387d68a938882e7b7 (FETCH_HEAD) 00:00:06.713 > git config core.sparsecheckout # timeout=10 00:00:06.724 > git read-tree -mu HEAD # timeout=10 00:00:06.739 > git checkout -f 44e7d6069a399ee2647233b387d68a938882e7b7 # timeout=5 00:00:06.756 Commit message: "scripts/bmc: Rework Get NIC Info cmd parser" 00:00:06.756 > git rev-list --no-walk 44e7d6069a399ee2647233b387d68a938882e7b7 # timeout=10 00:00:06.877 [Pipeline] Start of Pipeline 00:00:06.889 [Pipeline] library 00:00:06.891 Loading library shm_lib@master 00:00:06.891 Library shm_lib@master is cached. Copying from home. 00:00:06.906 [Pipeline] node 00:00:06.916 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:06.918 [Pipeline] { 00:00:06.928 [Pipeline] catchError 00:00:06.930 [Pipeline] { 00:00:06.943 [Pipeline] wrap 00:00:06.952 [Pipeline] { 00:00:06.960 [Pipeline] stage 00:00:06.962 [Pipeline] { (Prologue) 00:00:07.170 [Pipeline] sh 00:00:07.448 + logger -p user.info -t JENKINS-CI 00:00:07.464 [Pipeline] echo 00:00:07.465 Node: GP11 00:00:07.472 [Pipeline] sh 00:00:07.783 [Pipeline] setCustomBuildProperty 00:00:07.797 [Pipeline] echo 00:00:07.798 Cleanup processes 00:00:07.806 [Pipeline] sh 00:00:08.087 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.087 3584557 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.098 [Pipeline] sh 00:00:08.373 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.374 ++ grep -v 'sudo pgrep' 00:00:08.374 ++ awk '{print $1}' 00:00:08.374 + sudo kill -9 00:00:08.374 + true 00:00:08.388 [Pipeline] cleanWs 00:00:08.398 [WS-CLEANUP] Deleting project workspace... 00:00:08.398 [WS-CLEANUP] Deferred wipeout is used... 
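The cleanup step traced above kills any stale test processes from a previous run before the workspace is wiped. A minimal standalone sketch of that pgrep/kill pattern (simplified from the trace; the guard and the trailing "|| true" keep the step from failing when nothing is left to kill):

#!/usr/bin/env bash
# Kill leftover processes whose command line references the SPDK workspace.
workspace=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# pgrep -af prints "<pid> <full command line>"; drop the pgrep invocation
# itself and keep only the PID column.
pids=$(sudo pgrep -af "$workspace" | grep -v 'sudo pgrep' | awk '{print $1}')

# kill -9 with an empty list would error, so guard it and never fail the step.
[ -n "$pids" ] && sudo kill -9 $pids || true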
00:00:08.403 [WS-CLEANUP] done 00:00:08.409 [Pipeline] setCustomBuildProperty 00:00:08.426 [Pipeline] sh 00:00:08.702 + sudo git config --global --replace-all safe.directory '*' 00:00:08.783 [Pipeline] httpRequest 00:00:09.190 [Pipeline] echo 00:00:09.192 Sorcerer 10.211.164.101 is alive 00:00:09.201 [Pipeline] retry 00:00:09.203 [Pipeline] { 00:00:09.218 [Pipeline] httpRequest 00:00:09.222 HttpMethod: GET 00:00:09.223 URL: http://10.211.164.101/packages/jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz 00:00:09.223 Sending request to url: http://10.211.164.101/packages/jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz 00:00:09.235 Response Code: HTTP/1.1 200 OK 00:00:09.236 Success: Status code 200 is in the accepted range: 200,404 00:00:09.236 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz 00:00:10.902 [Pipeline] } 00:00:10.915 [Pipeline] // retry 00:00:10.923 [Pipeline] sh 00:00:11.201 + tar --no-same-owner -xf jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz 00:00:11.215 [Pipeline] httpRequest 00:00:11.598 [Pipeline] echo 00:00:11.600 Sorcerer 10.211.164.101 is alive 00:00:11.611 [Pipeline] retry 00:00:11.614 [Pipeline] { 00:00:11.631 [Pipeline] httpRequest 00:00:11.634 HttpMethod: GET 00:00:11.635 URL: http://10.211.164.101/packages/spdk_fa3ab73844ced08f4f9487f5de71d477ca5cf604.tar.gz 00:00:11.635 Sending request to url: http://10.211.164.101/packages/spdk_fa3ab73844ced08f4f9487f5de71d477ca5cf604.tar.gz 00:00:11.636 Response Code: HTTP/1.1 200 OK 00:00:11.637 Success: Status code 200 is in the accepted range: 200,404 00:00:11.637 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_fa3ab73844ced08f4f9487f5de71d477ca5cf604.tar.gz 00:00:30.996 [Pipeline] } 00:00:31.013 [Pipeline] // retry 00:00:31.020 [Pipeline] sh 00:00:31.307 + tar --no-same-owner -xf spdk_fa3ab73844ced08f4f9487f5de71d477ca5cf604.tar.gz 00:00:33.850 [Pipeline] sh 00:00:34.134 + git -C spdk log --oneline -n5 00:00:34.134 fa3ab7384 bdev/raid: Fix raid_bdev->sb null pointer 00:00:34.134 12fc2abf1 test: Remove autopackage.sh 00:00:34.134 83ba90867 fio/bdev: fix typo in README 00:00:34.134 45379ed84 module/compress: Cleanup vol data, when claim fails 00:00:34.134 0afe95a3a bdev/nvme: use bdev_nvme linker script 00:00:34.152 [Pipeline] withCredentials 00:00:34.163 > git --version # timeout=10 00:00:34.175 > git --version # 'git version 2.39.2' 00:00:34.194 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:34.197 [Pipeline] { 00:00:34.206 [Pipeline] retry 00:00:34.208 [Pipeline] { 00:00:34.223 [Pipeline] sh 00:00:34.508 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:00:35.092 [Pipeline] } 00:00:35.110 [Pipeline] // retry 00:00:35.115 [Pipeline] } 00:00:35.131 [Pipeline] // withCredentials 00:00:35.142 [Pipeline] httpRequest 00:00:35.603 [Pipeline] echo 00:00:35.605 Sorcerer 10.211.164.101 is alive 00:00:35.615 [Pipeline] retry 00:00:35.617 [Pipeline] { 00:00:35.631 [Pipeline] httpRequest 00:00:35.636 HttpMethod: GET 00:00:35.636 URL: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:35.637 Sending request to url: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:35.652 Response Code: HTTP/1.1 200 OK 00:00:35.652 Success: Status code 200 is in the accepted range: 200,404 00:00:35.653 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 
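The jbp, spdk, and dpdk sources are delivered as pinned tarballs from the internal package cache (the "Sorcerer" mirror) rather than cloned. A rough shell equivalent of one fetch-and-unpack round, using curl as a stand-in for the Jenkins httpRequest step (mirror address and archive name taken from the log above):

# Fetch a pinned source archive from the package mirror and unpack it in place.
mirror=http://10.211.164.101/packages
pkg=spdk_fa3ab73844ced08f4f9487f5de71d477ca5cf604.tar.gz

curl -fSs -o "$pkg" "$mirror/$pkg"   # -f: fail on HTTP errors instead of saving them
tar --no-same-owner -xf "$pkg"       # drop ownership recorded in the archive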
00:01:05.578 [Pipeline] } 00:01:05.596 [Pipeline] // retry 00:01:05.603 [Pipeline] sh 00:01:05.889 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:07.806 [Pipeline] sh 00:01:08.093 + git -C dpdk log --oneline -n5 00:01:08.093 eeb0605f11 version: 23.11.0 00:01:08.093 238778122a doc: update release notes for 23.11 00:01:08.093 46aa6b3cfc doc: fix description of RSS features 00:01:08.093 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:08.093 7e421ae345 devtools: support skipping forbid rule check 00:01:08.104 [Pipeline] } 00:01:08.117 [Pipeline] // stage 00:01:08.125 [Pipeline] stage 00:01:08.127 [Pipeline] { (Prepare) 00:01:08.145 [Pipeline] writeFile 00:01:08.158 [Pipeline] sh 00:01:08.442 + logger -p user.info -t JENKINS-CI 00:01:08.455 [Pipeline] sh 00:01:08.742 + logger -p user.info -t JENKINS-CI 00:01:08.756 [Pipeline] sh 00:01:09.037 + cat autorun-spdk.conf 00:01:09.037 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:09.037 SPDK_TEST_NVMF=1 00:01:09.037 SPDK_TEST_NVME_CLI=1 00:01:09.037 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:09.037 SPDK_TEST_NVMF_NICS=e810 00:01:09.037 SPDK_TEST_VFIOUSER=1 00:01:09.037 SPDK_RUN_UBSAN=1 00:01:09.037 NET_TYPE=phy 00:01:09.037 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:09.037 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:09.045 RUN_NIGHTLY=1 00:01:09.050 [Pipeline] readFile 00:01:09.072 [Pipeline] withEnv 00:01:09.074 [Pipeline] { 00:01:09.085 [Pipeline] sh 00:01:09.371 + set -ex 00:01:09.371 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:09.371 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:09.371 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:09.371 ++ SPDK_TEST_NVMF=1 00:01:09.371 ++ SPDK_TEST_NVME_CLI=1 00:01:09.371 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:09.371 ++ SPDK_TEST_NVMF_NICS=e810 00:01:09.371 ++ SPDK_TEST_VFIOUSER=1 00:01:09.371 ++ SPDK_RUN_UBSAN=1 00:01:09.371 ++ NET_TYPE=phy 00:01:09.371 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:09.371 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:09.371 ++ RUN_NIGHTLY=1 00:01:09.371 + case $SPDK_TEST_NVMF_NICS in 00:01:09.371 + DRIVERS=ice 00:01:09.371 + [[ tcp == \r\d\m\a ]] 00:01:09.371 + [[ -n ice ]] 00:01:09.371 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:09.371 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:09.371 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:09.371 rmmod: ERROR: Module irdma is not currently loaded 00:01:09.371 rmmod: ERROR: Module i40iw is not currently loaded 00:01:09.371 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:09.371 + true 00:01:09.371 + for D in $DRIVERS 00:01:09.371 + sudo modprobe ice 00:01:09.371 + exit 0 00:01:09.385 [Pipeline] } 00:01:09.432 [Pipeline] // withEnv 00:01:09.441 [Pipeline] } 00:01:09.450 [Pipeline] // stage 00:01:09.456 [Pipeline] catchError 00:01:09.457 [Pipeline] { 00:01:09.465 [Pipeline] timeout 00:01:09.466 Timeout set to expire in 1 hr 0 min 00:01:09.467 [Pipeline] { 00:01:09.477 [Pipeline] stage 00:01:09.479 [Pipeline] { (Tests) 00:01:09.490 [Pipeline] sh 00:01:09.775 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:09.775 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:09.775 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:09.775 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:09.775 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 
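The prepare stage writes autorun-spdk.conf, sources it, and then makes sure the NIC driver matching SPDK_TEST_NVMF_NICS=e810 is loaded before the tests start. A condensed sketch of that driver-prep step under the same assumptions as the trace above (TCP transport and e810 NICs, so only ice is needed; the real logic also covers the rdma case):

set -ex
conf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf

# Pull the test matrix (SPDK_TEST_NVMF, SPDK_TEST_NVMF_NICS, NET_TYPE, ...) into the shell.
[[ -f $conf ]] && source "$conf"

# e810 over TCP only needs the ice driver; stale RDMA modules are removed
# first, and "|| true" covers the case where they were never loaded.
sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 || true
for d in ice; do
    sudo modprobe "$d"
done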
00:01:09.775 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:09.775 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:09.775 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:09.775 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:09.775 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:09.775 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:09.775 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:09.775 + source /etc/os-release 00:01:09.775 ++ NAME='Fedora Linux' 00:01:09.775 ++ VERSION='39 (Cloud Edition)' 00:01:09.775 ++ ID=fedora 00:01:09.775 ++ VERSION_ID=39 00:01:09.775 ++ VERSION_CODENAME= 00:01:09.775 ++ PLATFORM_ID=platform:f39 00:01:09.775 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:09.775 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:09.775 ++ LOGO=fedora-logo-icon 00:01:09.775 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:09.775 ++ HOME_URL=https://fedoraproject.org/ 00:01:09.775 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:09.775 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:09.775 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:09.775 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:09.775 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:09.775 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:09.775 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:09.775 ++ SUPPORT_END=2024-11-12 00:01:09.775 ++ VARIANT='Cloud Edition' 00:01:09.775 ++ VARIANT_ID=cloud 00:01:09.775 + uname -a 00:01:09.775 Linux spdk-gp-11 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:09.775 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:10.714 Hugepages 00:01:10.714 node hugesize free / total 00:01:10.714 node0 1048576kB 0 / 0 00:01:10.714 node0 2048kB 0 / 0 00:01:10.714 node1 1048576kB 0 / 0 00:01:10.714 node1 2048kB 0 / 0 00:01:10.714 00:01:10.714 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:10.714 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:01:10.714 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:01:10.973 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:01:10.973 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:01:10.973 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:01:10.973 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:01:10.973 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:01:10.973 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:01:10.973 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:01:10.973 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:01:10.973 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:01:10.973 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:01:10.973 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:01:10.973 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:01:10.973 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:01:10.973 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:01:10.973 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:10.973 + rm -f /tmp/spdk-ld-path 00:01:10.973 + source autorun-spdk.conf 00:01:10.973 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:10.973 ++ SPDK_TEST_NVMF=1 00:01:10.973 ++ SPDK_TEST_NVME_CLI=1 00:01:10.973 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:10.973 ++ SPDK_TEST_NVMF_NICS=e810 00:01:10.973 ++ SPDK_TEST_VFIOUSER=1 00:01:10.973 ++ SPDK_RUN_UBSAN=1 00:01:10.973 ++ NET_TYPE=phy 00:01:10.973 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:10.973 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:10.973 ++ 
RUN_NIGHTLY=1 00:01:10.973 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:10.973 + [[ -n '' ]] 00:01:10.973 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:10.973 + for M in /var/spdk/build-*-manifest.txt 00:01:10.973 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:10.973 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:10.973 + for M in /var/spdk/build-*-manifest.txt 00:01:10.973 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:10.973 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:10.973 + for M in /var/spdk/build-*-manifest.txt 00:01:10.973 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:10.973 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:10.973 ++ uname 00:01:10.973 + [[ Linux == \L\i\n\u\x ]] 00:01:10.973 + sudo dmesg -T 00:01:10.973 + sudo dmesg --clear 00:01:10.973 + dmesg_pid=3585387 00:01:10.973 + [[ Fedora Linux == FreeBSD ]] 00:01:10.973 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:10.973 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:10.973 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:10.973 + [[ -x /usr/src/fio-static/fio ]] 00:01:10.973 + sudo dmesg -Tw 00:01:10.973 + export FIO_BIN=/usr/src/fio-static/fio 00:01:10.973 + FIO_BIN=/usr/src/fio-static/fio 00:01:10.973 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:10.973 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:10.973 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:10.973 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:10.973 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:10.973 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:10.973 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:10.973 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:10.973 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:10.973 11:13:11 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:01:10.973 11:13:11 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:10.973 11:13:11 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:10.973 11:13:11 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:01:10.973 11:13:11 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:01:10.973 11:13:11 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:10.973 11:13:11 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:01:10.973 11:13:11 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:01:10.973 11:13:11 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:01:10.973 11:13:11 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:01:10.973 11:13:11 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:10.973 11:13:11 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@10 -- $ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:10.973 11:13:11 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@11 -- $ RUN_NIGHTLY=1 00:01:10.973 11:13:11 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:10.973 11:13:11 -- 
spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:11.233 11:13:11 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:01:11.233 11:13:11 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:11.233 11:13:11 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:11.233 11:13:11 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:11.233 11:13:11 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:11.233 11:13:11 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:11.233 11:13:11 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:11.233 11:13:11 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:11.233 11:13:11 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:11.233 11:13:11 -- paths/export.sh@5 -- $ export PATH 00:01:11.233 11:13:11 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:11.233 11:13:11 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:11.233 11:13:11 -- common/autobuild_common.sh@486 -- $ date +%s 00:01:11.233 11:13:11 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1730542391.XXXXXX 00:01:11.233 11:13:11 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1730542391.iNoppr 00:01:11.234 11:13:11 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:01:11.234 11:13:11 -- common/autobuild_common.sh@492 -- $ '[' -n v23.11 ']' 00:01:11.234 11:13:11 -- common/autobuild_common.sh@493 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:11.234 11:13:11 -- common/autobuild_common.sh@493 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:01:11.234 11:13:11 -- 
common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:11.234 11:13:11 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:11.234 11:13:11 -- common/autobuild_common.sh@502 -- $ get_config_params 00:01:11.234 11:13:11 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:01:11.234 11:13:11 -- common/autotest_common.sh@10 -- $ set +x 00:01:11.234 11:13:11 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:01:11.234 11:13:11 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:01:11.234 11:13:11 -- pm/common@17 -- $ local monitor 00:01:11.234 11:13:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:11.234 11:13:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:11.234 11:13:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:11.234 11:13:11 -- pm/common@21 -- $ date +%s 00:01:11.234 11:13:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:11.234 11:13:11 -- pm/common@21 -- $ date +%s 00:01:11.234 11:13:11 -- pm/common@25 -- $ sleep 1 00:01:11.234 11:13:11 -- pm/common@21 -- $ date +%s 00:01:11.234 11:13:11 -- pm/common@21 -- $ date +%s 00:01:11.234 11:13:11 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730542391 00:01:11.234 11:13:11 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730542391 00:01:11.234 11:13:11 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730542391 00:01:11.234 11:13:11 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730542391 00:01:11.234 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730542391_collect-cpu-load.pm.log 00:01:11.234 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730542391_collect-vmstat.pm.log 00:01:11.234 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730542391_collect-cpu-temp.pm.log 00:01:11.234 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730542391_collect-bmc-pm.bmc.pm.log 00:01:12.176 11:13:12 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:01:12.176 11:13:12 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:12.176 11:13:12 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:12.176 11:13:12 -- spdk/autobuild.sh@13 -- 
$ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:12.176 11:13:12 -- spdk/autobuild.sh@16 -- $ date -u 00:01:12.176 Sat Nov 2 10:13:12 AM UTC 2024 00:01:12.176 11:13:12 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:12.176 v25.01-pre-124-gfa3ab7384 00:01:12.176 11:13:12 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:12.176 11:13:12 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:12.176 11:13:12 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:12.176 11:13:12 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:01:12.176 11:13:12 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:01:12.176 11:13:12 -- common/autotest_common.sh@10 -- $ set +x 00:01:12.176 ************************************ 00:01:12.176 START TEST ubsan 00:01:12.176 ************************************ 00:01:12.176 11:13:12 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan' 00:01:12.176 using ubsan 00:01:12.176 00:01:12.176 real 0m0.000s 00:01:12.176 user 0m0.000s 00:01:12.176 sys 0m0.000s 00:01:12.176 11:13:12 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:01:12.176 11:13:12 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:12.176 ************************************ 00:01:12.176 END TEST ubsan 00:01:12.176 ************************************ 00:01:12.176 11:13:12 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:01:12.176 11:13:12 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:12.176 11:13:12 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:12.176 11:13:12 -- common/autotest_common.sh@1103 -- $ '[' 2 -le 1 ']' 00:01:12.176 11:13:12 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:01:12.176 11:13:12 -- common/autotest_common.sh@10 -- $ set +x 00:01:12.176 ************************************ 00:01:12.176 START TEST build_native_dpdk 00:01:12.176 ************************************ 00:01:12.176 11:13:12 build_native_dpdk -- common/autotest_common.sh@1127 -- $ _build_native_dpdk 00:01:12.176 11:13:12 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:12.176 11:13:12 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:12.176 11:13:12 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:12.176 11:13:12 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:01:12.176 11:13:12 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:12.176 11:13:12 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:12.176 11:13:12 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:12.176 11:13:12 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:12.176 11:13:12 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:12.176 11:13:12 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:12.176 11:13:12 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:12.176 11:13:12 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:12.176 11:13:12 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:12.176 11:13:12 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:12.176 11:13:12 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:12.176 11:13:12 
build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:12.176 11:13:12 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:12.176 11:13:12 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:01:12.176 11:13:12 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:12.176 11:13:12 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:01:12.176 eeb0605f11 version: 23.11.0 00:01:12.176 238778122a doc: update release notes for 23.11 00:01:12.176 46aa6b3cfc doc: fix description of RSS features 00:01:12.176 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:12.176 7e421ae345 devtools: support skipping forbid rule check 00:01:12.176 11:13:12 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:12.176 11:13:12 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:12.176 11:13:12 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:01:12.176 11:13:12 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:12.176 11:13:12 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:12.176 11:13:12 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:12.176 11:13:12 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:12.176 11:13:12 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:12.176 11:13:12 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:12.176 11:13:12 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:12.176 11:13:12 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:12.176 11:13:12 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:12.176 11:13:12 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:12.176 11:13:12 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:12.176 11:13:12 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:12.176 11:13:12 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:01:12.176 11:13:12 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:12.176 11:13:12 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:01:12.176 11:13:12 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:01:12.176 11:13:12 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:12.176 11:13:12 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:12.176 11:13:12 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:12.176 11:13:12 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:12.176 11:13:12 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:12.176 11:13:12 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:12.176 11:13:12 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 
00:01:12.176 11:13:12 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:01:12.176 11:13:12 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:12.176 11:13:12 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:12.176 11:13:12 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:12.176 11:13:12 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:01:12.176 11:13:12 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:12.176 11:13:12 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:12.176 11:13:12 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:01:12.176 11:13:12 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:01:12.176 11:13:12 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:01:12.176 11:13:12 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:01:12.176 11:13:12 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:01:12.176 11:13:12 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:01:12.176 11:13:12 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:01:12.176 11:13:12 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:12.176 11:13:12 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:01:12.176 11:13:12 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:01:12.176 11:13:12 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:12.176 11:13:12 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:01:12.176 11:13:12 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:12.177 patching file config/rte_config.h 00:01:12.177 Hunk #1 succeeded at 60 (offset 1 line). 00:01:12.177 11:13:12 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 23.11.0 24.07.0 00:01:12.177 11:13:12 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:01:12.177 11:13:12 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:12.177 11:13:12 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:12.177 11:13:12 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:12.177 11:13:12 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:12.177 11:13:12 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:12.177 11:13:12 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:12.177 11:13:12 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:01:12.177 11:13:12 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:01:12.177 11:13:12 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:12.177 11:13:12 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:12.177 11:13:12 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:12.177 11:13:12 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:01:12.177 11:13:12 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:12.177 11:13:12 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:12.177 11:13:12 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:01:12.177 11:13:12 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:01:12.177 11:13:12 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:01:12.177 11:13:12 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:01:12.177 11:13:12 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:01:12.177 11:13:12 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:01:12.177 11:13:12 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:01:12.177 11:13:12 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:12.177 11:13:12 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:01:12.177 11:13:12 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:01:12.177 11:13:12 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:12.177 11:13:12 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:01:12.177 11:13:12 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:01:12.177 11:13:12 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:01:12.177 patching file lib/pcapng/rte_pcapng.c 00:01:12.177 11:13:12 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 23.11.0 24.07.0 00:01:12.177 11:13:12 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 23.11.0 '>=' 24.07.0 00:01:12.177 11:13:12 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:12.177 11:13:12 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:12.177 11:13:12 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:12.177 11:13:12 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:12.177 11:13:12 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:12.177 11:13:12 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:12.177 11:13:12 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:01:12.177 11:13:12 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:01:12.177 11:13:12 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:12.177 11:13:12 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:12.177 11:13:12 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:12.177 11:13:12 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:01:12.177 11:13:12 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:12.177 11:13:12 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:12.177 11:13:12 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:01:12.177 11:13:12 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:01:12.177 11:13:12 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:01:12.177 11:13:12 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:01:12.177 11:13:12 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:01:12.177 11:13:12 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:01:12.177 11:13:12 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:01:12.177 11:13:12 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:12.177 11:13:12 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:01:12.177 11:13:12 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:01:12.177 11:13:12 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:12.177 11:13:12 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:01:12.177 11:13:12 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:01:12.177 11:13:12 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:01:12.177 11:13:12 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:01:12.177 11:13:12 build_native_dpdk -- common/autobuild_common.sh@184 -- $ '[' Linux = FreeBSD ']' 00:01:12.177 11:13:12 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:12.177 11:13:12 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:16.382 The Meson build system 00:01:16.382 Version: 1.5.0 00:01:16.382 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:16.382 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:01:16.382 Build type: native build 00:01:16.382 Program cat found: YES (/usr/bin/cat) 00:01:16.382 Project name: DPDK 00:01:16.382 Project version: 23.11.0 00:01:16.382 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:16.382 C linker for the host machine: gcc ld.bfd 2.40-14 00:01:16.382 Host machine cpu family: x86_64 00:01:16.382 Host machine cpu: x86_64 00:01:16.382 Message: ## Building in Developer Mode ## 00:01:16.382 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:16.382 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:01:16.382 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:01:16.382 Program python3 found: YES (/usr/bin/python3) 00:01:16.382 Program cat found: YES (/usr/bin/cat) 00:01:16.382 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
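Most of the trace above is scripts/common.sh comparing dotted version strings field by field to decide which DPDK compatibility patches to apply before configuring the build (23.11.0 against 21.11.0 and 24.07.0). A much-simplified standalone check in the same spirit, assuming plain numeric fields; this is a sketch, not the real cmp_versions helper:

# version_lt A B: succeed (return 0) when dotted version A sorts before B.
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i x y
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        x=${a[i]:-0}; y=${b[i]:-0}
        (( 10#$x < 10#$y )) && return 0   # force base 10 so "07" is not read as octal
        (( 10#$x > 10#$y )) && return 1
    done
    return 1                              # equal versions are not "less than"
}

# Example: DPDK 23.11.0 predates 24.07.0, so the pcapng patch path is taken.
version_lt 23.11.0 24.07.0 && echo "older"
version_lt 23.11.0 21.11.0 || echo "not older"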
00:01:16.382 Compiler for C supports arguments -march=native: YES 00:01:16.382 Checking for size of "void *" : 8 00:01:16.382 Checking for size of "void *" : 8 (cached) 00:01:16.382 Library m found: YES 00:01:16.382 Library numa found: YES 00:01:16.382 Has header "numaif.h" : YES 00:01:16.382 Library fdt found: NO 00:01:16.382 Library execinfo found: NO 00:01:16.382 Has header "execinfo.h" : YES 00:01:16.382 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:16.382 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:16.382 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:16.382 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:16.382 Run-time dependency openssl found: YES 3.1.1 00:01:16.382 Run-time dependency libpcap found: YES 1.10.4 00:01:16.382 Has header "pcap.h" with dependency libpcap: YES 00:01:16.382 Compiler for C supports arguments -Wcast-qual: YES 00:01:16.382 Compiler for C supports arguments -Wdeprecated: YES 00:01:16.382 Compiler for C supports arguments -Wformat: YES 00:01:16.382 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:16.382 Compiler for C supports arguments -Wformat-security: NO 00:01:16.382 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:16.382 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:16.382 Compiler for C supports arguments -Wnested-externs: YES 00:01:16.382 Compiler for C supports arguments -Wold-style-definition: YES 00:01:16.382 Compiler for C supports arguments -Wpointer-arith: YES 00:01:16.382 Compiler for C supports arguments -Wsign-compare: YES 00:01:16.382 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:16.382 Compiler for C supports arguments -Wundef: YES 00:01:16.382 Compiler for C supports arguments -Wwrite-strings: YES 00:01:16.382 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:16.382 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:16.382 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:16.382 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:16.382 Program objdump found: YES (/usr/bin/objdump) 00:01:16.382 Compiler for C supports arguments -mavx512f: YES 00:01:16.382 Checking if "AVX512 checking" compiles: YES 00:01:16.382 Fetching value of define "__SSE4_2__" : 1 00:01:16.382 Fetching value of define "__AES__" : 1 00:01:16.382 Fetching value of define "__AVX__" : 1 00:01:16.382 Fetching value of define "__AVX2__" : (undefined) 00:01:16.382 Fetching value of define "__AVX512BW__" : (undefined) 00:01:16.382 Fetching value of define "__AVX512CD__" : (undefined) 00:01:16.382 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:16.382 Fetching value of define "__AVX512F__" : (undefined) 00:01:16.382 Fetching value of define "__AVX512VL__" : (undefined) 00:01:16.382 Fetching value of define "__PCLMUL__" : 1 00:01:16.382 Fetching value of define "__RDRND__" : 1 00:01:16.382 Fetching value of define "__RDSEED__" : (undefined) 00:01:16.382 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:16.382 Fetching value of define "__znver1__" : (undefined) 00:01:16.382 Fetching value of define "__znver2__" : (undefined) 00:01:16.382 Fetching value of define "__znver3__" : (undefined) 00:01:16.382 Fetching value of define "__znver4__" : (undefined) 00:01:16.382 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:16.382 Message: lib/log: Defining dependency "log" 00:01:16.382 Message: lib/kvargs: Defining dependency 
"kvargs" 00:01:16.382 Message: lib/telemetry: Defining dependency "telemetry" 00:01:16.382 Checking for function "getentropy" : NO 00:01:16.382 Message: lib/eal: Defining dependency "eal" 00:01:16.382 Message: lib/ring: Defining dependency "ring" 00:01:16.382 Message: lib/rcu: Defining dependency "rcu" 00:01:16.382 Message: lib/mempool: Defining dependency "mempool" 00:01:16.382 Message: lib/mbuf: Defining dependency "mbuf" 00:01:16.382 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:16.382 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:16.382 Compiler for C supports arguments -mpclmul: YES 00:01:16.382 Compiler for C supports arguments -maes: YES 00:01:16.382 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:16.382 Compiler for C supports arguments -mavx512bw: YES 00:01:16.382 Compiler for C supports arguments -mavx512dq: YES 00:01:16.382 Compiler for C supports arguments -mavx512vl: YES 00:01:16.382 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:16.382 Compiler for C supports arguments -mavx2: YES 00:01:16.382 Compiler for C supports arguments -mavx: YES 00:01:16.382 Message: lib/net: Defining dependency "net" 00:01:16.382 Message: lib/meter: Defining dependency "meter" 00:01:16.382 Message: lib/ethdev: Defining dependency "ethdev" 00:01:16.382 Message: lib/pci: Defining dependency "pci" 00:01:16.382 Message: lib/cmdline: Defining dependency "cmdline" 00:01:16.382 Message: lib/metrics: Defining dependency "metrics" 00:01:16.382 Message: lib/hash: Defining dependency "hash" 00:01:16.382 Message: lib/timer: Defining dependency "timer" 00:01:16.382 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:16.382 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:01:16.382 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:01:16.382 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:01:16.382 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:01:16.382 Message: lib/acl: Defining dependency "acl" 00:01:16.382 Message: lib/bbdev: Defining dependency "bbdev" 00:01:16.382 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:16.382 Run-time dependency libelf found: YES 0.191 00:01:16.382 Message: lib/bpf: Defining dependency "bpf" 00:01:16.382 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:16.382 Message: lib/compressdev: Defining dependency "compressdev" 00:01:16.382 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:16.382 Message: lib/distributor: Defining dependency "distributor" 00:01:16.382 Message: lib/dmadev: Defining dependency "dmadev" 00:01:16.382 Message: lib/efd: Defining dependency "efd" 00:01:16.382 Message: lib/eventdev: Defining dependency "eventdev" 00:01:16.382 Message: lib/dispatcher: Defining dependency "dispatcher" 00:01:16.382 Message: lib/gpudev: Defining dependency "gpudev" 00:01:16.382 Message: lib/gro: Defining dependency "gro" 00:01:16.382 Message: lib/gso: Defining dependency "gso" 00:01:16.382 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:16.382 Message: lib/jobstats: Defining dependency "jobstats" 00:01:16.382 Message: lib/latencystats: Defining dependency "latencystats" 00:01:16.382 Message: lib/lpm: Defining dependency "lpm" 00:01:16.382 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:16.382 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:16.382 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:16.382 Compiler for C 
supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:16.382 Message: lib/member: Defining dependency "member" 00:01:16.382 Message: lib/pcapng: Defining dependency "pcapng" 00:01:16.382 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:16.382 Message: lib/power: Defining dependency "power" 00:01:16.382 Message: lib/rawdev: Defining dependency "rawdev" 00:01:16.382 Message: lib/regexdev: Defining dependency "regexdev" 00:01:16.382 Message: lib/mldev: Defining dependency "mldev" 00:01:16.382 Message: lib/rib: Defining dependency "rib" 00:01:16.382 Message: lib/reorder: Defining dependency "reorder" 00:01:16.382 Message: lib/sched: Defining dependency "sched" 00:01:16.382 Message: lib/security: Defining dependency "security" 00:01:16.382 Message: lib/stack: Defining dependency "stack" 00:01:16.382 Has header "linux/userfaultfd.h" : YES 00:01:16.382 Has header "linux/vduse.h" : YES 00:01:16.382 Message: lib/vhost: Defining dependency "vhost" 00:01:16.382 Message: lib/ipsec: Defining dependency "ipsec" 00:01:16.382 Message: lib/pdcp: Defining dependency "pdcp" 00:01:16.382 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:16.382 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:16.382 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:01:16.382 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:16.382 Message: lib/fib: Defining dependency "fib" 00:01:16.382 Message: lib/port: Defining dependency "port" 00:01:16.382 Message: lib/pdump: Defining dependency "pdump" 00:01:16.382 Message: lib/table: Defining dependency "table" 00:01:16.382 Message: lib/pipeline: Defining dependency "pipeline" 00:01:16.382 Message: lib/graph: Defining dependency "graph" 00:01:16.382 Message: lib/node: Defining dependency "node" 00:01:18.300 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:18.300 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:18.300 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:18.300 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:18.300 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:18.300 Compiler for C supports arguments -Wno-unused-value: YES 00:01:18.300 Compiler for C supports arguments -Wno-format: YES 00:01:18.300 Compiler for C supports arguments -Wno-format-security: YES 00:01:18.300 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:18.300 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:18.300 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:18.300 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:18.300 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:18.300 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:18.300 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:18.300 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:18.300 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:18.300 Has header "sys/epoll.h" : YES 00:01:18.300 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:18.300 Configuring doxy-api-html.conf using configuration 00:01:18.300 Configuring doxy-api-man.conf using configuration 00:01:18.300 Program mandb found: YES (/usr/bin/mandb) 00:01:18.300 Program sphinx-build found: NO 00:01:18.300 Configuring rte_build_config.h using configuration 00:01:18.300 Message: 00:01:18.300 ================= 00:01:18.300 Applications Enabled 
00:01:18.300 ================= 00:01:18.300 00:01:18.300 apps: 00:01:18.300 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:01:18.300 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:01:18.300 test-pmd, test-regex, test-sad, test-security-perf, 00:01:18.300 00:01:18.300 Message: 00:01:18.300 ================= 00:01:18.300 Libraries Enabled 00:01:18.300 ================= 00:01:18.300 00:01:18.300 libs: 00:01:18.300 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:18.300 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:01:18.300 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:01:18.300 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:01:18.300 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:01:18.300 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:01:18.300 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:01:18.300 00:01:18.300 00:01:18.300 Message: 00:01:18.300 =============== 00:01:18.300 Drivers Enabled 00:01:18.300 =============== 00:01:18.300 00:01:18.300 common: 00:01:18.300 00:01:18.300 bus: 00:01:18.300 pci, vdev, 00:01:18.300 mempool: 00:01:18.300 ring, 00:01:18.300 dma: 00:01:18.300 00:01:18.300 net: 00:01:18.300 i40e, 00:01:18.300 raw: 00:01:18.300 00:01:18.300 crypto: 00:01:18.300 00:01:18.300 compress: 00:01:18.300 00:01:18.300 regex: 00:01:18.300 00:01:18.300 ml: 00:01:18.300 00:01:18.300 vdpa: 00:01:18.300 00:01:18.300 event: 00:01:18.300 00:01:18.300 baseband: 00:01:18.300 00:01:18.300 gpu: 00:01:18.300 00:01:18.300 00:01:18.300 Message: 00:01:18.300 ================= 00:01:18.300 Content Skipped 00:01:18.300 ================= 00:01:18.300 00:01:18.300 apps: 00:01:18.300 00:01:18.300 libs: 00:01:18.300 00:01:18.300 drivers: 00:01:18.300 common/cpt: not in enabled drivers build config 00:01:18.300 common/dpaax: not in enabled drivers build config 00:01:18.300 common/iavf: not in enabled drivers build config 00:01:18.300 common/idpf: not in enabled drivers build config 00:01:18.300 common/mvep: not in enabled drivers build config 00:01:18.300 common/octeontx: not in enabled drivers build config 00:01:18.300 bus/auxiliary: not in enabled drivers build config 00:01:18.300 bus/cdx: not in enabled drivers build config 00:01:18.300 bus/dpaa: not in enabled drivers build config 00:01:18.300 bus/fslmc: not in enabled drivers build config 00:01:18.300 bus/ifpga: not in enabled drivers build config 00:01:18.300 bus/platform: not in enabled drivers build config 00:01:18.300 bus/vmbus: not in enabled drivers build config 00:01:18.300 common/cnxk: not in enabled drivers build config 00:01:18.300 common/mlx5: not in enabled drivers build config 00:01:18.300 common/nfp: not in enabled drivers build config 00:01:18.300 common/qat: not in enabled drivers build config 00:01:18.300 common/sfc_efx: not in enabled drivers build config 00:01:18.300 mempool/bucket: not in enabled drivers build config 00:01:18.300 mempool/cnxk: not in enabled drivers build config 00:01:18.300 mempool/dpaa: not in enabled drivers build config 00:01:18.300 mempool/dpaa2: not in enabled drivers build config 00:01:18.300 mempool/octeontx: not in enabled drivers build config 00:01:18.300 mempool/stack: not in enabled drivers build config 00:01:18.300 dma/cnxk: not in enabled drivers build config 00:01:18.300 dma/dpaa: not in enabled drivers build config 00:01:18.301 dma/dpaa2: not in enabled 
drivers build config 00:01:18.301 dma/hisilicon: not in enabled drivers build config 00:01:18.301 dma/idxd: not in enabled drivers build config 00:01:18.301 dma/ioat: not in enabled drivers build config 00:01:18.301 dma/skeleton: not in enabled drivers build config 00:01:18.301 net/af_packet: not in enabled drivers build config 00:01:18.301 net/af_xdp: not in enabled drivers build config 00:01:18.301 net/ark: not in enabled drivers build config 00:01:18.301 net/atlantic: not in enabled drivers build config 00:01:18.301 net/avp: not in enabled drivers build config 00:01:18.301 net/axgbe: not in enabled drivers build config 00:01:18.301 net/bnx2x: not in enabled drivers build config 00:01:18.301 net/bnxt: not in enabled drivers build config 00:01:18.301 net/bonding: not in enabled drivers build config 00:01:18.301 net/cnxk: not in enabled drivers build config 00:01:18.301 net/cpfl: not in enabled drivers build config 00:01:18.301 net/cxgbe: not in enabled drivers build config 00:01:18.301 net/dpaa: not in enabled drivers build config 00:01:18.301 net/dpaa2: not in enabled drivers build config 00:01:18.301 net/e1000: not in enabled drivers build config 00:01:18.301 net/ena: not in enabled drivers build config 00:01:18.301 net/enetc: not in enabled drivers build config 00:01:18.301 net/enetfec: not in enabled drivers build config 00:01:18.301 net/enic: not in enabled drivers build config 00:01:18.301 net/failsafe: not in enabled drivers build config 00:01:18.301 net/fm10k: not in enabled drivers build config 00:01:18.301 net/gve: not in enabled drivers build config 00:01:18.301 net/hinic: not in enabled drivers build config 00:01:18.301 net/hns3: not in enabled drivers build config 00:01:18.301 net/iavf: not in enabled drivers build config 00:01:18.301 net/ice: not in enabled drivers build config 00:01:18.301 net/idpf: not in enabled drivers build config 00:01:18.301 net/igc: not in enabled drivers build config 00:01:18.301 net/ionic: not in enabled drivers build config 00:01:18.301 net/ipn3ke: not in enabled drivers build config 00:01:18.301 net/ixgbe: not in enabled drivers build config 00:01:18.301 net/mana: not in enabled drivers build config 00:01:18.301 net/memif: not in enabled drivers build config 00:01:18.301 net/mlx4: not in enabled drivers build config 00:01:18.301 net/mlx5: not in enabled drivers build config 00:01:18.301 net/mvneta: not in enabled drivers build config 00:01:18.301 net/mvpp2: not in enabled drivers build config 00:01:18.301 net/netvsc: not in enabled drivers build config 00:01:18.301 net/nfb: not in enabled drivers build config 00:01:18.301 net/nfp: not in enabled drivers build config 00:01:18.301 net/ngbe: not in enabled drivers build config 00:01:18.301 net/null: not in enabled drivers build config 00:01:18.301 net/octeontx: not in enabled drivers build config 00:01:18.301 net/octeon_ep: not in enabled drivers build config 00:01:18.301 net/pcap: not in enabled drivers build config 00:01:18.301 net/pfe: not in enabled drivers build config 00:01:18.301 net/qede: not in enabled drivers build config 00:01:18.301 net/ring: not in enabled drivers build config 00:01:18.301 net/sfc: not in enabled drivers build config 00:01:18.301 net/softnic: not in enabled drivers build config 00:01:18.301 net/tap: not in enabled drivers build config 00:01:18.301 net/thunderx: not in enabled drivers build config 00:01:18.301 net/txgbe: not in enabled drivers build config 00:01:18.301 net/vdev_netvsc: not in enabled drivers build config 00:01:18.301 net/vhost: not in enabled drivers 
build config 00:01:18.301 net/virtio: not in enabled drivers build config 00:01:18.301 net/vmxnet3: not in enabled drivers build config 00:01:18.301 raw/cnxk_bphy: not in enabled drivers build config 00:01:18.301 raw/cnxk_gpio: not in enabled drivers build config 00:01:18.301 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:18.301 raw/ifpga: not in enabled drivers build config 00:01:18.301 raw/ntb: not in enabled drivers build config 00:01:18.301 raw/skeleton: not in enabled drivers build config 00:01:18.301 crypto/armv8: not in enabled drivers build config 00:01:18.301 crypto/bcmfs: not in enabled drivers build config 00:01:18.301 crypto/caam_jr: not in enabled drivers build config 00:01:18.301 crypto/ccp: not in enabled drivers build config 00:01:18.301 crypto/cnxk: not in enabled drivers build config 00:01:18.301 crypto/dpaa_sec: not in enabled drivers build config 00:01:18.301 crypto/dpaa2_sec: not in enabled drivers build config 00:01:18.301 crypto/ipsec_mb: not in enabled drivers build config 00:01:18.301 crypto/mlx5: not in enabled drivers build config 00:01:18.301 crypto/mvsam: not in enabled drivers build config 00:01:18.301 crypto/nitrox: not in enabled drivers build config 00:01:18.301 crypto/null: not in enabled drivers build config 00:01:18.301 crypto/octeontx: not in enabled drivers build config 00:01:18.301 crypto/openssl: not in enabled drivers build config 00:01:18.301 crypto/scheduler: not in enabled drivers build config 00:01:18.301 crypto/uadk: not in enabled drivers build config 00:01:18.301 crypto/virtio: not in enabled drivers build config 00:01:18.301 compress/isal: not in enabled drivers build config 00:01:18.301 compress/mlx5: not in enabled drivers build config 00:01:18.301 compress/octeontx: not in enabled drivers build config 00:01:18.301 compress/zlib: not in enabled drivers build config 00:01:18.301 regex/mlx5: not in enabled drivers build config 00:01:18.301 regex/cn9k: not in enabled drivers build config 00:01:18.301 ml/cnxk: not in enabled drivers build config 00:01:18.301 vdpa/ifc: not in enabled drivers build config 00:01:18.301 vdpa/mlx5: not in enabled drivers build config 00:01:18.301 vdpa/nfp: not in enabled drivers build config 00:01:18.301 vdpa/sfc: not in enabled drivers build config 00:01:18.301 event/cnxk: not in enabled drivers build config 00:01:18.301 event/dlb2: not in enabled drivers build config 00:01:18.301 event/dpaa: not in enabled drivers build config 00:01:18.301 event/dpaa2: not in enabled drivers build config 00:01:18.301 event/dsw: not in enabled drivers build config 00:01:18.301 event/opdl: not in enabled drivers build config 00:01:18.301 event/skeleton: not in enabled drivers build config 00:01:18.301 event/sw: not in enabled drivers build config 00:01:18.301 event/octeontx: not in enabled drivers build config 00:01:18.301 baseband/acc: not in enabled drivers build config 00:01:18.301 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:18.301 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:18.301 baseband/la12xx: not in enabled drivers build config 00:01:18.301 baseband/null: not in enabled drivers build config 00:01:18.301 baseband/turbo_sw: not in enabled drivers build config 00:01:18.301 gpu/cuda: not in enabled drivers build config 00:01:18.301 00:01:18.301 00:01:18.301 Build targets in project: 220 00:01:18.301 00:01:18.301 DPDK 23.11.0 00:01:18.301 00:01:18.301 User defined options 00:01:18.301 libdir : lib 00:01:18.301 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 
00:01:18.301 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:18.301 c_link_args : 00:01:18.301 enable_docs : false 00:01:18.301 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:18.301 enable_kmods : false 00:01:18.301 machine : native 00:01:18.301 tests : false 00:01:18.301 00:01:18.301 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:18.301 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:01:18.301 11:13:18 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 00:01:18.301 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:18.562 [1/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:18.562 [2/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:18.562 [3/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:18.562 [4/710] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:18.562 [5/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:18.562 [6/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:18.562 [7/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:18.562 [8/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:18.562 [9/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:18.562 [10/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:18.562 [11/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:18.562 [12/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:18.562 [13/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:18.562 [14/710] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:18.562 [15/710] Linking static target lib/librte_kvargs.a 00:01:18.562 [16/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:18.562 [17/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:18.822 [18/710] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:18.822 [19/710] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:18.822 [20/710] Linking static target lib/librte_log.a 00:01:18.822 [21/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:19.086 [22/710] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.662 [23/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:19.662 [24/710] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.662 [25/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:19.662 [26/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:19.662 [27/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:19.662 [28/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:19.662 [29/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:19.662 [30/710] Linking target lib/librte_log.so.24.0 00:01:19.662 [31/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:19.662 [32/710] Compiling C object 
lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:19.662 [33/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:19.662 [34/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:19.662 [35/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:19.662 [36/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:19.662 [37/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:19.662 [38/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:19.662 [39/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:19.662 [40/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:19.662 [41/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:19.662 [42/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:19.662 [43/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:19.662 [44/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:19.662 [45/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:19.662 [46/710] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:19.662 [47/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:19.662 [48/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:19.924 [49/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:19.924 [50/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:19.924 [51/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:19.924 [52/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:19.924 [53/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:19.924 [54/710] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:19.924 [55/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:19.924 [56/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:19.924 [57/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:19.924 [58/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:19.924 [59/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:19.924 [60/710] Linking target lib/librte_kvargs.so.24.0 00:01:19.924 [61/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:19.924 [62/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:19.924 [63/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:19.924 [64/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:20.183 [65/710] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:20.183 [66/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:20.183 [67/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:20.183 [68/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:20.183 [69/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:20.183 [70/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:20.446 [71/710] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:20.446 
[72/710] Linking static target lib/librte_pci.a 00:01:20.446 [73/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:20.446 [74/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:20.446 [75/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:20.446 [76/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:20.709 [77/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:20.709 [78/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:20.709 [79/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:20.709 [80/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:20.710 [81/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:20.710 [82/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:20.710 [83/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:20.710 [84/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:20.710 [85/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:20.710 [86/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:20.710 [87/710] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.710 [88/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:20.972 [89/710] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:20.972 [90/710] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:20.972 [91/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:20.972 [92/710] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:20.972 [93/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:20.972 [94/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:20.972 [95/710] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:20.972 [96/710] Linking static target lib/librte_ring.a 00:01:20.972 [97/710] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:20.972 [98/710] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:20.972 [99/710] Linking static target lib/librte_meter.a 00:01:20.972 [100/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:20.972 [101/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:20.972 [102/710] Linking static target lib/librte_telemetry.a 00:01:20.972 [103/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:20.972 [104/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:20.972 [105/710] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:20.972 [106/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:21.235 [107/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:21.235 [108/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:21.235 [109/710] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:21.235 [110/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:21.235 [111/710] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:21.235 [112/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:21.235 [113/710] 
Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:21.235 [114/710] Linking static target lib/librte_eal.a 00:01:21.507 [115/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:21.507 [116/710] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.507 [117/710] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.507 [118/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:21.507 [119/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:21.507 [120/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:21.507 [121/710] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:21.507 [122/710] Linking static target lib/librte_net.a 00:01:21.507 [123/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:21.507 [124/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:21.507 [125/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:21.796 [126/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:21.796 [127/710] Linking static target lib/librte_cmdline.a 00:01:21.796 [128/710] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.796 [129/710] Linking target lib/librte_telemetry.so.24.0 00:01:21.796 [130/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:21.796 [131/710] Linking static target lib/librte_mempool.a 00:01:21.796 [132/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:22.065 [133/710] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.065 [134/710] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:22.065 [135/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:22.065 [136/710] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:22.065 [137/710] Linking static target lib/librte_cfgfile.a 00:01:22.065 [138/710] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:22.065 [139/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:22.065 [140/710] Linking static target lib/librte_metrics.a 00:01:22.065 [141/710] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:22.065 [142/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:22.065 [143/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:22.329 [144/710] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:22.329 [145/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:22.329 [146/710] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:22.329 [147/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:22.329 [148/710] Linking static target lib/librte_rcu.a 00:01:22.329 [149/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:22.329 [150/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:22.329 [151/710] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:22.329 [152/710] Linking static target lib/librte_bitratestats.a 00:01:22.593 [153/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:22.593 [154/710] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 
00:01:22.593 [155/710] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:22.593 [156/710] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:22.593 [157/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:22.593 [158/710] Linking static target lib/librte_timer.a 00:01:22.593 [159/710] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.593 [160/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:22.855 [161/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:22.855 [162/710] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.855 [163/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:22.855 [164/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:22.855 [165/710] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.855 [166/710] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.855 [167/710] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:23.120 [168/710] Linking static target lib/librte_bbdev.a 00:01:23.120 [169/710] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.120 [170/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:23.120 [171/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:23.120 [172/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:23.120 [173/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:23.120 [174/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:23.120 [175/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:23.120 [176/710] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.382 [177/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:23.382 [178/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:23.382 [179/710] Linking static target lib/librte_compressdev.a 00:01:23.382 [180/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:23.382 [181/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:23.643 [182/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:23.643 [183/710] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:23.643 [184/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:23.643 [185/710] Linking static target lib/librte_distributor.a 00:01:23.643 [186/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:23.906 [187/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:23.906 [188/710] Linking static target lib/librte_bpf.a 00:01:23.906 [189/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:23.906 [190/710] Linking static target lib/librte_dmadev.a 00:01:23.906 [191/710] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.167 [192/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:24.167 [193/710] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:24.167 
[194/710] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:24.167 [195/710] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.167 [196/710] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.167 [197/710] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:01:24.167 [198/710] Linking static target lib/librte_dispatcher.a 00:01:24.167 [199/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:01:24.167 [200/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:24.429 [201/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:24.429 [202/710] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:24.429 [203/710] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:24.429 [204/710] Linking static target lib/librte_gpudev.a 00:01:24.429 [205/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:24.429 [206/710] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:24.429 [207/710] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.429 [208/710] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:24.429 [209/710] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:24.429 [210/710] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:24.429 [211/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:24.429 [212/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:24.429 [213/710] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:24.693 [214/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:24.693 [215/710] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.693 [216/710] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:24.693 [217/710] Linking static target lib/librte_gro.a 00:01:24.693 [218/710] Linking static target lib/librte_jobstats.a 00:01:24.693 [219/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:24.959 [220/710] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:24.959 [221/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:01:24.959 [222/710] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.959 [223/710] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.959 [224/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:25.222 [225/710] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.222 [226/710] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:25.222 [227/710] Linking static target lib/librte_latencystats.a 00:01:25.222 [228/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:25.222 [229/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:25.222 [230/710] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:01:25.222 [231/710] Linking static target lib/member/libsketch_avx512_tmp.a 00:01:25.222 [232/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:25.222 [233/710] Compiling C object 
lib/librte_member.a.p/member_rte_member.c.o 00:01:25.222 [234/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:25.484 [235/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:25.484 [236/710] Linking static target lib/librte_ip_frag.a 00:01:25.484 [237/710] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:25.484 [238/710] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.484 [239/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:25.754 [240/710] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:25.754 [241/710] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:25.754 [242/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:01:25.754 [243/710] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.754 [244/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:01:25.754 [245/710] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:25.754 [246/710] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:25.754 [247/710] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.754 [248/710] Linking static target lib/librte_gso.a 00:01:26.017 [249/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:01:26.017 [250/710] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:26.017 [251/710] Linking static target lib/librte_regexdev.a 00:01:26.017 [252/710] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:26.277 [253/710] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:26.277 [254/710] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.277 [255/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:26.277 [256/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:01:26.277 [257/710] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:26.277 [258/710] Linking static target lib/librte_rawdev.a 00:01:26.277 [259/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:01:26.277 [260/710] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:26.277 [261/710] Linking static target lib/librte_efd.a 00:01:26.277 [262/710] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:26.277 [263/710] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:26.277 [264/710] Linking static target lib/librte_pcapng.a 00:01:26.277 [265/710] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:26.542 [266/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:26.542 [267/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:01:26.542 [268/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:26.542 [269/710] Linking static target lib/librte_mldev.a 00:01:26.542 [270/710] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:01:26.542 [271/710] Linking static target lib/acl/libavx2_tmp.a 00:01:26.542 [272/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:26.542 [273/710] Linking static target lib/librte_stack.a 00:01:26.542 [274/710] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:26.542 
[275/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:26.542 [276/710] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.803 [277/710] Linking static target lib/librte_lpm.a 00:01:26.804 [278/710] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:26.804 [279/710] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:26.804 [280/710] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:26.804 [281/710] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:26.804 [282/710] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.804 [283/710] Linking static target lib/librte_hash.a 00:01:26.804 [284/710] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:27.069 [285/710] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.069 [286/710] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.069 [287/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:27.069 [288/710] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:01:27.069 [289/710] Linking static target lib/acl/libavx512_tmp.a 00:01:27.069 [290/710] Linking static target lib/librte_acl.a 00:01:27.069 [291/710] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:27.069 [292/710] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:27.069 [293/710] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:27.069 [294/710] Linking static target lib/librte_reorder.a 00:01:27.069 [295/710] Linking static target lib/librte_power.a 00:01:27.332 [296/710] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.332 [297/710] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:27.332 [298/710] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.332 [299/710] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:27.332 [300/710] Linking static target lib/librte_security.a 00:01:27.332 [301/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:27.601 [302/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:27.601 [303/710] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.601 [304/710] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:27.601 [305/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:27.601 [306/710] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:27.601 [307/710] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.601 [308/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:01:27.861 [309/710] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.861 [310/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:27.861 [311/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:01:27.861 [312/710] Linking static target lib/librte_rib.a 00:01:27.861 [313/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:01:27.861 [314/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:01:27.861 [315/710] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:27.861 
[316/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:27.861 [317/710] Linking static target lib/librte_mbuf.a 00:01:28.123 [318/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:01:28.123 [319/710] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:01:28.123 [320/710] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.123 [321/710] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:01:28.123 [322/710] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:01:28.123 [323/710] Linking static target lib/fib/libtrie_avx512_tmp.a 00:01:28.123 [324/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:28.123 [325/710] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:28.123 [326/710] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.389 [327/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:28.389 [328/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:01:28.389 [329/710] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.650 [330/710] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.650 [331/710] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:28.650 [332/710] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:01:28.650 [333/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:28.650 [334/710] Linking static target lib/librte_eventdev.a 00:01:28.914 [335/710] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.914 [336/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:28.914 [337/710] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:01:28.914 [338/710] Linking static target lib/librte_member.a 00:01:28.914 [339/710] Linking static target lib/librte_cryptodev.a 00:01:28.914 [340/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:01:28.914 [341/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:29.178 [342/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:01:29.178 [343/710] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:01:29.178 [344/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:01:29.178 [345/710] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:01:29.178 [346/710] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:01:29.178 [347/710] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:01:29.442 [348/710] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:01:29.442 [349/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:29.442 [350/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:01:29.442 [351/710] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:01:29.442 [352/710] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:01:29.442 [353/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:29.442 [354/710] Linking static target lib/librte_ethdev.a 00:01:29.442 [355/710] Linking static target lib/librte_sched.a 00:01:29.442 [356/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:01:29.442 
[357/710] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:01:29.442 [358/710] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:01:29.442 [359/710] Linking static target lib/librte_fib.a 00:01:29.442 [360/710] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:01:29.700 [361/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:01:29.700 [362/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:01:29.700 [363/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:01:29.700 [364/710] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:29.700 [365/710] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:01:29.700 [366/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:29.961 [367/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:29.961 [368/710] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:01:29.961 [369/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:01:29.961 [370/710] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:01:29.961 [371/710] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.220 [372/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:01:30.220 [373/710] Compiling C object lib/librte_node.a.p/node_null.c.o 00:01:30.220 [374/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:01:30.220 [375/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:01:30.482 [376/710] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:01:30.482 [377/710] Linking static target lib/librte_pdump.a 00:01:30.482 [378/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:30.482 [379/710] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:01:30.482 [380/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:01:30.482 [381/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:01:30.482 [382/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:01:30.482 [383/710] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:01:30.745 [384/710] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:01:30.745 [385/710] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:01:30.745 [386/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:30.745 [387/710] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:01:30.745 [388/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:01:30.745 [389/710] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:01:30.745 [390/710] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.010 [391/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:01:31.010 [392/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:01:31.010 [393/710] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:01:31.010 [394/710] Linking static target lib/librte_table.a 00:01:31.010 [395/710] Linking static target lib/librte_ipsec.a 00:01:31.010 [396/710] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.010 [397/710] Compiling C object 
lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:01:31.273 [398/710] Compiling C object lib/librte_node.a.p/node_log.c.o 00:01:31.273 [399/710] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:01:31.535 [400/710] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:01:31.535 [401/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:01:31.535 [402/710] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.799 [403/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:31.800 [404/710] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:01:31.800 [405/710] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:01:31.800 [406/710] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:01:32.064 [407/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:32.064 [408/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:32.064 [409/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:32.064 [410/710] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:01:32.064 [411/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:01:32.064 [412/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:32.064 [413/710] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:32.064 [414/710] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.064 [415/710] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.329 [416/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:01:32.329 [417/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:01:32.329 [418/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:32.329 [419/710] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.329 [420/710] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:32.329 [421/710] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:32.590 [422/710] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:32.590 [423/710] Linking static target drivers/librte_bus_vdev.a 00:01:32.590 [424/710] Linking target lib/librte_eal.so.24.0 00:01:32.590 [425/710] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:01:32.590 [426/710] Linking static target lib/librte_port.a 00:01:32.590 [427/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:01:32.590 [428/710] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:32.857 [429/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:01:32.857 [430/710] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:32.857 [431/710] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:32.857 [432/710] Linking target lib/librte_ring.so.24.0 00:01:32.857 [433/710] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.857 [434/710] Linking target lib/librte_meter.so.24.0 00:01:32.857 [435/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:01:32.857 [436/710] Linking target lib/librte_pci.so.24.0 
00:01:33.122 [437/710] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:01:33.122 [438/710] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:01:33.122 [439/710] Linking target lib/librte_timer.so.24.0 00:01:33.122 [440/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:01:33.122 [441/710] Linking target lib/librte_cfgfile.so.24.0 00:01:33.122 [442/710] Linking target lib/librte_acl.so.24.0 00:01:33.122 [443/710] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:33.122 [444/710] Linking target lib/librte_dmadev.so.24.0 00:01:33.122 [445/710] Linking target lib/librte_jobstats.so.24.0 00:01:33.122 [446/710] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:33.122 [447/710] Linking target lib/librte_mempool.so.24.0 00:01:33.122 [448/710] Linking target lib/librte_rcu.so.24.0 00:01:33.122 [449/710] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:33.381 [450/710] Linking target lib/librte_rawdev.so.24.0 00:01:33.381 [451/710] Linking target lib/librte_stack.so.24.0 00:01:33.381 [452/710] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:01:33.381 [453/710] Linking static target lib/librte_graph.a 00:01:33.381 [454/710] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:33.381 [455/710] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:33.381 [456/710] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:33.381 [457/710] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:01:33.381 [458/710] Linking static target drivers/librte_bus_pci.a 00:01:33.381 [459/710] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:33.381 [460/710] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:33.381 [461/710] Linking target drivers/librte_bus_vdev.so.24.0 00:01:33.381 [462/710] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.381 [463/710] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:01:33.381 [464/710] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:33.381 [465/710] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:01:33.381 [466/710] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:33.381 [467/710] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:33.381 [468/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:01:33.644 [469/710] Linking target lib/librte_rib.so.24.0 00:01:33.645 [470/710] Linking target lib/librte_mbuf.so.24.0 00:01:33.645 [471/710] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:01:33.645 [472/710] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:01:33.645 [473/710] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:33.645 [474/710] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:33.645 [475/710] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:01:33.645 [476/710] Linking static target drivers/librte_mempool_ring.a 00:01:33.645 [477/710] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:33.911 [478/710] Compiling C object 
drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:33.911 [479/710] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:01:33.911 [480/710] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:01:33.911 [481/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:01:33.911 [482/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:01:33.911 [483/710] Linking target lib/librte_fib.so.24.0 00:01:33.911 [484/710] Linking target lib/librte_net.so.24.0 00:01:33.911 [485/710] Linking target lib/librte_bbdev.so.24.0 00:01:33.911 [486/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:01:33.911 [487/710] Linking target lib/librte_distributor.so.24.0 00:01:33.911 [488/710] Linking target lib/librte_compressdev.so.24.0 00:01:33.911 [489/710] Linking target lib/librte_cryptodev.so.24.0 00:01:33.911 [490/710] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:01:33.911 [491/710] Linking target lib/librte_gpudev.so.24.0 00:01:33.911 [492/710] Linking target lib/librte_regexdev.so.24.0 00:01:33.911 [493/710] Linking target lib/librte_mldev.so.24.0 00:01:33.911 [494/710] Linking target lib/librte_reorder.so.24.0 00:01:33.911 [495/710] Linking target drivers/librte_mempool_ring.so.24.0 00:01:33.911 [496/710] Linking target lib/librte_sched.so.24.0 00:01:33.911 [497/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:01:34.174 [498/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:01:34.174 [499/710] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:34.174 [500/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:01:34.174 [501/710] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:01:34.174 [502/710] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:01:34.174 [503/710] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:34.174 [504/710] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.174 [505/710] Linking target lib/librte_cmdline.so.24.0 00:01:34.174 [506/710] Linking target lib/librte_hash.so.24.0 00:01:34.174 [507/710] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:01:34.174 [508/710] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:01:34.174 [509/710] Linking target lib/librte_security.so.24.0 00:01:34.174 [510/710] Linking target drivers/librte_bus_pci.so.24.0 00:01:34.174 [511/710] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.435 [512/710] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:01:34.435 [513/710] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:34.435 [514/710] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:01:34.435 [515/710] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:01:34.435 [516/710] Linking target lib/librte_efd.so.24.0 00:01:34.435 [517/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:01:34.435 [518/710] Linking target lib/librte_lpm.so.24.0 00:01:34.696 [519/710] Linking target lib/librte_member.so.24.0 00:01:34.696 [520/710] Linking target lib/librte_ipsec.so.24.0 00:01:34.696 [521/710] Compiling C object 
app/dpdk-graph.p/graph_mempool.c.o 00:01:34.696 [522/710] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:01:34.696 [523/710] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:01:34.696 [524/710] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:01:34.958 [525/710] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:01:34.958 [526/710] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:01:34.958 [527/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:01:34.958 [528/710] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:01:34.958 [529/710] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:01:34.958 [530/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:01:35.221 [531/710] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:01:35.221 [532/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:01:35.221 [533/710] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:01:35.482 [534/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:01:35.482 [535/710] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:01:35.482 [536/710] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:01:35.482 [537/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:01:35.482 [538/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:01:35.482 [539/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:01:35.748 [540/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:01:35.748 [541/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:01:36.014 [542/710] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:01:36.014 [543/710] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:01:36.014 [544/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:01:36.014 [545/710] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:01:36.014 [546/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:01:36.274 [547/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:01:36.274 [548/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:01:36.274 [549/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:01:36.274 [550/710] Linking static target drivers/net/i40e/base/libi40e_base.a 00:01:36.274 [551/710] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:01:36.274 [552/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:01:36.274 [553/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:01:36.274 [554/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:01:36.536 [555/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:01:36.536 [556/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:01:36.536 [557/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:01:36.536 [558/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:01:36.797 [559/710] Compiling C object 
app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:01:37.057 [560/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:01:37.057 [561/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:01:37.326 [562/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:01:37.326 [563/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:01:37.584 [564/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:01:37.584 [565/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:01:37.584 [566/710] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:01:37.584 [567/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:01:37.584 [568/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:01:37.584 [569/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:01:37.846 [570/710] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.846 [571/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:01:37.846 [572/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:01:37.846 [573/710] Linking target lib/librte_ethdev.so.24.0 00:01:37.846 [574/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:01:37.846 [575/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:01:38.109 [576/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:01:38.109 [577/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:01:38.109 [578/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:01:38.109 [579/710] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:01:38.109 [580/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:01:38.109 [581/710] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:38.109 [582/710] Linking target lib/librte_metrics.so.24.0 00:01:38.109 [583/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:01:38.369 [584/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:01:38.369 [585/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:01:38.369 [586/710] Linking target lib/librte_bpf.so.24.0 00:01:38.370 [587/710] Linking target lib/librte_gro.so.24.0 00:01:38.370 [588/710] Linking target lib/librte_eventdev.so.24.0 00:01:38.370 [589/710] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:01:38.370 [590/710] Linking target lib/librte_gso.so.24.0 00:01:38.370 [591/710] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:01:38.370 [592/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:01:38.370 [593/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:01:38.631 [594/710] Linking target lib/librte_bitratestats.so.24.0 00:01:38.631 [595/710] Linking target lib/librte_ip_frag.so.24.0 00:01:38.631 [596/710] Linking static target lib/librte_pdcp.a 00:01:38.631 [597/710] Linking target lib/librte_latencystats.so.24.0 00:01:38.631 [598/710] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:01:38.631 [599/710] Linking target 
lib/librte_pcapng.so.24.0 00:01:38.631 [600/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:01:38.631 [601/710] Linking target lib/librte_power.so.24.0 00:01:38.631 [602/710] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:01:38.631 [603/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:01:38.631 [604/710] Linking target lib/librte_dispatcher.so.24.0 00:01:38.631 [605/710] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:01:38.892 [606/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:01:38.892 [607/710] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:01:38.892 [608/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:01:38.892 [609/710] Linking target lib/librte_port.so.24.0 00:01:38.892 [610/710] Linking target lib/librte_pdump.so.24.0 00:01:38.892 [611/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:01:38.892 [612/710] Linking target lib/librte_graph.so.24.0 00:01:38.892 [613/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:01:38.892 [614/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:01:39.154 [615/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:01:39.154 [616/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:01:39.154 [617/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:01:39.154 [618/710] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:01:39.154 [619/710] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.154 [620/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:01:39.154 [621/710] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:01:39.154 [622/710] Linking target lib/librte_table.so.24.0 00:01:39.415 [623/710] Linking target lib/librte_pdcp.so.24.0 00:01:39.415 [624/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:01:39.415 [625/710] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:01:39.415 [626/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:01:39.415 [627/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:01:39.415 [628/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:01:39.415 [629/710] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:01:39.415 [630/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:01:39.984 [631/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:01:39.984 [632/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:01:39.984 [633/710] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:01:39.984 [634/710] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:01:40.243 [635/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:01:40.243 [636/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:01:40.243 [637/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:01:40.243 [638/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 
00:01:40.243 [639/710] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:01:40.243 [640/710] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:01:40.243 [641/710] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:01:40.502 [642/710] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:01:40.502 [643/710] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:01:40.503 [644/710] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:01:40.503 [645/710] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:01:40.761 [646/710] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:01:40.761 [647/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:01:40.761 [648/710] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:01:40.761 [649/710] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:01:40.761 [650/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:01:41.020 [651/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:01:41.020 [652/710] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:01:41.279 [653/710] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:01:41.279 [654/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:01:41.279 [655/710] Linking static target drivers/libtmp_rte_net_i40e.a 00:01:41.279 [656/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:01:41.538 [657/710] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:01:41.538 [658/710] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:01:41.538 [659/710] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:01:41.797 [660/710] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:41.797 [661/710] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:41.797 [662/710] Linking static target drivers/librte_net_i40e.a 00:01:41.797 [663/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:01:41.797 [664/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:01:42.054 [665/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:01:42.054 [666/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:01:42.054 [667/710] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:01:42.313 [668/710] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.313 [669/710] Linking target drivers/librte_net_i40e.so.24.0 00:01:42.313 [670/710] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:01:42.878 [671/710] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:01:43.136 [672/710] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:01:43.136 [673/710] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:01:43.136 [674/710] Linking static target lib/librte_node.a 00:01:43.394 [675/710] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.652 [676/710] Linking target lib/librte_node.so.24.0 00:01:44.218 [677/710] Compiling C object 
app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:01:44.476 [678/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:01:44.476 [679/710] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:01:46.377 [680/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:01:46.943 [681/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:01:53.497 [682/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:25.619 [683/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:25.619 [684/710] Linking static target lib/librte_vhost.a 00:02:25.619 [685/710] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.619 [686/710] Linking target lib/librte_vhost.so.24.0 00:02:40.487 [687/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:40.746 [688/710] Linking static target lib/librte_pipeline.a 00:02:41.314 [689/710] Linking target app/dpdk-proc-info 00:02:41.314 [690/710] Linking target app/dpdk-dumpcap 00:02:41.314 [691/710] Linking target app/dpdk-test-acl 00:02:41.314 [692/710] Linking target app/dpdk-pdump 00:02:41.314 [693/710] Linking target app/dpdk-test-cmdline 00:02:41.314 [694/710] Linking target app/dpdk-test-pipeline 00:02:41.314 [695/710] Linking target app/dpdk-test-sad 00:02:41.314 [696/710] Linking target app/dpdk-test-dma-perf 00:02:41.314 [697/710] Linking target app/dpdk-test-fib 00:02:41.314 [698/710] Linking target app/dpdk-test-regex 00:02:41.314 [699/710] Linking target app/dpdk-test-gpudev 00:02:41.314 [700/710] Linking target app/dpdk-test-mldev 00:02:41.314 [701/710] Linking target app/dpdk-graph 00:02:41.314 [702/710] Linking target app/dpdk-test-flow-perf 00:02:41.314 [703/710] Linking target app/dpdk-test-crypto-perf 00:02:41.314 [704/710] Linking target app/dpdk-test-security-perf 00:02:41.314 [705/710] Linking target app/dpdk-test-bbdev 00:02:41.314 [706/710] Linking target app/dpdk-test-compress-perf 00:02:41.314 [707/710] Linking target app/dpdk-test-eventdev 00:02:41.572 [708/710] Linking target app/dpdk-testpmd 00:02:43.474 [709/710] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.731 [710/710] Linking target lib/librte_pipeline.so.24.0 00:02:43.732 11:14:43 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s 00:02:43.732 11:14:43 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:43.732 11:14:43 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install 00:02:43.732 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:02:43.732 [0/1] Installing files. 
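(For orientation only: the traced commands above correspond to the usual DPDK meson/ninja build-and-install flow. A minimal sketch follows, assuming a plain DPDK source checkout; the build directory name, the -j48 job count, and the omitted meson options are illustrative and not taken from this job beyond what the log itself prints.)

# enter the DPDK source tree used by this job
cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk

# configure an out-of-tree build directory with meson (project options omitted here)
meson setup build-tmp

# build, then install libraries, headers, and the bundled example sources;
# the "Installing subdir .../examples ..." lines that follow come from this install step
ninja -C build-tmp -j48
ninja -C build-tmp -j48 install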
00:02:43.993 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:02:43.993 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.993 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.993 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.993 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.993 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.993 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.993 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.993 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.993 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.993 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.993 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.993 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.993 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.993 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.993 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.993 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.993 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.993 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 
00:02:43.993 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.993 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.993 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.993 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.993 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.993 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.993 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.993 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.993 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.993 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.993 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.993 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.993 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.994 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.994 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.994 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:43.994 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:43.994 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:43.994 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:43.994 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:43.994 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:43.994 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:43.994 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:43.994 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:43.994 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:43.994 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:43.994 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:43.994 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:43.994 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:43.994 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:43.994 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:43.994 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:43.994 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:43.994 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:43.994 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:43.994 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:43.994 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:02:43.994 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:02:43.994 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:02:43.994 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:02:43.994 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:43.994 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:43.994 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:43.994 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:43.994 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:43.994 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:43.994 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:43.994 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:43.994 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:43.994 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:43.994 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:43.994 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:43.994 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:43.994 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:43.994 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:43.994 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:43.994 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:43.994 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:43.994 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:43.994 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:43.994 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:43.994 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:43.994 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:43.994 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:43.994 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:43.994 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:43.994 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:43.994 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:43.994 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:43.994 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:43.994 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:43.994 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:43.994 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:43.994 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:43.994 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:43.994 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:43.994 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:43.994 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:43.994 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:43.994 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:43.994 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:43.994 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:43.994 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:43.994 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:43.995 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:43.995 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:43.995 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:43.995 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:43.995 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 
00:02:43.995 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:43.995 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:43.995 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:43.995 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:43.995 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:43.995 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:43.995 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:43.995 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:43.995 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:43.995 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:43.995 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:43.995 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:43.995 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:43.995 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:43.995 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:43.995 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:43.995 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:43.995 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:43.995 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:43.995 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:43.995 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:43.995 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:43.995 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:43.995 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:43.995 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:43.995 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:43.995 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:43.995 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:43.995 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:43.995 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:43.995 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:43.995 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:43.995 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:43.995 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:43.995 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:43.995 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:43.995 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:43.995 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:43.995 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:02:43.995 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:43.995 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:43.995 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:43.995 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:43.995 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:43.995 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:43.995 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:43.995 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:43.995 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:43.995 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:43.995 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:43.995 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:43.995 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:43.995 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:43.995 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:43.995 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:43.995 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:43.995 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:43.995 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:43.995 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:43.995 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:43.995 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:43.995 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:43.995 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:43.995 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:43.996 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:43.996 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:43.996 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:43.996 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:43.996 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:43.996 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:43.996 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:43.996 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:43.996 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:43.996 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:43.996 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:43.996 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:43.996 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:43.996 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:43.996 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:43.996 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:43.996 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:43.996 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:43.996 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:43.996 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:43.996 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:43.996 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:43.996 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:43.996 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:43.996 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:43.996 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.996 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.996 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.996 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.996 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.996 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.996 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.996 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.996 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.996 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.996 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.996 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.996 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.996 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.996 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.996 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.996 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.996 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.996 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.996 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.996 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.996 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.996 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.996 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.996 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.996 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.996 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.996 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.996 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.996 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.996 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.996 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.996 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.996 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.996 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.996 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.996 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.996 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.996 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.996 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.996 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.996 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.996 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.996 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.997 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.997 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.997 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.997 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.997 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.997 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.997 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.997 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.997 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.997 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.997 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:43.997 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:43.997 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:02:43.997 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:43.997 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:43.997 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:43.997 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:43.997 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:43.997 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:43.997 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:43.997 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:43.997 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:43.997 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.997 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.997 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.997 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.997 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.997 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.997 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.997 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.997 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.997 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.997 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.997 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.997 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.997 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.997 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.997 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.997 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.997 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.997 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.997 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.997 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.997 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.997 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.997 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.997 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.997 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.997 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.997 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:43.997 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:43.997 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:43.997 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:43.997 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:43.997 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:43.997 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:43.997 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:43.997 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:43.997 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:43.997 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:43.997 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:43.997 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:43.998 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:43.998 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:43.998 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:43.998 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:43.998 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:43.998 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:43.998 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:43.998 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:43.998 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:43.998 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:43.998 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:43.998 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:43.998 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:43.998 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:43.998 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:43.998 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:43.998 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:43.998 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:43.998 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.998 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.998 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.998 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.998 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.998 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.998 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.998 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.998 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.998 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.998 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.998 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.998 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.998 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.998 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.998 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.998 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.998 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.998 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.998 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.998 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.998 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.998 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.998 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.998 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.998 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.998 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.998 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.998 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.998 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.998 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.998 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.998 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.998 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.998 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.998 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.998 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.998 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.998 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.998 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.998 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.998 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.998 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:43.998 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:43.998 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:43.998 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:43.998 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:43.998 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:02:43.999 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:43.999 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:43.999 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:43.999 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:43.999 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:43.999 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:43.999 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:43.999 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:43.999 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:43.999 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:43.999 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:43.999 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:43.999 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:43.999 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:43.999 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.257 Installing lib/librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.257 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.257 Installing lib/librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.257 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.257 Installing lib/librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.257 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.257 Installing lib/librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.257 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.257 Installing lib/librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.257 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.257 Installing lib/librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.257 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.257 Installing lib/librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.257 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.257 Installing lib/librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.257 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.257 Installing lib/librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.257 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.257 Installing lib/librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.257 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.257 Installing lib/librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.257 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.257 Installing lib/librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.257 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.258 Installing lib/librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.258 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.258 Installing lib/librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.258 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.258 Installing lib/librte_hash.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.258 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.258 Installing lib/librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.258 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.258 Installing lib/librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.258 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.258 Installing lib/librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.258 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.258 Installing lib/librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.258 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.258 Installing lib/librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.258 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.258 Installing lib/librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.258 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.258 Installing lib/librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.258 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.258 Installing lib/librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.258 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.258 Installing lib/librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.258 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.258 Installing lib/librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.258 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.258 Installing lib/librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.258 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.258 Installing lib/librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.258 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.258 Installing lib/librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.258 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.258 Installing lib/librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.258 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.258 Installing lib/librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.258 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.258 Installing lib/librte_gso.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.258 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.258 Installing lib/librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.258 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.258 Installing lib/librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.258 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.258 Installing lib/librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.258 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.258 Installing lib/librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.258 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.258 Installing lib/librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.258 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.258 Installing lib/librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.258 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.258 Installing lib/librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.258 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.258 Installing lib/librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.258 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.258 Installing lib/librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.258 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.258 Installing lib/librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.258 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.258 Installing lib/librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.258 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.829 Installing lib/librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.829 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.829 Installing lib/librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.829 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.829 Installing lib/librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.829 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.829 Installing lib/librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.829 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.829 Installing lib/librte_vhost.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.829 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.829 Installing lib/librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.829 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.829 Installing lib/librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.829 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.829 Installing lib/librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.829 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.829 Installing lib/librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.829 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.829 Installing lib/librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.829 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.829 Installing lib/librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.829 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.829 Installing lib/librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.829 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.829 Installing lib/librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.829 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.829 Installing lib/librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.829 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.829 Installing drivers/librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:44.829 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.829 Installing drivers/librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:44.829 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.829 Installing drivers/librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:44.829 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.829 Installing drivers/librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:44.829 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:44.829 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:44.829 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:44.829 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:44.829 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:44.829 Installing 
app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:44.829 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:44.829 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:44.829 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:44.829 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:44.829 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:44.829 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:44.829 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:44.829 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:44.829 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:44.829 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:44.829 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:44.829 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:44.829 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:44.829 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:44.829 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.829 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.829 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.829 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.829 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:44.829 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:44.829 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:44.829 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:44.829 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:44.829 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:44.829 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:44.829 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:44.829 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:44.829 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:44.829 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:44.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:44.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.830 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.830 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.831 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.831 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:02:44.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.832 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.832 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.832 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.833 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.833 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:44.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:44.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:44.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:44.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:44.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:44.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:44.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:44.833 Installing symlink pointing to librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so.24 00:02:44.833 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so 00:02:44.833 Installing symlink pointing to librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.24 00:02:44.833 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:02:44.834 Installing symlink pointing to librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.24 00:02:44.834 Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:02:44.834 Installing symlink pointing to librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.24 00:02:44.834 Installing symlink pointing to librte_eal.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:02:44.834 Installing symlink pointing to librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.24 00:02:44.834 Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:02:44.834 Installing symlink pointing to librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.24 00:02:44.834 Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:02:44.834 Installing symlink pointing to librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.24 00:02:44.834 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:02:44.834 Installing symlink pointing to librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.24 00:02:44.834 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:02:44.834 Installing symlink pointing to librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.24 00:02:44.834 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:02:44.834 Installing symlink pointing to librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.24 00:02:44.834 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:02:44.834 Installing symlink pointing to librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.24 00:02:44.834 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:02:44.834 Installing symlink pointing to librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.24 00:02:44.834 Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:02:44.834 Installing symlink pointing to librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.24 00:02:44.834 Installing symlink pointing to librte_cmdline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:02:44.834 Installing symlink pointing to librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.24 00:02:44.834 Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:02:44.834 Installing symlink pointing to librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.24 00:02:44.834 Installing symlink pointing to librte_hash.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:02:44.834 Installing symlink pointing to librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.24 00:02:44.834 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:02:44.834 
Installing symlink pointing to librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.24 00:02:44.834 Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:02:44.834 Installing symlink pointing to librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.24 00:02:44.834 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:02:44.834 Installing symlink pointing to librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24 00:02:44.834 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:02:44.834 Installing symlink pointing to librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.24 00:02:44.834 Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:02:44.834 Installing symlink pointing to librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24 00:02:44.834 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:02:44.834 Installing symlink pointing to librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.24 00:02:44.834 Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:02:44.834 Installing symlink pointing to librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24 00:02:44.834 Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:02:44.834 Installing symlink pointing to librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.24 00:02:44.834 Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:02:44.834 Installing symlink pointing to librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.24 00:02:44.834 Installing symlink pointing to librte_dmadev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:02:44.834 Installing symlink pointing to librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.24 00:02:44.834 Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:02:44.834 Installing symlink pointing to librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.24 00:02:44.834 Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:02:44.834 Installing symlink pointing to librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24 00:02:44.834 Installing symlink pointing to librte_dispatcher.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:02:44.834 Installing symlink pointing to librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.24 00:02:44.834 Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:02:44.834 Installing symlink pointing to librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.24 00:02:44.834 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:02:44.834 Installing symlink pointing to librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.24 00:02:44.834 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:02:44.834 Installing symlink pointing to librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24 00:02:44.834 Installing symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:02:44.834 Installing symlink pointing to librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.24 00:02:44.834 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:02:44.834 Installing symlink pointing to librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.24 00:02:44.834 Installing symlink pointing to librte_latencystats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:02:44.834 Installing symlink pointing to librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.24 00:02:44.834 Installing symlink pointing to librte_lpm.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:02:44.834 Installing symlink pointing to librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.24 00:02:44.834 Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:02:44.834 Installing symlink pointing to librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.24 00:02:44.834 Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:02:44.834 Installing symlink pointing to librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.24 00:02:44.834 Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:02:44.834 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:02:44.834 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:02:44.834 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:02:44.834 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:02:44.834 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:02:44.834 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:02:44.834 './librte_mempool_ring.so' -> 
'dpdk/pmds-24.0/librte_mempool_ring.so' 00:02:44.834 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:02:44.834 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:02:44.834 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:02:44.834 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:02:44.834 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:02:44.834 Installing symlink pointing to librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.24 00:02:44.834 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:02:44.834 Installing symlink pointing to librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.24 00:02:44.834 Installing symlink pointing to librte_regexdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:02:44.835 Installing symlink pointing to librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so.24 00:02:44.835 Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so 00:02:44.835 Installing symlink pointing to librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.24 00:02:44.835 Installing symlink pointing to librte_rib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:02:44.835 Installing symlink pointing to librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.24 00:02:44.835 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:02:44.835 Installing symlink pointing to librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.24 00:02:44.835 Installing symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:02:44.835 Installing symlink pointing to librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.24 00:02:44.835 Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:02:44.835 Installing symlink pointing to librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.24 00:02:44.835 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:02:44.835 Installing symlink pointing to librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.24 00:02:44.835 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:02:44.835 Installing symlink pointing to librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.24 00:02:44.835 Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:02:44.835 Installing symlink pointing to librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so.24 00:02:44.835 Installing 
symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:02:44.835 Installing symlink pointing to librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.24 00:02:44.835 Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:02:44.835 Installing symlink pointing to librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.24 00:02:44.835 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:02:44.835 Installing symlink pointing to librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.24 00:02:44.835 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:02:44.835 Installing symlink pointing to librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.24 00:02:44.835 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:02:44.835 Installing symlink pointing to librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.24 00:02:44.835 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:02:44.835 Installing symlink pointing to librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.24 00:02:44.835 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:02:44.835 Installing symlink pointing to librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.24 00:02:44.835 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:02:44.835 Installing symlink pointing to librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:02:44.835 Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:02:44.835 Installing symlink pointing to librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:02:44.835 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:02:44.835 Installing symlink pointing to librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:02:44.835 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:02:44.835 Installing symlink pointing to librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 00:02:44.835 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:02:44.835 Running custom install script '/bin/sh 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0'
00:02:44.835 11:14:45 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat
00:02:44.835 11:14:45 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:44.835
00:02:44.835 real 1m32.729s
00:02:44.835 user 18m9.292s
00:02:44.835 sys 2m10.472s
00:02:44.835 11:14:45 build_native_dpdk -- common/autotest_common.sh@1128 -- $ xtrace_disable
00:02:44.835 11:14:45 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x
00:02:44.835 ************************************
00:02:44.835 END TEST build_native_dpdk
00:02:44.835 ************************************
00:02:45.093 11:14:45 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:45.093 11:14:45 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:45.093 11:14:45 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:45.093 11:14:45 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:45.093 11:14:45 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:02:45.093 11:14:45 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:02:45.093 11:14:45 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:02:45.093 11:14:45 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared
00:02:45.093 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs...
00:02:45.093 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:45.093 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:45.093 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:02:45.351 Using 'verbs' RDMA provider
00:02:55.890 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:03:05.870 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:03:05.870 Creating mk/config.mk...done.
00:03:05.870 Creating mk/cc.flags.mk...done.
00:03:05.870 Type 'make' to build.
00:03:05.870 11:15:05 -- spdk/autobuild.sh@70 -- $ run_test make make -j48
00:03:05.870 11:15:05 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:03:05.870 11:15:05 -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:03:05.870 11:15:05 -- common/autotest_common.sh@10 -- $ set +x
00:03:05.870 ************************************
00:03:05.870 START TEST make
00:03:05.870 ************************************
00:03:05.870 11:15:05 make -- common/autotest_common.sh@1127 -- $ make -j48
00:03:05.870 make[1]: Nothing to be done for 'all'.
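The configure step above links SPDK against the DPDK tree installed earlier in this run via --with-dpdk. As a rough, hand-reproducible sketch of that step (flags and paths are copied from the log above; on a local machine the workspace paths would be replaced by your own checkout):

    # Configure SPDK against the prebuilt DPDK from this workspace, then
    # build with the same parallelism the job used (-j48).
    DPDK_BUILD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./configure --enable-debug --enable-werror \
        --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator \
        --disable-unit-tests --enable-ubsan --enable-coverage \
        --with-ublk --with-vfio-user \
        --with-dpdk="$DPDK_BUILD" --with-shared
    make -j48
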
00:03:07.253 The Meson build system
00:03:07.253 Version: 1.5.0
00:03:07.253 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:03:07.253 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:03:07.253 Build type: native build
00:03:07.253 Project name: libvfio-user
00:03:07.253 Project version: 0.0.1
00:03:07.253 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:03:07.253 C linker for the host machine: gcc ld.bfd 2.40-14
00:03:07.253 Host machine cpu family: x86_64
00:03:07.253 Host machine cpu: x86_64
00:03:07.253 Run-time dependency threads found: YES
00:03:07.253 Library dl found: YES
00:03:07.253 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:03:07.253 Run-time dependency json-c found: YES 0.17
00:03:07.253 Run-time dependency cmocka found: YES 1.1.7
00:03:07.253 Program pytest-3 found: NO
00:03:07.253 Program flake8 found: NO
00:03:07.253 Program misspell-fixer found: NO
00:03:07.253 Program restructuredtext-lint found: NO
00:03:07.253 Program valgrind found: YES (/usr/bin/valgrind)
00:03:07.253 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:07.253 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:07.253 Compiler for C supports arguments -Wwrite-strings: YES
00:03:07.253 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:03:07.253 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:03:07.253 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:03:07.253 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:03:07.253 Build targets in project: 8 00:03:07.253 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:03:07.253 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:03:07.253 00:03:07.253 libvfio-user 0.0.1 00:03:07.253 00:03:07.253 User defined options 00:03:07.253 buildtype : debug 00:03:07.253 default_library: shared 00:03:07.253 libdir : /usr/local/lib 00:03:07.253 00:03:07.253 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:08.201 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:08.201 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:03:08.461 [2/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:03:08.461 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:03:08.461 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:03:08.461 [5/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:03:08.461 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:03:08.461 [7/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:03:08.461 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:03:08.461 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:03:08.461 [10/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:03:08.461 [11/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:03:08.461 [12/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:03:08.461 [13/37] Compiling C object samples/lspci.p/lspci.c.o 00:03:08.461 [14/37] Compiling C object samples/null.p/null.c.o 00:03:08.461 [15/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:03:08.461 [16/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:03:08.461 [17/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:03:08.461 [18/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:03:08.461 [19/37] Compiling C object test/unit_tests.p/mocks.c.o 00:03:08.461 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:03:08.461 [21/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:03:08.461 [22/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:03:08.461 [23/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:03:08.461 [24/37] Compiling C object samples/server.p/server.c.o 00:03:08.461 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:03:08.461 [26/37] Compiling C object samples/client.p/client.c.o 00:03:08.723 [27/37] Linking target samples/client 00:03:08.723 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:03:08.723 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:03:08.723 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:03:08.723 [31/37] Linking target test/unit_tests 00:03:08.989 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:03:08.989 [33/37] Linking target samples/server 00:03:08.989 [34/37] Linking target samples/null 00:03:08.989 [35/37] Linking target samples/lspci 00:03:08.989 [36/37] Linking target samples/gpio-pci-idio-16 00:03:08.989 [37/37] Linking target samples/shadow_ioeventfd_server 00:03:08.989 INFO: autodetecting backend as ninja 00:03:08.989 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:03:08.989 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:09.934 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:09.934 ninja: no work to do. 00:03:48.646 CC lib/log/log.o 00:03:48.646 CC lib/log/log_flags.o 00:03:48.646 CC lib/ut_mock/mock.o 00:03:48.646 CC lib/ut/ut.o 00:03:48.646 CC lib/log/log_deprecated.o 00:03:48.646 LIB libspdk_ut.a 00:03:48.646 LIB libspdk_ut_mock.a 00:03:48.646 LIB libspdk_log.a 00:03:48.646 SO libspdk_ut.so.2.0 00:03:48.646 SO libspdk_ut_mock.so.6.0 00:03:48.646 SO libspdk_log.so.7.1 00:03:48.646 SYMLINK libspdk_ut.so 00:03:48.646 SYMLINK libspdk_ut_mock.so 00:03:48.646 SYMLINK libspdk_log.so 00:03:48.646 CXX lib/trace_parser/trace.o 00:03:48.646 CC lib/dma/dma.o 00:03:48.646 CC lib/ioat/ioat.o 00:03:48.646 CC lib/util/base64.o 00:03:48.646 CC lib/util/bit_array.o 00:03:48.646 CC lib/util/cpuset.o 00:03:48.646 CC lib/util/crc16.o 00:03:48.646 CC lib/util/crc32.o 00:03:48.646 CC lib/util/crc32c.o 00:03:48.646 CC lib/util/crc32_ieee.o 00:03:48.646 CC lib/util/crc64.o 00:03:48.646 CC lib/util/fd.o 00:03:48.646 CC lib/util/dif.o 00:03:48.646 CC lib/util/fd_group.o 00:03:48.646 CC lib/util/file.o 00:03:48.646 CC lib/util/hexlify.o 00:03:48.646 CC lib/util/iov.o 00:03:48.646 CC lib/util/math.o 00:03:48.646 CC lib/util/net.o 00:03:48.646 CC lib/util/pipe.o 00:03:48.646 CC lib/util/strerror_tls.o 00:03:48.646 CC lib/util/string.o 00:03:48.646 CC lib/util/uuid.o 00:03:48.646 CC lib/util/xor.o 00:03:48.646 CC lib/util/zipf.o 00:03:48.646 CC lib/util/md5.o 00:03:48.646 CC lib/vfio_user/host/vfio_user_pci.o 00:03:48.646 CC lib/vfio_user/host/vfio_user.o 00:03:48.646 LIB libspdk_dma.a 00:03:48.646 SO libspdk_dma.so.5.0 00:03:48.646 SYMLINK libspdk_dma.so 00:03:48.646 LIB libspdk_vfio_user.a 00:03:48.646 LIB libspdk_ioat.a 00:03:48.646 SO libspdk_vfio_user.so.5.0 00:03:48.646 SO libspdk_ioat.so.7.0 00:03:48.646 SYMLINK libspdk_vfio_user.so 00:03:48.646 SYMLINK libspdk_ioat.so 00:03:48.646 LIB libspdk_util.a 00:03:48.646 SO libspdk_util.so.10.0 00:03:48.646 SYMLINK libspdk_util.so 00:03:48.646 CC lib/rdma_utils/rdma_utils.o 00:03:48.646 CC lib/env_dpdk/env.o 00:03:48.646 CC lib/json/json_parse.o 00:03:48.646 CC lib/idxd/idxd.o 00:03:48.646 CC lib/rdma_provider/common.o 00:03:48.646 CC lib/vmd/vmd.o 00:03:48.646 CC lib/json/json_util.o 00:03:48.646 CC lib/env_dpdk/memory.o 00:03:48.646 CC lib/idxd/idxd_user.o 00:03:48.646 CC lib/env_dpdk/pci.o 00:03:48.646 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:48.646 CC lib/json/json_write.o 00:03:48.646 CC lib/vmd/led.o 00:03:48.646 CC lib/idxd/idxd_kernel.o 00:03:48.646 CC lib/env_dpdk/init.o 00:03:48.646 CC lib/env_dpdk/threads.o 00:03:48.646 CC lib/env_dpdk/pci_ioat.o 00:03:48.646 CC lib/conf/conf.o 00:03:48.646 CC lib/env_dpdk/pci_virtio.o 00:03:48.646 CC lib/env_dpdk/pci_vmd.o 00:03:48.646 CC lib/env_dpdk/pci_idxd.o 00:03:48.646 CC lib/env_dpdk/pci_event.o 00:03:48.646 CC lib/env_dpdk/pci_dpdk.o 00:03:48.646 CC lib/env_dpdk/sigbus_handler.o 00:03:48.646 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:48.646 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:48.646 LIB libspdk_trace_parser.a 00:03:48.646 SO libspdk_trace_parser.so.6.0 00:03:48.646 SYMLINK libspdk_trace_parser.so 00:03:48.646 LIB libspdk_rdma_provider.a 00:03:48.646 SO libspdk_rdma_provider.so.6.0 00:03:48.904 LIB libspdk_conf.a 00:03:48.904 SO libspdk_conf.so.6.0 00:03:48.904 
SYMLINK libspdk_rdma_provider.so 00:03:48.904 LIB libspdk_rdma_utils.a 00:03:48.904 LIB libspdk_json.a 00:03:48.904 SYMLINK libspdk_conf.so 00:03:48.904 SO libspdk_rdma_utils.so.1.0 00:03:48.904 SO libspdk_json.so.6.0 00:03:48.904 SYMLINK libspdk_rdma_utils.so 00:03:48.904 SYMLINK libspdk_json.so 00:03:49.164 CC lib/jsonrpc/jsonrpc_server.o 00:03:49.164 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:49.164 CC lib/jsonrpc/jsonrpc_client.o 00:03:49.164 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:49.164 LIB libspdk_idxd.a 00:03:49.164 LIB libspdk_vmd.a 00:03:49.164 SO libspdk_idxd.so.12.1 00:03:49.164 SO libspdk_vmd.so.6.0 00:03:49.435 SYMLINK libspdk_vmd.so 00:03:49.435 SYMLINK libspdk_idxd.so 00:03:49.435 LIB libspdk_jsonrpc.a 00:03:49.435 SO libspdk_jsonrpc.so.6.0 00:03:49.435 SYMLINK libspdk_jsonrpc.so 00:03:49.717 CC lib/rpc/rpc.o 00:03:49.975 LIB libspdk_rpc.a 00:03:49.975 SO libspdk_rpc.so.6.0 00:03:49.975 SYMLINK libspdk_rpc.so 00:03:49.975 CC lib/trace/trace.o 00:03:49.975 CC lib/notify/notify.o 00:03:49.975 CC lib/trace/trace_flags.o 00:03:49.975 CC lib/notify/notify_rpc.o 00:03:49.975 CC lib/keyring/keyring.o 00:03:49.975 CC lib/trace/trace_rpc.o 00:03:49.975 CC lib/keyring/keyring_rpc.o 00:03:50.232 LIB libspdk_notify.a 00:03:50.232 SO libspdk_notify.so.6.0 00:03:50.232 LIB libspdk_keyring.a 00:03:50.232 SYMLINK libspdk_notify.so 00:03:50.232 LIB libspdk_trace.a 00:03:50.232 SO libspdk_keyring.so.2.0 00:03:50.491 SO libspdk_trace.so.11.0 00:03:50.491 SYMLINK libspdk_keyring.so 00:03:50.491 SYMLINK libspdk_trace.so 00:03:50.491 CC lib/sock/sock.o 00:03:50.491 CC lib/thread/thread.o 00:03:50.491 CC lib/sock/sock_rpc.o 00:03:50.491 CC lib/thread/iobuf.o 00:03:50.491 LIB libspdk_env_dpdk.a 00:03:50.749 SO libspdk_env_dpdk.so.15.1 00:03:50.749 SYMLINK libspdk_env_dpdk.so 00:03:51.007 LIB libspdk_sock.a 00:03:51.007 SO libspdk_sock.so.10.0 00:03:51.007 SYMLINK libspdk_sock.so 00:03:51.266 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:51.266 CC lib/nvme/nvme_ctrlr.o 00:03:51.266 CC lib/nvme/nvme_fabric.o 00:03:51.266 CC lib/nvme/nvme_ns_cmd.o 00:03:51.266 CC lib/nvme/nvme_ns.o 00:03:51.266 CC lib/nvme/nvme_pcie_common.o 00:03:51.266 CC lib/nvme/nvme_pcie.o 00:03:51.266 CC lib/nvme/nvme_qpair.o 00:03:51.266 CC lib/nvme/nvme.o 00:03:51.266 CC lib/nvme/nvme_quirks.o 00:03:51.266 CC lib/nvme/nvme_transport.o 00:03:51.266 CC lib/nvme/nvme_discovery.o 00:03:51.266 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:51.266 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:51.266 CC lib/nvme/nvme_tcp.o 00:03:51.266 CC lib/nvme/nvme_opal.o 00:03:51.266 CC lib/nvme/nvme_io_msg.o 00:03:51.266 CC lib/nvme/nvme_poll_group.o 00:03:51.266 CC lib/nvme/nvme_zns.o 00:03:51.266 CC lib/nvme/nvme_stubs.o 00:03:51.266 CC lib/nvme/nvme_auth.o 00:03:51.266 CC lib/nvme/nvme_cuse.o 00:03:51.266 CC lib/nvme/nvme_vfio_user.o 00:03:51.266 CC lib/nvme/nvme_rdma.o 00:03:52.201 LIB libspdk_thread.a 00:03:52.201 SO libspdk_thread.so.11.0 00:03:52.201 SYMLINK libspdk_thread.so 00:03:52.458 CC lib/init/json_config.o 00:03:52.458 CC lib/accel/accel.o 00:03:52.458 CC lib/fsdev/fsdev.o 00:03:52.459 CC lib/vfu_tgt/tgt_endpoint.o 00:03:52.459 CC lib/accel/accel_rpc.o 00:03:52.459 CC lib/init/subsystem.o 00:03:52.459 CC lib/fsdev/fsdev_io.o 00:03:52.459 CC lib/accel/accel_sw.o 00:03:52.459 CC lib/vfu_tgt/tgt_rpc.o 00:03:52.459 CC lib/fsdev/fsdev_rpc.o 00:03:52.459 CC lib/init/subsystem_rpc.o 00:03:52.459 CC lib/init/rpc.o 00:03:52.459 CC lib/virtio/virtio.o 00:03:52.459 CC lib/blob/blobstore.o 00:03:52.459 CC lib/virtio/virtio_vhost_user.o 00:03:52.459 CC 
lib/blob/request.o 00:03:52.459 CC lib/virtio/virtio_vfio_user.o 00:03:52.459 CC lib/blob/zeroes.o 00:03:52.459 CC lib/virtio/virtio_pci.o 00:03:52.459 CC lib/blob/blob_bs_dev.o 00:03:52.716 LIB libspdk_init.a 00:03:52.716 SO libspdk_init.so.6.0 00:03:52.716 SYMLINK libspdk_init.so 00:03:52.716 LIB libspdk_virtio.a 00:03:52.716 LIB libspdk_vfu_tgt.a 00:03:52.716 SO libspdk_vfu_tgt.so.3.0 00:03:52.716 SO libspdk_virtio.so.7.0 00:03:52.974 SYMLINK libspdk_vfu_tgt.so 00:03:52.974 SYMLINK libspdk_virtio.so 00:03:52.974 CC lib/event/app.o 00:03:52.974 CC lib/event/reactor.o 00:03:52.974 CC lib/event/log_rpc.o 00:03:52.974 CC lib/event/app_rpc.o 00:03:52.975 CC lib/event/scheduler_static.o 00:03:53.233 LIB libspdk_fsdev.a 00:03:53.233 SO libspdk_fsdev.so.2.0 00:03:53.233 SYMLINK libspdk_fsdev.so 00:03:53.491 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:53.491 LIB libspdk_event.a 00:03:53.491 SO libspdk_event.so.14.0 00:03:53.491 SYMLINK libspdk_event.so 00:03:53.749 LIB libspdk_accel.a 00:03:53.749 SO libspdk_accel.so.16.0 00:03:53.749 SYMLINK libspdk_accel.so 00:03:53.749 LIB libspdk_nvme.a 00:03:54.007 CC lib/bdev/bdev.o 00:03:54.007 CC lib/bdev/bdev_rpc.o 00:03:54.007 CC lib/bdev/bdev_zone.o 00:03:54.007 CC lib/bdev/part.o 00:03:54.007 CC lib/bdev/scsi_nvme.o 00:03:54.007 SO libspdk_nvme.so.14.1 00:03:54.007 LIB libspdk_fuse_dispatcher.a 00:03:54.007 SO libspdk_fuse_dispatcher.so.1.0 00:03:54.007 SYMLINK libspdk_fuse_dispatcher.so 00:03:54.265 SYMLINK libspdk_nvme.so 00:03:55.641 LIB libspdk_blob.a 00:03:55.641 SO libspdk_blob.so.11.0 00:03:55.641 SYMLINK libspdk_blob.so 00:03:55.899 CC lib/lvol/lvol.o 00:03:55.899 CC lib/blobfs/blobfs.o 00:03:55.899 CC lib/blobfs/tree.o 00:03:56.464 LIB libspdk_bdev.a 00:03:56.725 SO libspdk_bdev.so.17.0 00:03:56.725 SYMLINK libspdk_bdev.so 00:03:56.725 LIB libspdk_blobfs.a 00:03:56.725 SO libspdk_blobfs.so.10.0 00:03:56.725 SYMLINK libspdk_blobfs.so 00:03:56.725 CC lib/nbd/nbd.o 00:03:56.725 CC lib/nbd/nbd_rpc.o 00:03:56.725 CC lib/nvmf/ctrlr.o 00:03:56.725 CC lib/ublk/ublk.o 00:03:56.725 CC lib/scsi/dev.o 00:03:56.725 CC lib/scsi/lun.o 00:03:56.725 CC lib/ublk/ublk_rpc.o 00:03:56.725 CC lib/ftl/ftl_core.o 00:03:56.725 CC lib/nvmf/ctrlr_discovery.o 00:03:56.725 CC lib/scsi/port.o 00:03:56.725 CC lib/ftl/ftl_init.o 00:03:56.725 CC lib/nvmf/ctrlr_bdev.o 00:03:56.725 CC lib/scsi/scsi.o 00:03:56.725 CC lib/ftl/ftl_layout.o 00:03:56.725 CC lib/nvmf/subsystem.o 00:03:56.725 CC lib/ftl/ftl_debug.o 00:03:56.725 CC lib/scsi/scsi_bdev.o 00:03:56.725 CC lib/nvmf/nvmf.o 00:03:56.725 CC lib/ftl/ftl_io.o 00:03:56.725 CC lib/nvmf/nvmf_rpc.o 00:03:56.725 CC lib/ftl/ftl_sb.o 00:03:56.725 CC lib/scsi/scsi_pr.o 00:03:56.725 CC lib/scsi/scsi_rpc.o 00:03:56.725 CC lib/scsi/task.o 00:03:56.725 CC lib/nvmf/transport.o 00:03:56.725 CC lib/ftl/ftl_l2p.o 00:03:56.725 CC lib/ftl/ftl_l2p_flat.o 00:03:56.725 CC lib/nvmf/tcp.o 00:03:56.725 CC lib/ftl/ftl_nv_cache.o 00:03:56.725 CC lib/nvmf/stubs.o 00:03:56.725 CC lib/ftl/ftl_band.o 00:03:56.725 CC lib/nvmf/mdns_server.o 00:03:56.725 CC lib/nvmf/vfio_user.o 00:03:56.725 CC lib/ftl/ftl_band_ops.o 00:03:56.725 CC lib/ftl/ftl_writer.o 00:03:56.725 CC lib/nvmf/rdma.o 00:03:56.725 CC lib/ftl/ftl_rq.o 00:03:56.725 CC lib/nvmf/auth.o 00:03:56.725 CC lib/ftl/ftl_reloc.o 00:03:56.725 CC lib/ftl/ftl_l2p_cache.o 00:03:56.726 CC lib/ftl/ftl_p2l.o 00:03:56.992 CC lib/ftl/ftl_p2l_log.o 00:03:56.992 CC lib/ftl/mngt/ftl_mngt.o 00:03:56.992 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:56.992 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:56.992 CC 
lib/ftl/mngt/ftl_mngt_startup.o 00:03:56.992 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:56.992 LIB libspdk_lvol.a 00:03:56.992 SO libspdk_lvol.so.10.0 00:03:56.992 SYMLINK libspdk_lvol.so 00:03:56.992 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:57.251 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:57.251 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:57.251 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:57.251 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:57.251 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:57.251 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:57.251 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:57.251 CC lib/ftl/utils/ftl_conf.o 00:03:57.251 CC lib/ftl/utils/ftl_md.o 00:03:57.251 CC lib/ftl/utils/ftl_mempool.o 00:03:57.251 CC lib/ftl/utils/ftl_bitmap.o 00:03:57.251 CC lib/ftl/utils/ftl_property.o 00:03:57.251 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:57.512 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:57.512 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:57.512 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:57.512 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:57.512 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:57.512 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:57.512 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:57.512 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:57.512 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:57.512 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:57.512 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:57.512 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:57.513 CC lib/ftl/base/ftl_base_dev.o 00:03:57.513 CC lib/ftl/base/ftl_base_bdev.o 00:03:57.773 CC lib/ftl/ftl_trace.o 00:03:57.773 LIB libspdk_nbd.a 00:03:57.773 SO libspdk_nbd.so.7.0 00:03:57.773 LIB libspdk_scsi.a 00:03:57.773 SYMLINK libspdk_nbd.so 00:03:57.773 SO libspdk_scsi.so.9.0 00:03:58.031 SYMLINK libspdk_scsi.so 00:03:58.031 LIB libspdk_ublk.a 00:03:58.031 SO libspdk_ublk.so.3.0 00:03:58.031 SYMLINK libspdk_ublk.so 00:03:58.031 CC lib/vhost/vhost.o 00:03:58.031 CC lib/iscsi/conn.o 00:03:58.031 CC lib/vhost/vhost_rpc.o 00:03:58.031 CC lib/iscsi/init_grp.o 00:03:58.031 CC lib/vhost/vhost_scsi.o 00:03:58.031 CC lib/iscsi/iscsi.o 00:03:58.031 CC lib/iscsi/param.o 00:03:58.031 CC lib/vhost/vhost_blk.o 00:03:58.031 CC lib/iscsi/portal_grp.o 00:03:58.031 CC lib/vhost/rte_vhost_user.o 00:03:58.031 CC lib/iscsi/tgt_node.o 00:03:58.031 CC lib/iscsi/iscsi_subsystem.o 00:03:58.031 CC lib/iscsi/iscsi_rpc.o 00:03:58.031 CC lib/iscsi/task.o 00:03:58.290 LIB libspdk_ftl.a 00:03:58.548 SO libspdk_ftl.so.9.0 00:03:58.806 SYMLINK libspdk_ftl.so 00:03:59.371 LIB libspdk_vhost.a 00:03:59.371 SO libspdk_vhost.so.8.0 00:03:59.371 SYMLINK libspdk_vhost.so 00:03:59.629 LIB libspdk_nvmf.a 00:03:59.629 LIB libspdk_iscsi.a 00:03:59.629 SO libspdk_iscsi.so.8.0 00:03:59.629 SO libspdk_nvmf.so.20.0 00:03:59.629 SYMLINK libspdk_iscsi.so 00:03:59.887 SYMLINK libspdk_nvmf.so 00:04:00.145 CC module/vfu_device/vfu_virtio.o 00:04:00.145 CC module/vfu_device/vfu_virtio_blk.o 00:04:00.145 CC module/env_dpdk/env_dpdk_rpc.o 00:04:00.145 CC module/vfu_device/vfu_virtio_scsi.o 00:04:00.145 CC module/vfu_device/vfu_virtio_rpc.o 00:04:00.145 CC module/vfu_device/vfu_virtio_fs.o 00:04:00.145 CC module/keyring/linux/keyring.o 00:04:00.145 CC module/keyring/linux/keyring_rpc.o 00:04:00.145 CC module/accel/ioat/accel_ioat.o 00:04:00.145 CC module/blob/bdev/blob_bdev.o 00:04:00.145 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:00.145 CC module/accel/ioat/accel_ioat_rpc.o 00:04:00.145 CC module/accel/error/accel_error.o 00:04:00.145 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:00.145 CC module/accel/error/accel_error_rpc.o 
00:04:00.145 CC module/scheduler/gscheduler/gscheduler.o 00:04:00.145 CC module/accel/dsa/accel_dsa.o 00:04:00.145 CC module/accel/dsa/accel_dsa_rpc.o 00:04:00.145 CC module/accel/iaa/accel_iaa.o 00:04:00.145 CC module/accel/iaa/accel_iaa_rpc.o 00:04:00.145 CC module/fsdev/aio/fsdev_aio.o 00:04:00.145 CC module/sock/posix/posix.o 00:04:00.145 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:00.145 CC module/fsdev/aio/linux_aio_mgr.o 00:04:00.145 CC module/keyring/file/keyring_rpc.o 00:04:00.145 CC module/keyring/file/keyring.o 00:04:00.145 LIB libspdk_env_dpdk_rpc.a 00:04:00.145 SO libspdk_env_dpdk_rpc.so.6.0 00:04:00.403 SYMLINK libspdk_env_dpdk_rpc.so 00:04:00.403 LIB libspdk_keyring_linux.a 00:04:00.403 LIB libspdk_accel_error.a 00:04:00.403 SO libspdk_keyring_linux.so.1.0 00:04:00.403 LIB libspdk_accel_ioat.a 00:04:00.403 LIB libspdk_scheduler_dynamic.a 00:04:00.403 SO libspdk_accel_error.so.2.0 00:04:00.403 LIB libspdk_accel_iaa.a 00:04:00.403 LIB libspdk_scheduler_gscheduler.a 00:04:00.403 SO libspdk_scheduler_dynamic.so.4.0 00:04:00.403 LIB libspdk_scheduler_dpdk_governor.a 00:04:00.403 SO libspdk_accel_ioat.so.6.0 00:04:00.403 SYMLINK libspdk_keyring_linux.so 00:04:00.403 LIB libspdk_keyring_file.a 00:04:00.403 SO libspdk_accel_iaa.so.3.0 00:04:00.403 SO libspdk_scheduler_gscheduler.so.4.0 00:04:00.403 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:00.403 SYMLINK libspdk_accel_error.so 00:04:00.403 SO libspdk_keyring_file.so.2.0 00:04:00.403 SYMLINK libspdk_scheduler_dynamic.so 00:04:00.403 SYMLINK libspdk_accel_ioat.so 00:04:00.403 LIB libspdk_blob_bdev.a 00:04:00.403 SYMLINK libspdk_scheduler_gscheduler.so 00:04:00.403 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:00.403 SYMLINK libspdk_accel_iaa.so 00:04:00.403 SO libspdk_blob_bdev.so.11.0 00:04:00.403 LIB libspdk_accel_dsa.a 00:04:00.403 SYMLINK libspdk_keyring_file.so 00:04:00.661 SO libspdk_accel_dsa.so.5.0 00:04:00.661 SYMLINK libspdk_blob_bdev.so 00:04:00.661 SYMLINK libspdk_accel_dsa.so 00:04:00.661 LIB libspdk_vfu_device.a 00:04:00.920 CC module/bdev/nvme/bdev_nvme.o 00:04:00.920 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:00.920 CC module/bdev/error/vbdev_error_rpc.o 00:04:00.920 CC module/bdev/nvme/nvme_rpc.o 00:04:00.920 CC module/bdev/error/vbdev_error.o 00:04:00.920 CC module/bdev/lvol/vbdev_lvol.o 00:04:00.920 CC module/bdev/malloc/bdev_malloc.o 00:04:00.920 CC module/bdev/null/bdev_null.o 00:04:00.920 CC module/blobfs/bdev/blobfs_bdev.o 00:04:00.920 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:00.920 CC module/bdev/null/bdev_null_rpc.o 00:04:00.920 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:00.920 CC module/bdev/nvme/vbdev_opal.o 00:04:00.920 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:00.920 CC module/bdev/nvme/bdev_mdns_client.o 00:04:00.920 CC module/bdev/gpt/gpt.o 00:04:00.920 CC module/bdev/passthru/vbdev_passthru.o 00:04:00.920 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:00.920 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:00.920 CC module/bdev/delay/vbdev_delay.o 00:04:00.920 CC module/bdev/gpt/vbdev_gpt.o 00:04:00.920 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:00.920 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:00.920 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:00.920 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:00.920 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:00.920 CC module/bdev/aio/bdev_aio.o 00:04:00.920 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:00.920 CC module/bdev/raid/bdev_raid.o 00:04:00.920 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:00.920 CC 
module/bdev/aio/bdev_aio_rpc.o 00:04:00.920 CC module/bdev/raid/bdev_raid_rpc.o 00:04:00.920 CC module/bdev/ftl/bdev_ftl.o 00:04:00.920 CC module/bdev/split/vbdev_split.o 00:04:00.920 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:00.920 CC module/bdev/raid/bdev_raid_sb.o 00:04:00.920 CC module/bdev/split/vbdev_split_rpc.o 00:04:00.920 CC module/bdev/raid/raid0.o 00:04:00.920 CC module/bdev/iscsi/bdev_iscsi.o 00:04:00.920 CC module/bdev/raid/raid1.o 00:04:00.920 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:00.920 CC module/bdev/raid/concat.o 00:04:00.920 SO libspdk_vfu_device.so.3.0 00:04:00.920 SYMLINK libspdk_vfu_device.so 00:04:00.920 LIB libspdk_fsdev_aio.a 00:04:01.178 SO libspdk_fsdev_aio.so.1.0 00:04:01.178 LIB libspdk_sock_posix.a 00:04:01.178 SYMLINK libspdk_fsdev_aio.so 00:04:01.178 SO libspdk_sock_posix.so.6.0 00:04:01.178 LIB libspdk_blobfs_bdev.a 00:04:01.178 SO libspdk_blobfs_bdev.so.6.0 00:04:01.178 LIB libspdk_bdev_split.a 00:04:01.178 SYMLINK libspdk_sock_posix.so 00:04:01.178 SO libspdk_bdev_split.so.6.0 00:04:01.436 SYMLINK libspdk_blobfs_bdev.so 00:04:01.436 LIB libspdk_bdev_null.a 00:04:01.436 LIB libspdk_bdev_error.a 00:04:01.436 SYMLINK libspdk_bdev_split.so 00:04:01.436 SO libspdk_bdev_null.so.6.0 00:04:01.436 SO libspdk_bdev_error.so.6.0 00:04:01.436 LIB libspdk_bdev_gpt.a 00:04:01.436 LIB libspdk_bdev_ftl.a 00:04:01.436 SO libspdk_bdev_gpt.so.6.0 00:04:01.436 SYMLINK libspdk_bdev_null.so 00:04:01.436 SO libspdk_bdev_ftl.so.6.0 00:04:01.436 SYMLINK libspdk_bdev_error.so 00:04:01.436 LIB libspdk_bdev_passthru.a 00:04:01.436 LIB libspdk_bdev_zone_block.a 00:04:01.436 SYMLINK libspdk_bdev_gpt.so 00:04:01.436 SO libspdk_bdev_passthru.so.6.0 00:04:01.436 LIB libspdk_bdev_malloc.a 00:04:01.436 SO libspdk_bdev_zone_block.so.6.0 00:04:01.436 LIB libspdk_bdev_aio.a 00:04:01.436 SYMLINK libspdk_bdev_ftl.so 00:04:01.436 SO libspdk_bdev_malloc.so.6.0 00:04:01.436 SO libspdk_bdev_aio.so.6.0 00:04:01.436 LIB libspdk_bdev_iscsi.a 00:04:01.436 SYMLINK libspdk_bdev_passthru.so 00:04:01.436 LIB libspdk_bdev_delay.a 00:04:01.436 SYMLINK libspdk_bdev_zone_block.so 00:04:01.436 SO libspdk_bdev_iscsi.so.6.0 00:04:01.436 SO libspdk_bdev_delay.so.6.0 00:04:01.436 SYMLINK libspdk_bdev_malloc.so 00:04:01.436 SYMLINK libspdk_bdev_aio.so 00:04:01.694 SYMLINK libspdk_bdev_iscsi.so 00:04:01.694 SYMLINK libspdk_bdev_delay.so 00:04:01.694 LIB libspdk_bdev_lvol.a 00:04:01.694 LIB libspdk_bdev_virtio.a 00:04:01.694 SO libspdk_bdev_lvol.so.6.0 00:04:01.694 SO libspdk_bdev_virtio.so.6.0 00:04:01.694 SYMLINK libspdk_bdev_lvol.so 00:04:01.694 SYMLINK libspdk_bdev_virtio.so 00:04:02.260 LIB libspdk_bdev_raid.a 00:04:02.260 SO libspdk_bdev_raid.so.6.0 00:04:02.260 SYMLINK libspdk_bdev_raid.so 00:04:03.633 LIB libspdk_bdev_nvme.a 00:04:03.633 SO libspdk_bdev_nvme.so.7.1 00:04:03.633 SYMLINK libspdk_bdev_nvme.so 00:04:03.891 CC module/event/subsystems/keyring/keyring.o 00:04:03.891 CC module/event/subsystems/iobuf/iobuf.o 00:04:03.891 CC module/event/subsystems/vmd/vmd.o 00:04:03.891 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:03.891 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:03.891 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:04:03.891 CC module/event/subsystems/sock/sock.o 00:04:03.891 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:03.891 CC module/event/subsystems/fsdev/fsdev.o 00:04:03.891 CC module/event/subsystems/scheduler/scheduler.o 00:04:04.150 LIB libspdk_event_keyring.a 00:04:04.150 LIB libspdk_event_vhost_blk.a 00:04:04.150 LIB libspdk_event_vfu_tgt.a 00:04:04.150 LIB 
libspdk_event_fsdev.a 00:04:04.150 LIB libspdk_event_scheduler.a 00:04:04.150 LIB libspdk_event_vmd.a 00:04:04.150 LIB libspdk_event_sock.a 00:04:04.150 SO libspdk_event_keyring.so.1.0 00:04:04.150 SO libspdk_event_vhost_blk.so.3.0 00:04:04.150 SO libspdk_event_vfu_tgt.so.3.0 00:04:04.150 LIB libspdk_event_iobuf.a 00:04:04.150 SO libspdk_event_scheduler.so.4.0 00:04:04.150 SO libspdk_event_fsdev.so.1.0 00:04:04.150 SO libspdk_event_vmd.so.6.0 00:04:04.150 SO libspdk_event_sock.so.5.0 00:04:04.150 SO libspdk_event_iobuf.so.3.0 00:04:04.150 SYMLINK libspdk_event_keyring.so 00:04:04.150 SYMLINK libspdk_event_vhost_blk.so 00:04:04.150 SYMLINK libspdk_event_vfu_tgt.so 00:04:04.150 SYMLINK libspdk_event_fsdev.so 00:04:04.150 SYMLINK libspdk_event_scheduler.so 00:04:04.150 SYMLINK libspdk_event_sock.so 00:04:04.150 SYMLINK libspdk_event_vmd.so 00:04:04.150 SYMLINK libspdk_event_iobuf.so 00:04:04.408 CC module/event/subsystems/accel/accel.o 00:04:04.408 LIB libspdk_event_accel.a 00:04:04.666 SO libspdk_event_accel.so.6.0 00:04:04.666 SYMLINK libspdk_event_accel.so 00:04:04.666 CC module/event/subsystems/bdev/bdev.o 00:04:04.924 LIB libspdk_event_bdev.a 00:04:04.924 SO libspdk_event_bdev.so.6.0 00:04:04.924 SYMLINK libspdk_event_bdev.so 00:04:05.182 CC module/event/subsystems/scsi/scsi.o 00:04:05.182 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:05.182 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:05.182 CC module/event/subsystems/ublk/ublk.o 00:04:05.182 CC module/event/subsystems/nbd/nbd.o 00:04:05.441 LIB libspdk_event_nbd.a 00:04:05.441 LIB libspdk_event_ublk.a 00:04:05.441 LIB libspdk_event_scsi.a 00:04:05.441 SO libspdk_event_ublk.so.3.0 00:04:05.441 SO libspdk_event_nbd.so.6.0 00:04:05.441 SO libspdk_event_scsi.so.6.0 00:04:05.441 SYMLINK libspdk_event_nbd.so 00:04:05.441 SYMLINK libspdk_event_ublk.so 00:04:05.441 SYMLINK libspdk_event_scsi.so 00:04:05.441 LIB libspdk_event_nvmf.a 00:04:05.441 SO libspdk_event_nvmf.so.6.0 00:04:05.441 SYMLINK libspdk_event_nvmf.so 00:04:05.700 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:05.700 CC module/event/subsystems/iscsi/iscsi.o 00:04:05.700 LIB libspdk_event_vhost_scsi.a 00:04:05.700 SO libspdk_event_vhost_scsi.so.3.0 00:04:05.700 LIB libspdk_event_iscsi.a 00:04:05.700 SO libspdk_event_iscsi.so.6.0 00:04:05.700 SYMLINK libspdk_event_vhost_scsi.so 00:04:05.958 SYMLINK libspdk_event_iscsi.so 00:04:05.958 SO libspdk.so.6.0 00:04:05.958 SYMLINK libspdk.so 00:04:06.221 CC test/rpc_client/rpc_client_test.o 00:04:06.221 TEST_HEADER include/spdk/accel.h 00:04:06.221 TEST_HEADER include/spdk/accel_module.h 00:04:06.221 TEST_HEADER include/spdk/assert.h 00:04:06.221 CC app/trace_record/trace_record.o 00:04:06.221 TEST_HEADER include/spdk/barrier.h 00:04:06.221 TEST_HEADER include/spdk/base64.h 00:04:06.221 CXX app/trace/trace.o 00:04:06.221 TEST_HEADER include/spdk/bdev.h 00:04:06.221 TEST_HEADER include/spdk/bdev_module.h 00:04:06.221 CC app/spdk_nvme_identify/identify.o 00:04:06.221 TEST_HEADER include/spdk/bdev_zone.h 00:04:06.221 CC app/spdk_lspci/spdk_lspci.o 00:04:06.221 CC app/spdk_top/spdk_top.o 00:04:06.221 TEST_HEADER include/spdk/bit_array.h 00:04:06.221 TEST_HEADER include/spdk/bit_pool.h 00:04:06.221 CC app/spdk_nvme_discover/discovery_aer.o 00:04:06.221 TEST_HEADER include/spdk/blob_bdev.h 00:04:06.221 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:06.221 TEST_HEADER include/spdk/blobfs.h 00:04:06.221 TEST_HEADER include/spdk/blob.h 00:04:06.221 TEST_HEADER include/spdk/conf.h 00:04:06.221 CC app/spdk_nvme_perf/perf.o 00:04:06.221 
TEST_HEADER include/spdk/config.h 00:04:06.221 TEST_HEADER include/spdk/cpuset.h 00:04:06.221 TEST_HEADER include/spdk/crc16.h 00:04:06.221 TEST_HEADER include/spdk/crc32.h 00:04:06.221 TEST_HEADER include/spdk/crc64.h 00:04:06.221 TEST_HEADER include/spdk/dma.h 00:04:06.221 TEST_HEADER include/spdk/dif.h 00:04:06.221 TEST_HEADER include/spdk/endian.h 00:04:06.221 TEST_HEADER include/spdk/env_dpdk.h 00:04:06.221 TEST_HEADER include/spdk/env.h 00:04:06.221 TEST_HEADER include/spdk/event.h 00:04:06.221 TEST_HEADER include/spdk/fd_group.h 00:04:06.221 TEST_HEADER include/spdk/fd.h 00:04:06.221 TEST_HEADER include/spdk/file.h 00:04:06.221 TEST_HEADER include/spdk/fsdev.h 00:04:06.221 TEST_HEADER include/spdk/fsdev_module.h 00:04:06.221 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:06.221 TEST_HEADER include/spdk/ftl.h 00:04:06.221 TEST_HEADER include/spdk/gpt_spec.h 00:04:06.221 TEST_HEADER include/spdk/hexlify.h 00:04:06.221 TEST_HEADER include/spdk/histogram_data.h 00:04:06.221 TEST_HEADER include/spdk/idxd.h 00:04:06.221 TEST_HEADER include/spdk/idxd_spec.h 00:04:06.221 TEST_HEADER include/spdk/init.h 00:04:06.221 TEST_HEADER include/spdk/ioat.h 00:04:06.221 TEST_HEADER include/spdk/ioat_spec.h 00:04:06.221 TEST_HEADER include/spdk/json.h 00:04:06.221 TEST_HEADER include/spdk/iscsi_spec.h 00:04:06.221 TEST_HEADER include/spdk/jsonrpc.h 00:04:06.221 TEST_HEADER include/spdk/keyring.h 00:04:06.221 TEST_HEADER include/spdk/keyring_module.h 00:04:06.221 TEST_HEADER include/spdk/likely.h 00:04:06.221 TEST_HEADER include/spdk/log.h 00:04:06.221 TEST_HEADER include/spdk/lvol.h 00:04:06.221 TEST_HEADER include/spdk/md5.h 00:04:06.221 TEST_HEADER include/spdk/memory.h 00:04:06.221 TEST_HEADER include/spdk/mmio.h 00:04:06.221 TEST_HEADER include/spdk/nbd.h 00:04:06.221 TEST_HEADER include/spdk/net.h 00:04:06.221 TEST_HEADER include/spdk/notify.h 00:04:06.221 TEST_HEADER include/spdk/nvme_intel.h 00:04:06.221 TEST_HEADER include/spdk/nvme.h 00:04:06.221 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:06.221 TEST_HEADER include/spdk/nvme_spec.h 00:04:06.221 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:06.221 TEST_HEADER include/spdk/nvme_zns.h 00:04:06.221 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:06.221 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:06.221 TEST_HEADER include/spdk/nvmf.h 00:04:06.221 TEST_HEADER include/spdk/nvmf_spec.h 00:04:06.221 TEST_HEADER include/spdk/nvmf_transport.h 00:04:06.221 TEST_HEADER include/spdk/opal.h 00:04:06.221 TEST_HEADER include/spdk/opal_spec.h 00:04:06.221 TEST_HEADER include/spdk/pci_ids.h 00:04:06.221 TEST_HEADER include/spdk/pipe.h 00:04:06.221 TEST_HEADER include/spdk/queue.h 00:04:06.221 TEST_HEADER include/spdk/reduce.h 00:04:06.221 TEST_HEADER include/spdk/rpc.h 00:04:06.221 TEST_HEADER include/spdk/scheduler.h 00:04:06.221 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:06.221 TEST_HEADER include/spdk/scsi.h 00:04:06.221 TEST_HEADER include/spdk/scsi_spec.h 00:04:06.221 TEST_HEADER include/spdk/sock.h 00:04:06.221 TEST_HEADER include/spdk/stdinc.h 00:04:06.221 TEST_HEADER include/spdk/string.h 00:04:06.221 TEST_HEADER include/spdk/thread.h 00:04:06.221 TEST_HEADER include/spdk/trace.h 00:04:06.221 TEST_HEADER include/spdk/trace_parser.h 00:04:06.221 TEST_HEADER include/spdk/tree.h 00:04:06.221 TEST_HEADER include/spdk/ublk.h 00:04:06.221 TEST_HEADER include/spdk/util.h 00:04:06.221 TEST_HEADER include/spdk/uuid.h 00:04:06.221 TEST_HEADER include/spdk/version.h 00:04:06.221 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:06.221 TEST_HEADER 
include/spdk/vfio_user_spec.h 00:04:06.221 TEST_HEADER include/spdk/vhost.h 00:04:06.221 TEST_HEADER include/spdk/vmd.h 00:04:06.221 TEST_HEADER include/spdk/xor.h 00:04:06.221 TEST_HEADER include/spdk/zipf.h 00:04:06.221 CXX test/cpp_headers/accel.o 00:04:06.221 CXX test/cpp_headers/accel_module.o 00:04:06.221 CXX test/cpp_headers/assert.o 00:04:06.221 CC app/spdk_dd/spdk_dd.o 00:04:06.221 CXX test/cpp_headers/barrier.o 00:04:06.221 CXX test/cpp_headers/base64.o 00:04:06.221 CXX test/cpp_headers/bdev.o 00:04:06.221 CXX test/cpp_headers/bdev_module.o 00:04:06.221 CXX test/cpp_headers/bdev_zone.o 00:04:06.221 CXX test/cpp_headers/bit_array.o 00:04:06.221 CXX test/cpp_headers/bit_pool.o 00:04:06.221 CXX test/cpp_headers/blob_bdev.o 00:04:06.221 CXX test/cpp_headers/blobfs_bdev.o 00:04:06.221 CXX test/cpp_headers/blobfs.o 00:04:06.221 CXX test/cpp_headers/blob.o 00:04:06.221 CXX test/cpp_headers/conf.o 00:04:06.221 CXX test/cpp_headers/config.o 00:04:06.221 CXX test/cpp_headers/cpuset.o 00:04:06.221 CXX test/cpp_headers/crc16.o 00:04:06.221 CC app/nvmf_tgt/nvmf_main.o 00:04:06.221 CC app/iscsi_tgt/iscsi_tgt.o 00:04:06.221 CXX test/cpp_headers/crc32.o 00:04:06.221 CC examples/ioat/perf/perf.o 00:04:06.221 CC examples/util/zipf/zipf.o 00:04:06.221 CC test/env/pci/pci_ut.o 00:04:06.221 CC test/app/histogram_perf/histogram_perf.o 00:04:06.221 CC test/env/memory/memory_ut.o 00:04:06.221 CC test/app/stub/stub.o 00:04:06.221 CC app/spdk_tgt/spdk_tgt.o 00:04:06.221 CC examples/ioat/verify/verify.o 00:04:06.221 CC test/app/jsoncat/jsoncat.o 00:04:06.221 CC test/thread/poller_perf/poller_perf.o 00:04:06.221 CC test/env/vtophys/vtophys.o 00:04:06.221 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:06.221 CC app/fio/nvme/fio_plugin.o 00:04:06.486 CC test/dma/test_dma/test_dma.o 00:04:06.486 CC test/app/bdev_svc/bdev_svc.o 00:04:06.486 CC app/fio/bdev/fio_plugin.o 00:04:06.486 CC test/env/mem_callbacks/mem_callbacks.o 00:04:06.486 LINK spdk_lspci 00:04:06.486 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:06.486 LINK rpc_client_test 00:04:06.748 LINK spdk_nvme_discover 00:04:06.748 LINK interrupt_tgt 00:04:06.748 LINK jsoncat 00:04:06.748 CXX test/cpp_headers/crc64.o 00:04:06.748 LINK histogram_perf 00:04:06.748 CXX test/cpp_headers/dif.o 00:04:06.748 LINK vtophys 00:04:06.748 LINK poller_perf 00:04:06.748 LINK zipf 00:04:06.748 CXX test/cpp_headers/dma.o 00:04:06.748 LINK nvmf_tgt 00:04:06.748 LINK env_dpdk_post_init 00:04:06.748 LINK stub 00:04:06.748 CXX test/cpp_headers/endian.o 00:04:06.748 CXX test/cpp_headers/env_dpdk.o 00:04:06.748 CXX test/cpp_headers/env.o 00:04:06.748 CXX test/cpp_headers/event.o 00:04:06.748 CXX test/cpp_headers/fd_group.o 00:04:06.748 CXX test/cpp_headers/fd.o 00:04:06.748 CXX test/cpp_headers/file.o 00:04:06.748 CXX test/cpp_headers/fsdev.o 00:04:06.748 CXX test/cpp_headers/fsdev_module.o 00:04:06.748 CXX test/cpp_headers/ftl.o 00:04:06.748 CXX test/cpp_headers/fuse_dispatcher.o 00:04:06.748 LINK iscsi_tgt 00:04:06.748 CXX test/cpp_headers/gpt_spec.o 00:04:06.748 LINK spdk_trace_record 00:04:06.748 CXX test/cpp_headers/hexlify.o 00:04:06.748 LINK ioat_perf 00:04:06.748 LINK bdev_svc 00:04:06.748 LINK verify 00:04:06.748 CXX test/cpp_headers/histogram_data.o 00:04:06.748 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:06.748 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:06.748 LINK spdk_tgt 00:04:07.012 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:07.012 CXX test/cpp_headers/idxd.o 00:04:07.012 CXX test/cpp_headers/idxd_spec.o 00:04:07.012 CXX 
test/cpp_headers/init.o 00:04:07.012 CXX test/cpp_headers/ioat.o 00:04:07.012 CXX test/cpp_headers/ioat_spec.o 00:04:07.012 LINK spdk_dd 00:04:07.012 CXX test/cpp_headers/iscsi_spec.o 00:04:07.012 CXX test/cpp_headers/json.o 00:04:07.012 LINK spdk_trace 00:04:07.012 CXX test/cpp_headers/jsonrpc.o 00:04:07.012 CXX test/cpp_headers/keyring.o 00:04:07.012 CXX test/cpp_headers/keyring_module.o 00:04:07.012 CXX test/cpp_headers/likely.o 00:04:07.012 CXX test/cpp_headers/log.o 00:04:07.012 CXX test/cpp_headers/lvol.o 00:04:07.012 CXX test/cpp_headers/md5.o 00:04:07.012 CXX test/cpp_headers/memory.o 00:04:07.012 CXX test/cpp_headers/mmio.o 00:04:07.277 LINK pci_ut 00:04:07.277 CXX test/cpp_headers/nbd.o 00:04:07.277 CXX test/cpp_headers/net.o 00:04:07.277 CXX test/cpp_headers/notify.o 00:04:07.277 CXX test/cpp_headers/nvme.o 00:04:07.277 CXX test/cpp_headers/nvme_intel.o 00:04:07.277 CXX test/cpp_headers/nvme_ocssd.o 00:04:07.277 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:07.277 CXX test/cpp_headers/nvme_spec.o 00:04:07.277 CXX test/cpp_headers/nvme_zns.o 00:04:07.277 CXX test/cpp_headers/nvmf_cmd.o 00:04:07.277 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:07.277 CC test/event/reactor_perf/reactor_perf.o 00:04:07.277 CC examples/sock/hello_world/hello_sock.o 00:04:07.277 LINK nvme_fuzz 00:04:07.277 CC test/event/reactor/reactor.o 00:04:07.277 CC test/event/event_perf/event_perf.o 00:04:07.277 CXX test/cpp_headers/nvmf.o 00:04:07.277 CC examples/thread/thread/thread_ex.o 00:04:07.277 CXX test/cpp_headers/nvmf_spec.o 00:04:07.277 CC examples/vmd/lsvmd/lsvmd.o 00:04:07.277 CXX test/cpp_headers/nvmf_transport.o 00:04:07.538 CXX test/cpp_headers/opal.o 00:04:07.538 CC examples/vmd/led/led.o 00:04:07.538 LINK test_dma 00:04:07.538 LINK spdk_nvme 00:04:07.538 CXX test/cpp_headers/opal_spec.o 00:04:07.538 CC examples/idxd/perf/perf.o 00:04:07.538 LINK spdk_bdev 00:04:07.538 CC test/event/app_repeat/app_repeat.o 00:04:07.538 CXX test/cpp_headers/pci_ids.o 00:04:07.538 CXX test/cpp_headers/pipe.o 00:04:07.538 CXX test/cpp_headers/queue.o 00:04:07.538 CXX test/cpp_headers/reduce.o 00:04:07.538 CXX test/cpp_headers/rpc.o 00:04:07.538 CXX test/cpp_headers/scheduler.o 00:04:07.538 CXX test/cpp_headers/scsi.o 00:04:07.538 CXX test/cpp_headers/scsi_spec.o 00:04:07.538 CXX test/cpp_headers/sock.o 00:04:07.538 CXX test/cpp_headers/stdinc.o 00:04:07.538 CXX test/cpp_headers/string.o 00:04:07.538 CXX test/cpp_headers/thread.o 00:04:07.538 CXX test/cpp_headers/trace.o 00:04:07.538 CXX test/cpp_headers/trace_parser.o 00:04:07.538 CXX test/cpp_headers/tree.o 00:04:07.538 CC test/event/scheduler/scheduler.o 00:04:07.538 CXX test/cpp_headers/ublk.o 00:04:07.538 CXX test/cpp_headers/util.o 00:04:07.538 CXX test/cpp_headers/uuid.o 00:04:07.538 CXX test/cpp_headers/version.o 00:04:07.538 CXX test/cpp_headers/vfio_user_pci.o 00:04:07.798 CC app/vhost/vhost.o 00:04:07.798 CXX test/cpp_headers/vfio_user_spec.o 00:04:07.798 CXX test/cpp_headers/vhost.o 00:04:07.798 CXX test/cpp_headers/vmd.o 00:04:07.798 LINK reactor_perf 00:04:07.798 CXX test/cpp_headers/xor.o 00:04:07.798 LINK reactor 00:04:07.798 LINK event_perf 00:04:07.798 LINK lsvmd 00:04:07.798 LINK mem_callbacks 00:04:07.798 LINK spdk_nvme_perf 00:04:07.798 CXX test/cpp_headers/zipf.o 00:04:07.798 LINK vhost_fuzz 00:04:07.798 LINK led 00:04:07.798 LINK app_repeat 00:04:07.798 LINK spdk_nvme_identify 00:04:07.798 LINK hello_sock 00:04:07.798 LINK thread 00:04:08.057 LINK spdk_top 00:04:08.057 CC test/nvme/startup/startup.o 00:04:08.057 CC test/nvme/reset/reset.o 
00:04:08.057 CC test/nvme/aer/aer.o 00:04:08.057 CC test/nvme/err_injection/err_injection.o 00:04:08.057 CC test/nvme/sgl/sgl.o 00:04:08.057 CC test/nvme/reserve/reserve.o 00:04:08.057 CC test/nvme/overhead/overhead.o 00:04:08.057 CC test/nvme/e2edp/nvme_dp.o 00:04:08.057 CC test/nvme/connect_stress/connect_stress.o 00:04:08.057 CC test/nvme/simple_copy/simple_copy.o 00:04:08.057 LINK vhost 00:04:08.057 CC test/nvme/compliance/nvme_compliance.o 00:04:08.057 CC test/nvme/boot_partition/boot_partition.o 00:04:08.057 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:08.057 CC test/nvme/fdp/fdp.o 00:04:08.057 LINK scheduler 00:04:08.057 LINK idxd_perf 00:04:08.057 CC test/nvme/cuse/cuse.o 00:04:08.057 CC test/nvme/fused_ordering/fused_ordering.o 00:04:08.057 CC test/blobfs/mkfs/mkfs.o 00:04:08.057 CC test/accel/dif/dif.o 00:04:08.057 CC test/lvol/esnap/esnap.o 00:04:08.315 CC examples/nvme/abort/abort.o 00:04:08.315 CC examples/nvme/arbitration/arbitration.o 00:04:08.315 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:08.315 CC examples/nvme/hotplug/hotplug.o 00:04:08.315 CC examples/nvme/reconnect/reconnect.o 00:04:08.315 CC examples/nvme/hello_world/hello_world.o 00:04:08.315 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:08.315 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:08.315 LINK startup 00:04:08.315 LINK err_injection 00:04:08.315 LINK boot_partition 00:04:08.315 LINK doorbell_aers 00:04:08.315 LINK connect_stress 00:04:08.315 LINK simple_copy 00:04:08.315 LINK fused_ordering 00:04:08.574 CC examples/accel/perf/accel_perf.o 00:04:08.574 LINK aer 00:04:08.574 CC examples/blob/hello_world/hello_blob.o 00:04:08.574 LINK overhead 00:04:08.574 LINK reserve 00:04:08.574 LINK mkfs 00:04:08.574 LINK sgl 00:04:08.574 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:08.574 CC examples/blob/cli/blobcli.o 00:04:08.574 LINK reset 00:04:08.574 LINK nvme_compliance 00:04:08.574 LINK pmr_persistence 00:04:08.574 LINK nvme_dp 00:04:08.574 LINK cmb_copy 00:04:08.574 LINK hotplug 00:04:08.832 LINK hello_world 00:04:08.832 LINK fdp 00:04:08.832 LINK memory_ut 00:04:08.832 LINK hello_blob 00:04:08.832 LINK arbitration 00:04:08.832 LINK abort 00:04:08.832 LINK reconnect 00:04:08.832 LINK hello_fsdev 00:04:08.832 LINK dif 00:04:09.090 LINK nvme_manage 00:04:09.090 LINK accel_perf 00:04:09.090 LINK blobcli 00:04:09.349 LINK iscsi_fuzz 00:04:09.349 CC test/bdev/bdevio/bdevio.o 00:04:09.349 CC examples/bdev/hello_world/hello_bdev.o 00:04:09.349 CC examples/bdev/bdevperf/bdevperf.o 00:04:09.607 LINK hello_bdev 00:04:09.865 LINK bdevio 00:04:09.865 LINK cuse 00:04:10.123 LINK bdevperf 00:04:10.690 CC examples/nvmf/nvmf/nvmf.o 00:04:10.947 LINK nvmf 00:04:13.541 LINK esnap 00:04:13.800 00:04:13.800 real 1m8.500s 00:04:13.800 user 9m6.294s 00:04:13.800 sys 2m0.495s 00:04:13.800 11:16:13 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:04:13.800 11:16:13 make -- common/autotest_common.sh@10 -- $ set +x 00:04:13.800 ************************************ 00:04:13.800 END TEST make 00:04:13.800 ************************************ 00:04:13.800 11:16:13 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:13.800 11:16:13 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:13.800 11:16:13 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:13.800 11:16:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:13.800 11:16:13 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:04:13.800 11:16:13 -- 
pm/common@44 -- $ pid=3585431 00:04:13.800 11:16:13 -- pm/common@50 -- $ kill -TERM 3585431 00:04:13.800 11:16:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:13.800 11:16:13 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:04:13.800 11:16:13 -- pm/common@44 -- $ pid=3585433 00:04:13.800 11:16:13 -- pm/common@50 -- $ kill -TERM 3585433 00:04:13.800 11:16:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:13.800 11:16:13 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:04:13.800 11:16:13 -- pm/common@44 -- $ pid=3585435 00:04:13.800 11:16:13 -- pm/common@50 -- $ kill -TERM 3585435 00:04:13.800 11:16:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:13.800 11:16:13 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:04:13.800 11:16:13 -- pm/common@44 -- $ pid=3585466 00:04:13.800 11:16:13 -- pm/common@50 -- $ sudo -E kill -TERM 3585466 00:04:13.800 11:16:14 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:13.800 11:16:14 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:04:13.800 11:16:14 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:13.800 11:16:14 -- common/autotest_common.sh@1691 -- # lcov --version 00:04:13.800 11:16:14 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:13.800 11:16:14 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:13.800 11:16:14 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:13.800 11:16:14 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:13.800 11:16:14 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:13.800 11:16:14 -- scripts/common.sh@336 -- # IFS=.-: 00:04:13.800 11:16:14 -- scripts/common.sh@336 -- # read -ra ver1 00:04:13.800 11:16:14 -- scripts/common.sh@337 -- # IFS=.-: 00:04:13.800 11:16:14 -- scripts/common.sh@337 -- # read -ra ver2 00:04:13.800 11:16:14 -- scripts/common.sh@338 -- # local 'op=<' 00:04:13.800 11:16:14 -- scripts/common.sh@340 -- # ver1_l=2 00:04:13.800 11:16:14 -- scripts/common.sh@341 -- # ver2_l=1 00:04:13.800 11:16:14 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:13.800 11:16:14 -- scripts/common.sh@344 -- # case "$op" in 00:04:13.800 11:16:14 -- scripts/common.sh@345 -- # : 1 00:04:13.800 11:16:14 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:13.800 11:16:14 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:13.800 11:16:14 -- scripts/common.sh@365 -- # decimal 1 00:04:13.800 11:16:14 -- scripts/common.sh@353 -- # local d=1 00:04:13.800 11:16:14 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:13.800 11:16:14 -- scripts/common.sh@355 -- # echo 1 00:04:13.800 11:16:14 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:13.800 11:16:14 -- scripts/common.sh@366 -- # decimal 2 00:04:13.800 11:16:14 -- scripts/common.sh@353 -- # local d=2 00:04:13.800 11:16:14 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:13.800 11:16:14 -- scripts/common.sh@355 -- # echo 2 00:04:13.800 11:16:14 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:13.800 11:16:14 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:13.800 11:16:14 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:13.800 11:16:14 -- scripts/common.sh@368 -- # return 0 00:04:13.800 11:16:14 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:13.800 11:16:14 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:13.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.800 --rc genhtml_branch_coverage=1 00:04:13.800 --rc genhtml_function_coverage=1 00:04:13.800 --rc genhtml_legend=1 00:04:13.800 --rc geninfo_all_blocks=1 00:04:13.800 --rc geninfo_unexecuted_blocks=1 00:04:13.800 00:04:13.800 ' 00:04:13.800 11:16:14 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:13.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.800 --rc genhtml_branch_coverage=1 00:04:13.800 --rc genhtml_function_coverage=1 00:04:13.800 --rc genhtml_legend=1 00:04:13.800 --rc geninfo_all_blocks=1 00:04:13.800 --rc geninfo_unexecuted_blocks=1 00:04:13.800 00:04:13.800 ' 00:04:13.800 11:16:14 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:13.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.800 --rc genhtml_branch_coverage=1 00:04:13.800 --rc genhtml_function_coverage=1 00:04:13.800 --rc genhtml_legend=1 00:04:13.800 --rc geninfo_all_blocks=1 00:04:13.800 --rc geninfo_unexecuted_blocks=1 00:04:13.800 00:04:13.800 ' 00:04:13.800 11:16:14 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:13.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.800 --rc genhtml_branch_coverage=1 00:04:13.800 --rc genhtml_function_coverage=1 00:04:13.800 --rc genhtml_legend=1 00:04:13.800 --rc geninfo_all_blocks=1 00:04:13.800 --rc geninfo_unexecuted_blocks=1 00:04:13.800 00:04:13.800 ' 00:04:13.801 11:16:14 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:13.801 11:16:14 -- nvmf/common.sh@7 -- # uname -s 00:04:13.801 11:16:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:13.801 11:16:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:13.801 11:16:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:13.801 11:16:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:13.801 11:16:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:13.801 11:16:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:13.801 11:16:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:13.801 11:16:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:13.801 11:16:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:13.801 11:16:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:13.801 11:16:14 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:13.801 11:16:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:13.801 11:16:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:13.801 11:16:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:13.801 11:16:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:13.801 11:16:14 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:13.801 11:16:14 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:13.801 11:16:14 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:13.801 11:16:14 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:13.801 11:16:14 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:13.801 11:16:14 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:13.801 11:16:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:13.801 11:16:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:13.801 11:16:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:13.801 11:16:14 -- paths/export.sh@5 -- # export PATH 00:04:13.801 11:16:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:13.801 11:16:14 -- nvmf/common.sh@51 -- # : 0 00:04:13.801 11:16:14 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:13.801 11:16:14 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:13.801 11:16:14 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:13.801 11:16:14 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:13.801 11:16:14 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:13.801 11:16:14 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:13.801 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:13.801 11:16:14 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:13.801 11:16:14 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:13.801 11:16:14 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:13.801 11:16:14 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:13.801 11:16:14 -- spdk/autotest.sh@32 -- # uname -s 00:04:13.801 11:16:14 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:13.801 11:16:14 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:13.801 11:16:14 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
00:04:13.801 11:16:14 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:04:13.801 11:16:14 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:13.801 11:16:14 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:13.801 11:16:14 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:13.801 11:16:14 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:13.801 11:16:14 -- spdk/autotest.sh@48 -- # udevadm_pid=3667382 00:04:13.801 11:16:14 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:13.801 11:16:14 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:13.801 11:16:14 -- pm/common@17 -- # local monitor 00:04:13.801 11:16:14 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:13.801 11:16:14 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:13.801 11:16:14 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:13.801 11:16:14 -- pm/common@21 -- # date +%s 00:04:13.801 11:16:14 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:13.801 11:16:14 -- pm/common@21 -- # date +%s 00:04:13.801 11:16:14 -- pm/common@25 -- # sleep 1 00:04:13.801 11:16:14 -- pm/common@21 -- # date +%s 00:04:13.801 11:16:14 -- pm/common@21 -- # date +%s 00:04:13.801 11:16:14 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730542574 00:04:13.801 11:16:14 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730542574 00:04:13.801 11:16:14 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730542574 00:04:13.801 11:16:14 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730542574 00:04:13.801 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730542574_collect-cpu-load.pm.log 00:04:13.801 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730542574_collect-vmstat.pm.log 00:04:13.801 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730542574_collect-cpu-temp.pm.log 00:04:14.059 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730542574_collect-bmc-pm.bmc.pm.log 00:04:14.993 11:16:15 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:14.993 11:16:15 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:14.993 11:16:15 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:14.993 11:16:15 -- common/autotest_common.sh@10 -- # set +x 00:04:14.993 11:16:15 -- spdk/autotest.sh@59 -- # create_test_list 00:04:14.993 11:16:15 -- common/autotest_common.sh@750 -- # xtrace_disable 00:04:14.993 11:16:15 -- common/autotest_common.sh@10 -- # set +x 00:04:14.993 11:16:15 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:04:14.993 11:16:15 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:14.993 11:16:15 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:14.993 11:16:15 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:04:14.993 11:16:15 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:14.993 11:16:15 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:14.993 11:16:15 -- common/autotest_common.sh@1455 -- # uname 00:04:14.993 11:16:15 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:14.993 11:16:15 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:14.993 11:16:15 -- common/autotest_common.sh@1475 -- # uname 00:04:14.993 11:16:15 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:14.993 11:16:15 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:14.993 11:16:15 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:14.993 lcov: LCOV version 1.15 00:04:14.993 11:16:15 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:47.058 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:47.058 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:52.318 11:16:52 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:52.318 11:16:52 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:52.318 11:16:52 -- common/autotest_common.sh@10 -- # set +x 00:04:52.318 11:16:52 -- spdk/autotest.sh@78 -- # rm -f 00:04:52.318 11:16:52 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:53.691 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:04:53.691 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:04:53.691 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:04:53.691 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:04:53.691 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:04:53.691 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:04:53.691 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:04:53.691 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:04:53.691 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:04:53.691 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:04:53.691 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:04:53.691 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:04:53.691 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:04:53.691 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:04:53.691 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:04:53.691 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:04:53.691 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:04:53.691 11:16:53 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:04:53.691 11:16:53 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:04:53.691 11:16:53 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:04:53.691 11:16:53 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:04:53.691 11:16:53 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:53.691 11:16:53 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:04:53.691 11:16:53 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:04:53.691 11:16:53 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:53.691 11:16:53 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:53.691 11:16:53 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:53.691 11:16:53 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:53.691 11:16:53 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:53.691 11:16:53 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:53.691 11:16:53 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:53.691 11:16:53 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:53.691 No valid GPT data, bailing 00:04:53.691 11:16:54 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:53.691 11:16:54 -- scripts/common.sh@394 -- # pt= 00:04:53.691 11:16:54 -- scripts/common.sh@395 -- # return 1 00:04:53.691 11:16:54 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:53.691 1+0 records in 00:04:53.691 1+0 records out 00:04:53.691 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00203409 s, 516 MB/s 00:04:53.691 11:16:54 -- spdk/autotest.sh@105 -- # sync 00:04:53.691 11:16:54 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:53.691 11:16:54 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:53.691 11:16:54 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:55.592 11:16:55 -- spdk/autotest.sh@111 -- # uname -s 00:04:55.592 11:16:55 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:55.592 11:16:55 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:55.592 11:16:55 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:56.966 Hugepages 00:04:56.966 node hugesize free / total 00:04:56.966 node0 1048576kB 0 / 0 00:04:56.966 node0 2048kB 0 / 0 00:04:56.966 node1 1048576kB 0 / 0 00:04:56.966 node1 2048kB 0 / 0 00:04:56.966 00:04:56.966 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:56.966 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:04:56.966 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:04:56.966 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:04:56.966 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:04:56.966 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:04:56.966 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:04:56.966 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:04:56.966 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:04:56.966 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:04:56.966 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:04:56.966 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:04:56.966 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:04:56.966 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:04:56.966 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:04:56.966 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:04:56.966 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:04:56.966 NVMe 0000:88:00.0 8086 0a54 1 
nvme nvme0 nvme0n1 00:04:56.966 11:16:57 -- spdk/autotest.sh@117 -- # uname -s 00:04:56.966 11:16:57 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:56.966 11:16:57 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:56.966 11:16:57 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:58.343 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:58.343 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:58.343 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:58.343 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:58.343 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:58.343 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:58.343 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:58.343 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:58.343 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:58.343 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:58.343 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:58.343 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:58.343 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:58.343 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:58.343 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:58.343 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:59.282 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:59.282 11:16:59 -- common/autotest_common.sh@1515 -- # sleep 1 00:05:00.220 11:17:00 -- common/autotest_common.sh@1516 -- # bdfs=() 00:05:00.220 11:17:00 -- common/autotest_common.sh@1516 -- # local bdfs 00:05:00.220 11:17:00 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:05:00.220 11:17:00 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:05:00.220 11:17:00 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:00.220 11:17:00 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:00.220 11:17:00 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:00.220 11:17:00 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:00.220 11:17:00 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:00.220 11:17:00 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:05:00.220 11:17:00 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:88:00.0 00:05:00.220 11:17:00 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:01.596 Waiting for block devices as requested 00:05:01.596 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:05:01.596 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:01.596 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:01.596 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:01.857 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:01.857 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:01.857 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:01.857 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:02.117 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:02.117 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:02.117 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:02.117 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:02.376 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:02.376 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:02.376 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:02.376 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:02.634 0000:80:04.0 (8086 0e20): 
vfio-pci -> ioatdma 00:05:02.634 11:17:02 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:02.634 11:17:02 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:05:02.634 11:17:02 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:05:02.634 11:17:02 -- common/autotest_common.sh@1485 -- # grep 0000:88:00.0/nvme/nvme 00:05:02.634 11:17:02 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:02.634 11:17:02 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:05:02.634 11:17:02 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:02.634 11:17:02 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:05:02.634 11:17:02 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:05:02.634 11:17:02 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:05:02.634 11:17:02 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:05:02.634 11:17:02 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:02.634 11:17:02 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:02.634 11:17:02 -- common/autotest_common.sh@1529 -- # oacs=' 0xf' 00:05:02.634 11:17:02 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:02.634 11:17:02 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:02.634 11:17:02 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:05:02.634 11:17:02 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:02.634 11:17:02 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:02.634 11:17:02 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:02.634 11:17:02 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:02.634 11:17:02 -- common/autotest_common.sh@1541 -- # continue 00:05:02.634 11:17:02 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:02.635 11:17:02 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:02.635 11:17:02 -- common/autotest_common.sh@10 -- # set +x 00:05:02.635 11:17:03 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:02.635 11:17:03 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:02.635 11:17:03 -- common/autotest_common.sh@10 -- # set +x 00:05:02.635 11:17:03 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:04.008 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:04.008 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:04.008 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:04.008 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:04.008 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:04.008 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:04.008 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:04.008 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:04.008 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:04.008 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:04.008 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:04.008 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:04.008 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:04.266 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:04.266 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:04.266 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:04.834 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:05.092 11:17:05 -- spdk/autotest.sh@127 -- # timing_exit 
afterboot 00:05:05.092 11:17:05 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:05.092 11:17:05 -- common/autotest_common.sh@10 -- # set +x 00:05:05.092 11:17:05 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:05.092 11:17:05 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:05.092 11:17:05 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:05.092 11:17:05 -- common/autotest_common.sh@1561 -- # bdfs=() 00:05:05.092 11:17:05 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:05:05.092 11:17:05 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:05:05.092 11:17:05 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:05:05.092 11:17:05 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:05:05.092 11:17:05 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:05.092 11:17:05 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:05.092 11:17:05 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:05.092 11:17:05 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:05.092 11:17:05 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:05.351 11:17:05 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:05:05.351 11:17:05 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:88:00.0 00:05:05.351 11:17:05 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:05.351 11:17:05 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:05:05.351 11:17:05 -- common/autotest_common.sh@1564 -- # device=0x0a54 00:05:05.351 11:17:05 -- common/autotest_common.sh@1565 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:05.351 11:17:05 -- common/autotest_common.sh@1566 -- # bdfs+=($bdf) 00:05:05.351 11:17:05 -- common/autotest_common.sh@1570 -- # (( 1 > 0 )) 00:05:05.351 11:17:05 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:88:00.0 00:05:05.351 11:17:05 -- common/autotest_common.sh@1577 -- # [[ -z 0000:88:00.0 ]] 00:05:05.351 11:17:05 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=3678069 00:05:05.351 11:17:05 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:05.351 11:17:05 -- common/autotest_common.sh@1583 -- # waitforlisten 3678069 00:05:05.351 11:17:05 -- common/autotest_common.sh@833 -- # '[' -z 3678069 ']' 00:05:05.351 11:17:05 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.351 11:17:05 -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:05.351 11:17:05 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:05.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:05.351 11:17:05 -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:05.351 11:17:05 -- common/autotest_common.sh@10 -- # set +x 00:05:05.351 [2024-11-02 11:17:05.568587] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 
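The controller discovery just traced (gen_nvme.sh piped through jq, then a sysfs device-ID check) condenses to a minimal bash sketch; the paths and the 0x0a54 filter are the same ones shown in the trace, everything else is simplified:

    # Sketch of the NVMe BDF lookup traced above: gen_nvme.sh emits an
    # attach-controller config, jq extracts each PCI address, and the sysfs
    # "device" file filters for the 0x0a54 controllers the opal test wants.
    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    mapfile -t all_bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')
    bdfs=()
    for bdf in "${all_bdfs[@]}"; do
        [[ $(cat "/sys/bus/pci/devices/$bdf/device") == 0x0a54 ]] && bdfs+=("$bdf")
    done
    printf '%s\n' "${bdfs[@]}"   # on this host: 0000:88:00.0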
00:05:05.351 [2024-11-02 11:17:05.568696] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3678069 ] 00:05:05.351 [2024-11-02 11:17:05.641051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.351 [2024-11-02 11:17:05.689920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.609 11:17:05 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:05.609 11:17:05 -- common/autotest_common.sh@866 -- # return 0 00:05:05.609 11:17:05 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:05:05.609 11:17:05 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:05:05.609 11:17:05 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:05:08.893 nvme0n1 00:05:08.893 11:17:09 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:09.150 [2024-11-02 11:17:09.300411] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:05:09.150 [2024-11-02 11:17:09.300462] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:05:09.150 request: 00:05:09.150 { 00:05:09.150 "nvme_ctrlr_name": "nvme0", 00:05:09.150 "password": "test", 00:05:09.150 "method": "bdev_nvme_opal_revert", 00:05:09.150 "req_id": 1 00:05:09.150 } 00:05:09.150 Got JSON-RPC error response 00:05:09.150 response: 00:05:09.150 { 00:05:09.150 "code": -32603, 00:05:09.150 "message": "Internal error" 00:05:09.150 } 00:05:09.150 11:17:09 -- common/autotest_common.sh@1589 -- # true 00:05:09.150 11:17:09 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:05:09.150 11:17:09 -- common/autotest_common.sh@1593 -- # killprocess 3678069 00:05:09.150 11:17:09 -- common/autotest_common.sh@952 -- # '[' -z 3678069 ']' 00:05:09.150 11:17:09 -- common/autotest_common.sh@956 -- # kill -0 3678069 00:05:09.150 11:17:09 -- common/autotest_common.sh@957 -- # uname 00:05:09.150 11:17:09 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:09.150 11:17:09 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3678069 00:05:09.150 11:17:09 -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:09.150 11:17:09 -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:09.150 11:17:09 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3678069' 00:05:09.150 killing process with pid 3678069 00:05:09.150 11:17:09 -- common/autotest_common.sh@971 -- # kill 3678069 00:05:09.150 11:17:09 -- common/autotest_common.sh@976 -- # wait 3678069 00:05:11.047 11:17:11 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:11.047 11:17:11 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:11.047 11:17:11 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:11.047 11:17:11 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:11.047 11:17:11 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:11.047 11:17:11 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:11.047 11:17:11 -- common/autotest_common.sh@10 -- # set +x 00:05:11.047 11:17:11 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:11.047 11:17:11 -- spdk/autotest.sh@155 -- # run_test env 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:11.047 11:17:11 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:11.047 11:17:11 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:11.047 11:17:11 -- common/autotest_common.sh@10 -- # set +x 00:05:11.047 ************************************ 00:05:11.047 START TEST env 00:05:11.047 ************************************ 00:05:11.047 11:17:11 env -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:11.047 * Looking for test storage... 00:05:11.047 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:11.047 11:17:11 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:11.047 11:17:11 env -- common/autotest_common.sh@1691 -- # lcov --version 00:05:11.047 11:17:11 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:11.047 11:17:11 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:11.047 11:17:11 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:11.047 11:17:11 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:11.047 11:17:11 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:11.047 11:17:11 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:11.047 11:17:11 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:11.047 11:17:11 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:11.047 11:17:11 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:11.047 11:17:11 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:11.047 11:17:11 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:11.047 11:17:11 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:11.047 11:17:11 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:11.047 11:17:11 env -- scripts/common.sh@344 -- # case "$op" in 00:05:11.047 11:17:11 env -- scripts/common.sh@345 -- # : 1 00:05:11.047 11:17:11 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:11.047 11:17:11 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:11.047 11:17:11 env -- scripts/common.sh@365 -- # decimal 1 00:05:11.047 11:17:11 env -- scripts/common.sh@353 -- # local d=1 00:05:11.047 11:17:11 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:11.047 11:17:11 env -- scripts/common.sh@355 -- # echo 1 00:05:11.047 11:17:11 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:11.047 11:17:11 env -- scripts/common.sh@366 -- # decimal 2 00:05:11.047 11:17:11 env -- scripts/common.sh@353 -- # local d=2 00:05:11.047 11:17:11 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:11.047 11:17:11 env -- scripts/common.sh@355 -- # echo 2 00:05:11.047 11:17:11 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:11.047 11:17:11 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:11.047 11:17:11 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:11.047 11:17:11 env -- scripts/common.sh@368 -- # return 0 00:05:11.047 11:17:11 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:11.047 11:17:11 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:11.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.047 --rc genhtml_branch_coverage=1 00:05:11.047 --rc genhtml_function_coverage=1 00:05:11.047 --rc genhtml_legend=1 00:05:11.047 --rc geninfo_all_blocks=1 00:05:11.047 --rc geninfo_unexecuted_blocks=1 00:05:11.047 00:05:11.047 ' 00:05:11.047 11:17:11 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:11.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.047 --rc genhtml_branch_coverage=1 00:05:11.047 --rc genhtml_function_coverage=1 00:05:11.047 --rc genhtml_legend=1 00:05:11.047 --rc geninfo_all_blocks=1 00:05:11.047 --rc geninfo_unexecuted_blocks=1 00:05:11.047 00:05:11.047 ' 00:05:11.047 11:17:11 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:11.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.047 --rc genhtml_branch_coverage=1 00:05:11.047 --rc genhtml_function_coverage=1 00:05:11.047 --rc genhtml_legend=1 00:05:11.047 --rc geninfo_all_blocks=1 00:05:11.047 --rc geninfo_unexecuted_blocks=1 00:05:11.047 00:05:11.048 ' 00:05:11.048 11:17:11 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:11.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.048 --rc genhtml_branch_coverage=1 00:05:11.048 --rc genhtml_function_coverage=1 00:05:11.048 --rc genhtml_legend=1 00:05:11.048 --rc geninfo_all_blocks=1 00:05:11.048 --rc geninfo_unexecuted_blocks=1 00:05:11.048 00:05:11.048 ' 00:05:11.048 11:17:11 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:11.048 11:17:11 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:11.048 11:17:11 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:11.048 11:17:11 env -- common/autotest_common.sh@10 -- # set +x 00:05:11.048 ************************************ 00:05:11.048 START TEST env_memory 00:05:11.048 ************************************ 00:05:11.048 11:17:11 env.env_memory -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:11.048 00:05:11.048 00:05:11.048 CUnit - A unit testing framework for C - Version 2.1-3 00:05:11.048 http://cunit.sourceforge.net/ 00:05:11.048 00:05:11.048 00:05:11.048 Suite: memory 00:05:11.048 Test: alloc and free memory map ...[2024-11-02 11:17:11.331063] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:11.048 passed 00:05:11.048 Test: mem map translation ...[2024-11-02 11:17:11.352047] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:11.048 [2024-11-02 11:17:11.352069] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:11.048 [2024-11-02 11:17:11.352119] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:11.048 [2024-11-02 11:17:11.352132] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:11.048 passed 00:05:11.048 Test: mem map registration ...[2024-11-02 11:17:11.395451] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:11.048 [2024-11-02 11:17:11.395471] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:11.048 passed 00:05:11.310 Test: mem map adjacent registrations ...passed 00:05:11.310 00:05:11.310 Run Summary: Type Total Ran Passed Failed Inactive 00:05:11.310 suites 1 1 n/a 0 0 00:05:11.310 tests 4 4 4 0 0 00:05:11.310 asserts 152 152 152 0 n/a 00:05:11.310 00:05:11.310 Elapsed time = 0.148 seconds 00:05:11.310 00:05:11.310 real 0m0.156s 00:05:11.310 user 0m0.149s 00:05:11.310 sys 0m0.006s 00:05:11.310 11:17:11 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:11.310 11:17:11 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:11.310 ************************************ 00:05:11.310 END TEST env_memory 00:05:11.310 ************************************ 00:05:11.310 11:17:11 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:11.310 11:17:11 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:11.310 11:17:11 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:11.310 11:17:11 env -- common/autotest_common.sh@10 -- # set +x 00:05:11.310 ************************************ 00:05:11.310 START TEST env_vtophys 00:05:11.310 ************************************ 00:05:11.310 11:17:11 env.env_vtophys -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:11.310 EAL: lib.eal log level changed from notice to debug 00:05:11.310 EAL: Detected lcore 0 as core 0 on socket 0 00:05:11.310 EAL: Detected lcore 1 as core 1 on socket 0 00:05:11.310 EAL: Detected lcore 2 as core 2 on socket 0 00:05:11.310 EAL: Detected lcore 3 as core 3 on socket 0 00:05:11.310 EAL: Detected lcore 4 as core 4 on socket 0 00:05:11.310 EAL: Detected lcore 5 as core 5 on socket 0 00:05:11.310 EAL: Detected lcore 6 as core 8 on socket 0 00:05:11.310 EAL: Detected lcore 7 as core 9 on socket 0 00:05:11.310 EAL: Detected lcore 8 as core 10 on socket 0 00:05:11.310 EAL: Detected lcore 9 as core 11 on socket 0 00:05:11.310 EAL: Detected lcore 10 
as core 12 on socket 0 00:05:11.310 EAL: Detected lcore 11 as core 13 on socket 0 00:05:11.310 EAL: Detected lcore 12 as core 0 on socket 1 00:05:11.310 EAL: Detected lcore 13 as core 1 on socket 1 00:05:11.310 EAL: Detected lcore 14 as core 2 on socket 1 00:05:11.310 EAL: Detected lcore 15 as core 3 on socket 1 00:05:11.310 EAL: Detected lcore 16 as core 4 on socket 1 00:05:11.310 EAL: Detected lcore 17 as core 5 on socket 1 00:05:11.310 EAL: Detected lcore 18 as core 8 on socket 1 00:05:11.310 EAL: Detected lcore 19 as core 9 on socket 1 00:05:11.310 EAL: Detected lcore 20 as core 10 on socket 1 00:05:11.310 EAL: Detected lcore 21 as core 11 on socket 1 00:05:11.310 EAL: Detected lcore 22 as core 12 on socket 1 00:05:11.310 EAL: Detected lcore 23 as core 13 on socket 1 00:05:11.310 EAL: Detected lcore 24 as core 0 on socket 0 00:05:11.310 EAL: Detected lcore 25 as core 1 on socket 0 00:05:11.310 EAL: Detected lcore 26 as core 2 on socket 0 00:05:11.310 EAL: Detected lcore 27 as core 3 on socket 0 00:05:11.310 EAL: Detected lcore 28 as core 4 on socket 0 00:05:11.310 EAL: Detected lcore 29 as core 5 on socket 0 00:05:11.310 EAL: Detected lcore 30 as core 8 on socket 0 00:05:11.310 EAL: Detected lcore 31 as core 9 on socket 0 00:05:11.310 EAL: Detected lcore 32 as core 10 on socket 0 00:05:11.310 EAL: Detected lcore 33 as core 11 on socket 0 00:05:11.310 EAL: Detected lcore 34 as core 12 on socket 0 00:05:11.310 EAL: Detected lcore 35 as core 13 on socket 0 00:05:11.310 EAL: Detected lcore 36 as core 0 on socket 1 00:05:11.310 EAL: Detected lcore 37 as core 1 on socket 1 00:05:11.310 EAL: Detected lcore 38 as core 2 on socket 1 00:05:11.310 EAL: Detected lcore 39 as core 3 on socket 1 00:05:11.310 EAL: Detected lcore 40 as core 4 on socket 1 00:05:11.310 EAL: Detected lcore 41 as core 5 on socket 1 00:05:11.310 EAL: Detected lcore 42 as core 8 on socket 1 00:05:11.310 EAL: Detected lcore 43 as core 9 on socket 1 00:05:11.310 EAL: Detected lcore 44 as core 10 on socket 1 00:05:11.310 EAL: Detected lcore 45 as core 11 on socket 1 00:05:11.310 EAL: Detected lcore 46 as core 12 on socket 1 00:05:11.310 EAL: Detected lcore 47 as core 13 on socket 1 00:05:11.310 EAL: Maximum logical cores by configuration: 128 00:05:11.310 EAL: Detected CPU lcores: 48 00:05:11.310 EAL: Detected NUMA nodes: 2 00:05:11.310 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:11.311 EAL: Detected shared linkage of DPDK 00:05:11.311 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:11.311 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:11.311 EAL: Registered [vdev] bus. 
00:05:11.311 EAL: bus.vdev log level changed from disabled to notice 00:05:11.311 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:11.311 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:11.311 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:11.311 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:11.311 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:11.311 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:11.311 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:11.311 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:11.311 EAL: No shared files mode enabled, IPC will be disabled 00:05:11.311 EAL: No shared files mode enabled, IPC is disabled 00:05:11.311 EAL: Bus pci wants IOVA as 'DC' 00:05:11.311 EAL: Bus vdev wants IOVA as 'DC' 00:05:11.311 EAL: Buses did not request a specific IOVA mode. 00:05:11.311 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:11.311 EAL: Selected IOVA mode 'VA' 00:05:11.311 EAL: Probing VFIO support... 00:05:11.311 EAL: IOMMU type 1 (Type 1) is supported 00:05:11.311 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:11.311 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:11.311 EAL: VFIO support initialized 00:05:11.311 EAL: Ask a virtual area of 0x2e000 bytes 00:05:11.311 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:11.311 EAL: Setting up physically contiguous memory... 
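EAL settles on IOVA-as-VA here because an IOMMU is present and VFIO initializes; a rough host-side check for the same preconditions (an assumption-based sketch, not part of the test scripts themselves) is:

    # Populated IOMMU groups imply the kernel IOMMU is active, which is what
    # lets EAL pick IOVA mode 'VA'; a loadable vfio-pci module backs the
    # "VFIO support initialized" message above.
    if compgen -G '/sys/kernel/iommu_groups/*' > /dev/null; then
        echo "IOMMU active: IOVA as VA available"
    else
        echo "no IOMMU groups: expect IOVA as PA"
    fi
    modprobe --dry-run vfio-pci && echo "vfio-pci available"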
00:05:11.311 EAL: Setting maximum number of open files to 524288 00:05:11.311 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:11.311 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:11.311 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:11.311 EAL: Ask a virtual area of 0x61000 bytes 00:05:11.311 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:11.311 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:11.311 EAL: Ask a virtual area of 0x400000000 bytes 00:05:11.311 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:11.311 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:11.311 EAL: Ask a virtual area of 0x61000 bytes 00:05:11.311 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:11.311 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:11.311 EAL: Ask a virtual area of 0x400000000 bytes 00:05:11.311 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:11.311 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:11.311 EAL: Ask a virtual area of 0x61000 bytes 00:05:11.311 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:11.311 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:11.311 EAL: Ask a virtual area of 0x400000000 bytes 00:05:11.311 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:11.311 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:11.311 EAL: Ask a virtual area of 0x61000 bytes 00:05:11.311 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:11.311 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:11.311 EAL: Ask a virtual area of 0x400000000 bytes 00:05:11.311 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:11.311 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:11.311 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:11.311 EAL: Ask a virtual area of 0x61000 bytes 00:05:11.311 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:11.311 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:11.311 EAL: Ask a virtual area of 0x400000000 bytes 00:05:11.311 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:11.311 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:11.311 EAL: Ask a virtual area of 0x61000 bytes 00:05:11.311 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:11.311 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:11.311 EAL: Ask a virtual area of 0x400000000 bytes 00:05:11.311 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:11.311 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:11.311 EAL: Ask a virtual area of 0x61000 bytes 00:05:11.311 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:11.311 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:11.311 EAL: Ask a virtual area of 0x400000000 bytes 00:05:11.311 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:11.311 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:11.311 EAL: Ask a virtual area of 0x61000 bytes 00:05:11.311 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:11.311 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:11.311 EAL: Ask a virtual area of 0x400000000 bytes 00:05:11.311 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:11.311 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:11.311 EAL: Hugepages will be freed exactly as allocated. 00:05:11.311 EAL: No shared files mode enabled, IPC is disabled 00:05:11.311 EAL: No shared files mode enabled, IPC is disabled 00:05:11.311 EAL: TSC frequency is ~2700000 KHz 00:05:11.311 EAL: Main lcore 0 is ready (tid=7fe7f2319a00;cpuset=[0]) 00:05:11.311 EAL: Trying to obtain current memory policy. 00:05:11.311 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.311 EAL: Restoring previous memory policy: 0 00:05:11.311 EAL: request: mp_malloc_sync 00:05:11.311 EAL: No shared files mode enabled, IPC is disabled 00:05:11.311 EAL: Heap on socket 0 was expanded by 2MB 00:05:11.311 EAL: No shared files mode enabled, IPC is disabled 00:05:11.311 EAL: No shared files mode enabled, IPC is disabled 00:05:11.311 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:11.311 EAL: Mem event callback 'spdk:(nil)' registered 00:05:11.311 00:05:11.311 00:05:11.311 CUnit - A unit testing framework for C - Version 2.1-3 00:05:11.311 http://cunit.sourceforge.net/ 00:05:11.311 00:05:11.311 00:05:11.311 Suite: components_suite 00:05:11.311 Test: vtophys_malloc_test ...passed 00:05:11.311 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:11.311 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.311 EAL: Restoring previous memory policy: 4 00:05:11.311 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.311 EAL: request: mp_malloc_sync 00:05:11.311 EAL: No shared files mode enabled, IPC is disabled 00:05:11.311 EAL: Heap on socket 0 was expanded by 4MB 00:05:11.311 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.311 EAL: request: mp_malloc_sync 00:05:11.311 EAL: No shared files mode enabled, IPC is disabled 00:05:11.311 EAL: Heap on socket 0 was shrunk by 4MB 00:05:11.311 EAL: Trying to obtain current memory policy. 00:05:11.311 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.311 EAL: Restoring previous memory policy: 4 00:05:11.311 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.311 EAL: request: mp_malloc_sync 00:05:11.311 EAL: No shared files mode enabled, IPC is disabled 00:05:11.311 EAL: Heap on socket 0 was expanded by 6MB 00:05:11.311 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.311 EAL: request: mp_malloc_sync 00:05:11.311 EAL: No shared files mode enabled, IPC is disabled 00:05:11.311 EAL: Heap on socket 0 was shrunk by 6MB 00:05:11.311 EAL: Trying to obtain current memory policy. 00:05:11.311 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.311 EAL: Restoring previous memory policy: 4 00:05:11.311 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.311 EAL: request: mp_malloc_sync 00:05:11.311 EAL: No shared files mode enabled, IPC is disabled 00:05:11.311 EAL: Heap on socket 0 was expanded by 10MB 00:05:11.311 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.311 EAL: request: mp_malloc_sync 00:05:11.311 EAL: No shared files mode enabled, IPC is disabled 00:05:11.311 EAL: Heap on socket 0 was shrunk by 10MB 00:05:11.311 EAL: Trying to obtain current memory policy. 
00:05:11.311 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.311 EAL: Restoring previous memory policy: 4 00:05:11.311 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.311 EAL: request: mp_malloc_sync 00:05:11.311 EAL: No shared files mode enabled, IPC is disabled 00:05:11.311 EAL: Heap on socket 0 was expanded by 18MB 00:05:11.311 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.311 EAL: request: mp_malloc_sync 00:05:11.311 EAL: No shared files mode enabled, IPC is disabled 00:05:11.311 EAL: Heap on socket 0 was shrunk by 18MB 00:05:11.311 EAL: Trying to obtain current memory policy. 00:05:11.311 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.311 EAL: Restoring previous memory policy: 4 00:05:11.311 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.311 EAL: request: mp_malloc_sync 00:05:11.311 EAL: No shared files mode enabled, IPC is disabled 00:05:11.311 EAL: Heap on socket 0 was expanded by 34MB 00:05:11.311 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.311 EAL: request: mp_malloc_sync 00:05:11.311 EAL: No shared files mode enabled, IPC is disabled 00:05:11.311 EAL: Heap on socket 0 was shrunk by 34MB 00:05:11.311 EAL: Trying to obtain current memory policy. 00:05:11.311 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.311 EAL: Restoring previous memory policy: 4 00:05:11.311 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.311 EAL: request: mp_malloc_sync 00:05:11.311 EAL: No shared files mode enabled, IPC is disabled 00:05:11.311 EAL: Heap on socket 0 was expanded by 66MB 00:05:11.311 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.311 EAL: request: mp_malloc_sync 00:05:11.311 EAL: No shared files mode enabled, IPC is disabled 00:05:11.311 EAL: Heap on socket 0 was shrunk by 66MB 00:05:11.312 EAL: Trying to obtain current memory policy. 00:05:11.312 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.312 EAL: Restoring previous memory policy: 4 00:05:11.312 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.312 EAL: request: mp_malloc_sync 00:05:11.312 EAL: No shared files mode enabled, IPC is disabled 00:05:11.312 EAL: Heap on socket 0 was expanded by 130MB 00:05:11.609 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.609 EAL: request: mp_malloc_sync 00:05:11.609 EAL: No shared files mode enabled, IPC is disabled 00:05:11.609 EAL: Heap on socket 0 was shrunk by 130MB 00:05:11.609 EAL: Trying to obtain current memory policy. 00:05:11.609 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.609 EAL: Restoring previous memory policy: 4 00:05:11.609 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.609 EAL: request: mp_malloc_sync 00:05:11.609 EAL: No shared files mode enabled, IPC is disabled 00:05:11.609 EAL: Heap on socket 0 was expanded by 258MB 00:05:11.609 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.609 EAL: request: mp_malloc_sync 00:05:11.609 EAL: No shared files mode enabled, IPC is disabled 00:05:11.609 EAL: Heap on socket 0 was shrunk by 258MB 00:05:11.609 EAL: Trying to obtain current memory policy. 
00:05:11.609 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.867 EAL: Restoring previous memory policy: 4 00:05:11.867 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.867 EAL: request: mp_malloc_sync 00:05:11.867 EAL: No shared files mode enabled, IPC is disabled 00:05:11.867 EAL: Heap on socket 0 was expanded by 514MB 00:05:11.867 EAL: Calling mem event callback 'spdk:(nil)' 00:05:12.125 EAL: request: mp_malloc_sync 00:05:12.125 EAL: No shared files mode enabled, IPC is disabled 00:05:12.125 EAL: Heap on socket 0 was shrunk by 514MB 00:05:12.125 EAL: Trying to obtain current memory policy. 00:05:12.125 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:12.384 EAL: Restoring previous memory policy: 4 00:05:12.384 EAL: Calling mem event callback 'spdk:(nil)' 00:05:12.384 EAL: request: mp_malloc_sync 00:05:12.384 EAL: No shared files mode enabled, IPC is disabled 00:05:12.384 EAL: Heap on socket 0 was expanded by 1026MB 00:05:12.384 EAL: Calling mem event callback 'spdk:(nil)' 00:05:12.641 EAL: request: mp_malloc_sync 00:05:12.641 EAL: No shared files mode enabled, IPC is disabled 00:05:12.641 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:12.641 passed 00:05:12.641 00:05:12.641 Run Summary: Type Total Ran Passed Failed Inactive 00:05:12.641 suites 1 1 n/a 0 0 00:05:12.641 tests 2 2 2 0 0 00:05:12.641 asserts 497 497 497 0 n/a 00:05:12.641 00:05:12.641 Elapsed time = 1.377 seconds 00:05:12.641 EAL: Calling mem event callback 'spdk:(nil)' 00:05:12.641 EAL: request: mp_malloc_sync 00:05:12.641 EAL: No shared files mode enabled, IPC is disabled 00:05:12.641 EAL: Heap on socket 0 was shrunk by 2MB 00:05:12.641 EAL: No shared files mode enabled, IPC is disabled 00:05:12.641 EAL: No shared files mode enabled, IPC is disabled 00:05:12.641 EAL: No shared files mode enabled, IPC is disabled 00:05:12.641 00:05:12.641 real 0m1.503s 00:05:12.641 user 0m0.861s 00:05:12.641 sys 0m0.606s 00:05:12.641 11:17:13 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:12.641 11:17:13 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:12.641 ************************************ 00:05:12.641 END TEST env_vtophys 00:05:12.641 ************************************ 00:05:12.641 11:17:13 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:12.641 11:17:13 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:12.641 11:17:13 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:12.641 11:17:13 env -- common/autotest_common.sh@10 -- # set +x 00:05:12.900 ************************************ 00:05:12.900 START TEST env_pci 00:05:12.900 ************************************ 00:05:12.900 11:17:13 env.env_pci -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:12.900 00:05:12.900 00:05:12.900 CUnit - A unit testing framework for C - Version 2.1-3 00:05:12.900 http://cunit.sourceforge.net/ 00:05:12.900 00:05:12.900 00:05:12.900 Suite: pci 00:05:12.900 Test: pci_hook ...[2024-11-02 11:17:13.059865] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3678971 has claimed it 00:05:12.900 EAL: Cannot find device (10000:00:01.0) 00:05:12.900 EAL: Failed to attach device on primary process 00:05:12.900 passed 00:05:12.900 00:05:12.900 Run Summary: Type Total Ran Passed Failed Inactive 
00:05:12.900 suites 1 1 n/a 0 0 00:05:12.900 tests 1 1 1 0 0 00:05:12.900 asserts 25 25 25 0 n/a 00:05:12.900 00:05:12.900 Elapsed time = 0.021 seconds 00:05:12.900 00:05:12.900 real 0m0.034s 00:05:12.900 user 0m0.009s 00:05:12.900 sys 0m0.025s 00:05:12.900 11:17:13 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:12.900 11:17:13 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:12.900 ************************************ 00:05:12.900 END TEST env_pci 00:05:12.900 ************************************ 00:05:12.900 11:17:13 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:12.900 11:17:13 env -- env/env.sh@15 -- # uname 00:05:12.900 11:17:13 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:12.900 11:17:13 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:12.900 11:17:13 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:12.900 11:17:13 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:05:12.900 11:17:13 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:12.900 11:17:13 env -- common/autotest_common.sh@10 -- # set +x 00:05:12.900 ************************************ 00:05:12.900 START TEST env_dpdk_post_init 00:05:12.900 ************************************ 00:05:12.900 11:17:13 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:12.900 EAL: Detected CPU lcores: 48 00:05:12.900 EAL: Detected NUMA nodes: 2 00:05:12.900 EAL: Detected shared linkage of DPDK 00:05:12.900 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:12.900 EAL: Selected IOVA mode 'VA' 00:05:12.900 EAL: VFIO support initialized 00:05:12.900 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:12.900 EAL: Using IOMMU type 1 (Type 1) 00:05:12.900 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:05:12.900 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:05:12.900 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:05:12.900 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:05:13.159 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:05:13.159 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:05:13.159 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:05:13.159 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:05:13.159 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:05:13.159 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:05:13.159 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:05:13.159 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:05:13.159 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:05:13.159 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:05:13.159 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:05:13.159 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:05:14.095 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 
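The argument handling for the env_dpdk_post_init run above comes straight from env.sh as traced: a single-core mask is always passed, and the fixed base virtual address is appended only on Linux. Condensed, with the binary path shortened:

    # env.sh builds the EAL arguments seen in the env_dpdk_post_init trace:
    argv='-c 0x1 '                              # run on a single core
    if [ "$(uname)" = Linux ]; then
        argv+='--base-virtaddr=0x200000000000'  # fixed virtual base for EAL mappings
    fi
    test/env/env_dpdk_post_init/env_dpdk_post_init $argv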
00:05:17.378 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:05:17.378 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:05:17.378 Starting DPDK initialization... 00:05:17.378 Starting SPDK post initialization... 00:05:17.378 SPDK NVMe probe 00:05:17.378 Attaching to 0000:88:00.0 00:05:17.378 Attached to 0000:88:00.0 00:05:17.378 Cleaning up... 00:05:17.378 00:05:17.378 real 0m4.415s 00:05:17.378 user 0m3.288s 00:05:17.378 sys 0m0.177s 00:05:17.378 11:17:17 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:17.378 11:17:17 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:17.378 ************************************ 00:05:17.378 END TEST env_dpdk_post_init 00:05:17.378 ************************************ 00:05:17.378 11:17:17 env -- env/env.sh@26 -- # uname 00:05:17.378 11:17:17 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:17.378 11:17:17 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:17.378 11:17:17 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:17.378 11:17:17 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:17.378 11:17:17 env -- common/autotest_common.sh@10 -- # set +x 00:05:17.378 ************************************ 00:05:17.378 START TEST env_mem_callbacks 00:05:17.378 ************************************ 00:05:17.378 11:17:17 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:17.378 EAL: Detected CPU lcores: 48 00:05:17.378 EAL: Detected NUMA nodes: 2 00:05:17.378 EAL: Detected shared linkage of DPDK 00:05:17.378 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:17.378 EAL: Selected IOVA mode 'VA' 00:05:17.378 EAL: VFIO support initialized 00:05:17.378 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:17.378 00:05:17.378 00:05:17.378 CUnit - A unit testing framework for C - Version 2.1-3 00:05:17.378 http://cunit.sourceforge.net/ 00:05:17.378 00:05:17.378 00:05:17.378 Suite: memory 00:05:17.378 Test: test ... 
00:05:17.378 register 0x200000200000 2097152 00:05:17.378 malloc 3145728 00:05:17.378 register 0x200000400000 4194304 00:05:17.378 buf 0x200000500000 len 3145728 PASSED 00:05:17.378 malloc 64 00:05:17.378 buf 0x2000004fff40 len 64 PASSED 00:05:17.378 malloc 4194304 00:05:17.378 register 0x200000800000 6291456 00:05:17.378 buf 0x200000a00000 len 4194304 PASSED 00:05:17.378 free 0x200000500000 3145728 00:05:17.378 free 0x2000004fff40 64 00:05:17.378 unregister 0x200000400000 4194304 PASSED 00:05:17.378 free 0x200000a00000 4194304 00:05:17.378 unregister 0x200000800000 6291456 PASSED 00:05:17.378 malloc 8388608 00:05:17.378 register 0x200000400000 10485760 00:05:17.378 buf 0x200000600000 len 8388608 PASSED 00:05:17.378 free 0x200000600000 8388608 00:05:17.378 unregister 0x200000400000 10485760 PASSED 00:05:17.378 passed 00:05:17.378 00:05:17.378 Run Summary: Type Total Ran Passed Failed Inactive 00:05:17.378 suites 1 1 n/a 0 0 00:05:17.378 tests 1 1 1 0 0 00:05:17.378 asserts 15 15 15 0 n/a 00:05:17.378 00:05:17.378 Elapsed time = 0.005 seconds 00:05:17.378 00:05:17.378 real 0m0.048s 00:05:17.378 user 0m0.013s 00:05:17.378 sys 0m0.035s 00:05:17.378 11:17:17 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:17.378 11:17:17 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:17.378 ************************************ 00:05:17.378 END TEST env_mem_callbacks 00:05:17.378 ************************************ 00:05:17.378 00:05:17.378 real 0m6.527s 00:05:17.378 user 0m4.499s 00:05:17.378 sys 0m1.062s 00:05:17.378 11:17:17 env -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:17.378 11:17:17 env -- common/autotest_common.sh@10 -- # set +x 00:05:17.378 ************************************ 00:05:17.378 END TEST env 00:05:17.378 ************************************ 00:05:17.378 11:17:17 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:17.378 11:17:17 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:17.378 11:17:17 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:17.378 11:17:17 -- common/autotest_common.sh@10 -- # set +x 00:05:17.378 ************************************ 00:05:17.378 START TEST rpc 00:05:17.378 ************************************ 00:05:17.378 11:17:17 rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:17.378 * Looking for test storage... 
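Each TEST block in this log, including the rpc suite starting here, is driven by the run_test helper that prints the asterisk banners and the real/user/sys timings. Roughly, as a simplified stand-in rather than the actual autotest_common.sh implementation:

    run_test() {
        # Banner, time the test command, banner again; a non-zero exit fails the suite.
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"
        local rc=$?
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
        return $rc
    }
    # e.g. run_test rpc "$rootdir/test/rpc/rpc.sh"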
00:05:17.378 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:17.378 11:17:17 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:17.378 11:17:17 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:17.378 11:17:17 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:17.637 11:17:17 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:17.637 11:17:17 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:17.637 11:17:17 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:17.637 11:17:17 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:17.637 11:17:17 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:17.637 11:17:17 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:17.637 11:17:17 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:17.637 11:17:17 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:17.637 11:17:17 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:17.637 11:17:17 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:17.637 11:17:17 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:17.637 11:17:17 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:17.637 11:17:17 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:17.637 11:17:17 rpc -- scripts/common.sh@345 -- # : 1 00:05:17.637 11:17:17 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:17.637 11:17:17 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:17.637 11:17:17 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:17.637 11:17:17 rpc -- scripts/common.sh@353 -- # local d=1 00:05:17.637 11:17:17 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:17.637 11:17:17 rpc -- scripts/common.sh@355 -- # echo 1 00:05:17.637 11:17:17 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:17.637 11:17:17 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:17.637 11:17:17 rpc -- scripts/common.sh@353 -- # local d=2 00:05:17.637 11:17:17 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:17.637 11:17:17 rpc -- scripts/common.sh@355 -- # echo 2 00:05:17.637 11:17:17 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:17.637 11:17:17 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:17.637 11:17:17 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:17.637 11:17:17 rpc -- scripts/common.sh@368 -- # return 0 00:05:17.637 11:17:17 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:17.637 11:17:17 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:17.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.637 --rc genhtml_branch_coverage=1 00:05:17.637 --rc genhtml_function_coverage=1 00:05:17.637 --rc genhtml_legend=1 00:05:17.637 --rc geninfo_all_blocks=1 00:05:17.637 --rc geninfo_unexecuted_blocks=1 00:05:17.637 00:05:17.637 ' 00:05:17.637 11:17:17 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:17.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.637 --rc genhtml_branch_coverage=1 00:05:17.637 --rc genhtml_function_coverage=1 00:05:17.637 --rc genhtml_legend=1 00:05:17.637 --rc geninfo_all_blocks=1 00:05:17.637 --rc geninfo_unexecuted_blocks=1 00:05:17.637 00:05:17.637 ' 00:05:17.637 11:17:17 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:17.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.637 --rc genhtml_branch_coverage=1 00:05:17.637 --rc genhtml_function_coverage=1 
00:05:17.637 --rc genhtml_legend=1 00:05:17.637 --rc geninfo_all_blocks=1 00:05:17.637 --rc geninfo_unexecuted_blocks=1 00:05:17.637 00:05:17.637 ' 00:05:17.637 11:17:17 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:17.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.637 --rc genhtml_branch_coverage=1 00:05:17.637 --rc genhtml_function_coverage=1 00:05:17.637 --rc genhtml_legend=1 00:05:17.637 --rc geninfo_all_blocks=1 00:05:17.637 --rc geninfo_unexecuted_blocks=1 00:05:17.637 00:05:17.637 ' 00:05:17.637 11:17:17 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3679752 00:05:17.637 11:17:17 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:17.637 11:17:17 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:17.637 11:17:17 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3679752 00:05:17.637 11:17:17 rpc -- common/autotest_common.sh@833 -- # '[' -z 3679752 ']' 00:05:17.637 11:17:17 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.637 11:17:17 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:17.637 11:17:17 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.637 11:17:17 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:17.637 11:17:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.637 [2024-11-02 11:17:17.920871] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:05:17.637 [2024-11-02 11:17:17.920970] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3679752 ] 00:05:17.637 [2024-11-02 11:17:17.986980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.637 [2024-11-02 11:17:18.035150] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:17.637 [2024-11-02 11:17:18.035204] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3679752' to capture a snapshot of events at runtime. 00:05:17.637 [2024-11-02 11:17:18.035235] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:17.637 [2024-11-02 11:17:18.035246] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:17.637 [2024-11-02 11:17:18.035265] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3679752 for offline analysis/debug. 
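The rpc suite's target startup traced above follows the usual pattern: launch spdk_tgt with the bdev tracepoint group enabled, install a cleanup trap, then block until the RPC socket answers. A condensed sketch (killprocess/waitforlisten are autotest helpers; the polling loop below is a simplified stand-in for waitforlisten):

    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$rootdir/build/bin/spdk_tgt" -e bdev &        # -e bdev: tracepoint group mask "bdev"
    spdk_pid=$!
    trap 'kill $spdk_pid; exit 1' SIGINT SIGTERM EXIT
    # Wait until the target listens on /var/tmp/spdk.sock before issuing RPCs.
    until "$rootdir/scripts/rpc.py" rpc_get_methods &> /dev/null; do
        kill -0 $spdk_pid || exit 1                # bail out if the target died
        sleep 0.5
    done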
00:05:17.637 [2024-11-02 11:17:18.035942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.203 11:17:18 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:18.204 11:17:18 rpc -- common/autotest_common.sh@866 -- # return 0 00:05:18.204 11:17:18 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:18.204 11:17:18 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:18.204 11:17:18 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:18.204 11:17:18 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:18.204 11:17:18 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:18.204 11:17:18 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:18.204 11:17:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.204 ************************************ 00:05:18.204 START TEST rpc_integrity 00:05:18.204 ************************************ 00:05:18.204 11:17:18 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:05:18.204 11:17:18 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:18.204 11:17:18 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.204 11:17:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.204 11:17:18 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.204 11:17:18 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:18.204 11:17:18 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:18.204 11:17:18 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:18.204 11:17:18 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:18.204 11:17:18 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.204 11:17:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.204 11:17:18 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.204 11:17:18 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:18.204 11:17:18 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:18.204 11:17:18 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.204 11:17:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.204 11:17:18 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.204 11:17:18 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:18.204 { 00:05:18.204 "name": "Malloc0", 00:05:18.204 "aliases": [ 00:05:18.204 "c48512f3-99c9-47d3-80bc-50f59c55f451" 00:05:18.204 ], 00:05:18.204 "product_name": "Malloc disk", 00:05:18.204 "block_size": 512, 00:05:18.204 "num_blocks": 16384, 00:05:18.204 "uuid": "c48512f3-99c9-47d3-80bc-50f59c55f451", 00:05:18.204 "assigned_rate_limits": { 00:05:18.204 "rw_ios_per_sec": 0, 00:05:18.204 "rw_mbytes_per_sec": 0, 00:05:18.204 "r_mbytes_per_sec": 0, 00:05:18.204 "w_mbytes_per_sec": 0 00:05:18.204 }, 
00:05:18.204 "claimed": false, 00:05:18.204 "zoned": false, 00:05:18.204 "supported_io_types": { 00:05:18.204 "read": true, 00:05:18.204 "write": true, 00:05:18.204 "unmap": true, 00:05:18.204 "flush": true, 00:05:18.204 "reset": true, 00:05:18.204 "nvme_admin": false, 00:05:18.204 "nvme_io": false, 00:05:18.204 "nvme_io_md": false, 00:05:18.204 "write_zeroes": true, 00:05:18.204 "zcopy": true, 00:05:18.204 "get_zone_info": false, 00:05:18.204 "zone_management": false, 00:05:18.204 "zone_append": false, 00:05:18.204 "compare": false, 00:05:18.204 "compare_and_write": false, 00:05:18.204 "abort": true, 00:05:18.204 "seek_hole": false, 00:05:18.204 "seek_data": false, 00:05:18.204 "copy": true, 00:05:18.204 "nvme_iov_md": false 00:05:18.204 }, 00:05:18.204 "memory_domains": [ 00:05:18.204 { 00:05:18.204 "dma_device_id": "system", 00:05:18.204 "dma_device_type": 1 00:05:18.204 }, 00:05:18.204 { 00:05:18.204 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:18.204 "dma_device_type": 2 00:05:18.204 } 00:05:18.204 ], 00:05:18.204 "driver_specific": {} 00:05:18.204 } 00:05:18.204 ]' 00:05:18.204 11:17:18 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:18.204 11:17:18 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:18.204 11:17:18 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:18.204 11:17:18 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.204 11:17:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.204 [2024-11-02 11:17:18.445730] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:18.204 [2024-11-02 11:17:18.445775] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:18.204 [2024-11-02 11:17:18.445799] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x110fb80 00:05:18.204 [2024-11-02 11:17:18.445815] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:18.204 [2024-11-02 11:17:18.447341] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:18.204 [2024-11-02 11:17:18.447369] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:18.204 Passthru0 00:05:18.204 11:17:18 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.204 11:17:18 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:18.204 11:17:18 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.204 11:17:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.204 11:17:18 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.204 11:17:18 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:18.204 { 00:05:18.204 "name": "Malloc0", 00:05:18.204 "aliases": [ 00:05:18.204 "c48512f3-99c9-47d3-80bc-50f59c55f451" 00:05:18.204 ], 00:05:18.204 "product_name": "Malloc disk", 00:05:18.204 "block_size": 512, 00:05:18.204 "num_blocks": 16384, 00:05:18.204 "uuid": "c48512f3-99c9-47d3-80bc-50f59c55f451", 00:05:18.204 "assigned_rate_limits": { 00:05:18.204 "rw_ios_per_sec": 0, 00:05:18.204 "rw_mbytes_per_sec": 0, 00:05:18.204 "r_mbytes_per_sec": 0, 00:05:18.204 "w_mbytes_per_sec": 0 00:05:18.204 }, 00:05:18.204 "claimed": true, 00:05:18.204 "claim_type": "exclusive_write", 00:05:18.204 "zoned": false, 00:05:18.204 "supported_io_types": { 00:05:18.204 "read": true, 00:05:18.204 "write": true, 00:05:18.204 "unmap": true, 00:05:18.204 "flush": 
true, 00:05:18.204 "reset": true, 00:05:18.204 "nvme_admin": false, 00:05:18.204 "nvme_io": false, 00:05:18.204 "nvme_io_md": false, 00:05:18.204 "write_zeroes": true, 00:05:18.204 "zcopy": true, 00:05:18.204 "get_zone_info": false, 00:05:18.204 "zone_management": false, 00:05:18.204 "zone_append": false, 00:05:18.204 "compare": false, 00:05:18.204 "compare_and_write": false, 00:05:18.204 "abort": true, 00:05:18.204 "seek_hole": false, 00:05:18.204 "seek_data": false, 00:05:18.204 "copy": true, 00:05:18.204 "nvme_iov_md": false 00:05:18.204 }, 00:05:18.204 "memory_domains": [ 00:05:18.204 { 00:05:18.204 "dma_device_id": "system", 00:05:18.204 "dma_device_type": 1 00:05:18.204 }, 00:05:18.204 { 00:05:18.204 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:18.204 "dma_device_type": 2 00:05:18.204 } 00:05:18.204 ], 00:05:18.204 "driver_specific": {} 00:05:18.204 }, 00:05:18.204 { 00:05:18.204 "name": "Passthru0", 00:05:18.204 "aliases": [ 00:05:18.204 "fedb7568-83b5-5345-8ef3-a1b3e9e1c127" 00:05:18.204 ], 00:05:18.204 "product_name": "passthru", 00:05:18.204 "block_size": 512, 00:05:18.204 "num_blocks": 16384, 00:05:18.204 "uuid": "fedb7568-83b5-5345-8ef3-a1b3e9e1c127", 00:05:18.204 "assigned_rate_limits": { 00:05:18.204 "rw_ios_per_sec": 0, 00:05:18.204 "rw_mbytes_per_sec": 0, 00:05:18.204 "r_mbytes_per_sec": 0, 00:05:18.204 "w_mbytes_per_sec": 0 00:05:18.204 }, 00:05:18.204 "claimed": false, 00:05:18.204 "zoned": false, 00:05:18.204 "supported_io_types": { 00:05:18.204 "read": true, 00:05:18.204 "write": true, 00:05:18.204 "unmap": true, 00:05:18.204 "flush": true, 00:05:18.204 "reset": true, 00:05:18.204 "nvme_admin": false, 00:05:18.204 "nvme_io": false, 00:05:18.204 "nvme_io_md": false, 00:05:18.204 "write_zeroes": true, 00:05:18.204 "zcopy": true, 00:05:18.204 "get_zone_info": false, 00:05:18.204 "zone_management": false, 00:05:18.204 "zone_append": false, 00:05:18.204 "compare": false, 00:05:18.204 "compare_and_write": false, 00:05:18.204 "abort": true, 00:05:18.204 "seek_hole": false, 00:05:18.204 "seek_data": false, 00:05:18.204 "copy": true, 00:05:18.204 "nvme_iov_md": false 00:05:18.204 }, 00:05:18.204 "memory_domains": [ 00:05:18.204 { 00:05:18.204 "dma_device_id": "system", 00:05:18.204 "dma_device_type": 1 00:05:18.204 }, 00:05:18.204 { 00:05:18.204 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:18.204 "dma_device_type": 2 00:05:18.204 } 00:05:18.204 ], 00:05:18.204 "driver_specific": { 00:05:18.204 "passthru": { 00:05:18.204 "name": "Passthru0", 00:05:18.204 "base_bdev_name": "Malloc0" 00:05:18.204 } 00:05:18.204 } 00:05:18.204 } 00:05:18.204 ]' 00:05:18.204 11:17:18 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:18.204 11:17:18 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:18.204 11:17:18 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:18.204 11:17:18 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.204 11:17:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.204 11:17:18 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.204 11:17:18 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:18.204 11:17:18 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.204 11:17:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.204 11:17:18 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.205 11:17:18 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:05:18.205 11:17:18 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.205 11:17:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.205 11:17:18 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.205 11:17:18 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:18.205 11:17:18 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:18.205 11:17:18 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:18.205 00:05:18.205 real 0m0.234s 00:05:18.205 user 0m0.159s 00:05:18.205 sys 0m0.015s 00:05:18.205 11:17:18 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:18.205 11:17:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.205 ************************************ 00:05:18.205 END TEST rpc_integrity 00:05:18.205 ************************************ 00:05:18.205 11:17:18 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:18.205 11:17:18 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:18.205 11:17:18 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:18.205 11:17:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.463 ************************************ 00:05:18.463 START TEST rpc_plugins 00:05:18.463 ************************************ 00:05:18.463 11:17:18 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:05:18.463 11:17:18 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:18.463 11:17:18 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.463 11:17:18 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:18.463 11:17:18 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.463 11:17:18 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:18.463 11:17:18 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:18.463 11:17:18 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.463 11:17:18 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:18.463 11:17:18 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.463 11:17:18 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:18.463 { 00:05:18.463 "name": "Malloc1", 00:05:18.463 "aliases": [ 00:05:18.463 "c2ef870e-a87b-474a-8fbb-4196de7e2455" 00:05:18.463 ], 00:05:18.463 "product_name": "Malloc disk", 00:05:18.463 "block_size": 4096, 00:05:18.463 "num_blocks": 256, 00:05:18.463 "uuid": "c2ef870e-a87b-474a-8fbb-4196de7e2455", 00:05:18.463 "assigned_rate_limits": { 00:05:18.463 "rw_ios_per_sec": 0, 00:05:18.463 "rw_mbytes_per_sec": 0, 00:05:18.463 "r_mbytes_per_sec": 0, 00:05:18.463 "w_mbytes_per_sec": 0 00:05:18.463 }, 00:05:18.463 "claimed": false, 00:05:18.463 "zoned": false, 00:05:18.463 "supported_io_types": { 00:05:18.463 "read": true, 00:05:18.463 "write": true, 00:05:18.463 "unmap": true, 00:05:18.463 "flush": true, 00:05:18.463 "reset": true, 00:05:18.463 "nvme_admin": false, 00:05:18.463 "nvme_io": false, 00:05:18.463 "nvme_io_md": false, 00:05:18.463 "write_zeroes": true, 00:05:18.463 "zcopy": true, 00:05:18.463 "get_zone_info": false, 00:05:18.463 "zone_management": false, 00:05:18.463 "zone_append": false, 00:05:18.463 "compare": false, 00:05:18.463 "compare_and_write": false, 00:05:18.463 "abort": true, 00:05:18.463 "seek_hole": false, 00:05:18.463 "seek_data": false, 00:05:18.463 "copy": true, 00:05:18.463 "nvme_iov_md": false 
00:05:18.463 }, 00:05:18.463 "memory_domains": [ 00:05:18.463 { 00:05:18.463 "dma_device_id": "system", 00:05:18.463 "dma_device_type": 1 00:05:18.463 }, 00:05:18.463 { 00:05:18.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:18.463 "dma_device_type": 2 00:05:18.463 } 00:05:18.463 ], 00:05:18.463 "driver_specific": {} 00:05:18.463 } 00:05:18.463 ]' 00:05:18.463 11:17:18 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:18.463 11:17:18 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:18.463 11:17:18 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:18.463 11:17:18 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.463 11:17:18 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:18.463 11:17:18 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.463 11:17:18 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:18.463 11:17:18 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.463 11:17:18 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:18.463 11:17:18 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.463 11:17:18 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:18.463 11:17:18 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:18.463 11:17:18 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:18.463 00:05:18.463 real 0m0.113s 00:05:18.463 user 0m0.067s 00:05:18.463 sys 0m0.014s 00:05:18.463 11:17:18 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:18.463 11:17:18 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:18.463 ************************************ 00:05:18.463 END TEST rpc_plugins 00:05:18.463 ************************************ 00:05:18.463 11:17:18 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:18.463 11:17:18 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:18.463 11:17:18 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:18.463 11:17:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.463 ************************************ 00:05:18.463 START TEST rpc_trace_cmd_test 00:05:18.463 ************************************ 00:05:18.463 11:17:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 -- # rpc_trace_cmd_test 00:05:18.463 11:17:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:18.463 11:17:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:18.463 11:17:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.463 11:17:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:18.463 11:17:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.463 11:17:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:18.463 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3679752", 00:05:18.463 "tpoint_group_mask": "0x8", 00:05:18.463 "iscsi_conn": { 00:05:18.463 "mask": "0x2", 00:05:18.463 "tpoint_mask": "0x0" 00:05:18.463 }, 00:05:18.463 "scsi": { 00:05:18.463 "mask": "0x4", 00:05:18.463 "tpoint_mask": "0x0" 00:05:18.463 }, 00:05:18.463 "bdev": { 00:05:18.463 "mask": "0x8", 00:05:18.463 "tpoint_mask": "0xffffffffffffffff" 00:05:18.463 }, 00:05:18.463 "nvmf_rdma": { 00:05:18.463 "mask": "0x10", 00:05:18.463 "tpoint_mask": "0x0" 00:05:18.463 }, 00:05:18.463 "nvmf_tcp": { 00:05:18.463 "mask": "0x20", 00:05:18.463 
"tpoint_mask": "0x0" 00:05:18.463 }, 00:05:18.463 "ftl": { 00:05:18.463 "mask": "0x40", 00:05:18.463 "tpoint_mask": "0x0" 00:05:18.463 }, 00:05:18.463 "blobfs": { 00:05:18.463 "mask": "0x80", 00:05:18.463 "tpoint_mask": "0x0" 00:05:18.463 }, 00:05:18.463 "dsa": { 00:05:18.463 "mask": "0x200", 00:05:18.463 "tpoint_mask": "0x0" 00:05:18.463 }, 00:05:18.463 "thread": { 00:05:18.463 "mask": "0x400", 00:05:18.463 "tpoint_mask": "0x0" 00:05:18.463 }, 00:05:18.463 "nvme_pcie": { 00:05:18.463 "mask": "0x800", 00:05:18.463 "tpoint_mask": "0x0" 00:05:18.463 }, 00:05:18.463 "iaa": { 00:05:18.463 "mask": "0x1000", 00:05:18.463 "tpoint_mask": "0x0" 00:05:18.463 }, 00:05:18.463 "nvme_tcp": { 00:05:18.463 "mask": "0x2000", 00:05:18.463 "tpoint_mask": "0x0" 00:05:18.463 }, 00:05:18.463 "bdev_nvme": { 00:05:18.463 "mask": "0x4000", 00:05:18.463 "tpoint_mask": "0x0" 00:05:18.463 }, 00:05:18.463 "sock": { 00:05:18.463 "mask": "0x8000", 00:05:18.463 "tpoint_mask": "0x0" 00:05:18.463 }, 00:05:18.463 "blob": { 00:05:18.463 "mask": "0x10000", 00:05:18.463 "tpoint_mask": "0x0" 00:05:18.463 }, 00:05:18.463 "bdev_raid": { 00:05:18.463 "mask": "0x20000", 00:05:18.463 "tpoint_mask": "0x0" 00:05:18.463 }, 00:05:18.463 "scheduler": { 00:05:18.463 "mask": "0x40000", 00:05:18.463 "tpoint_mask": "0x0" 00:05:18.463 } 00:05:18.463 }' 00:05:18.463 11:17:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:18.463 11:17:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:18.464 11:17:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:18.722 11:17:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:18.722 11:17:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:18.722 11:17:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:18.722 11:17:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:18.722 11:17:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:18.722 11:17:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:18.722 11:17:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:18.722 00:05:18.722 real 0m0.199s 00:05:18.722 user 0m0.173s 00:05:18.722 sys 0m0.016s 00:05:18.722 11:17:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:18.722 11:17:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:18.722 ************************************ 00:05:18.722 END TEST rpc_trace_cmd_test 00:05:18.722 ************************************ 00:05:18.722 11:17:18 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:18.722 11:17:18 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:18.722 11:17:18 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:18.722 11:17:18 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:18.722 11:17:19 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:18.722 11:17:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.722 ************************************ 00:05:18.722 START TEST rpc_daemon_integrity 00:05:18.722 ************************************ 00:05:18.722 11:17:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:05:18.722 11:17:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:18.722 11:17:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.722 11:17:19 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.722 11:17:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.722 11:17:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:18.722 11:17:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:18.722 11:17:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:18.722 11:17:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:18.722 11:17:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.722 11:17:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.722 11:17:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.722 11:17:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:18.722 11:17:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:18.722 11:17:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.722 11:17:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.722 11:17:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.722 11:17:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:18.722 { 00:05:18.722 "name": "Malloc2", 00:05:18.722 "aliases": [ 00:05:18.722 "122199b0-2165-4d33-8067-067de2d58aee" 00:05:18.722 ], 00:05:18.722 "product_name": "Malloc disk", 00:05:18.722 "block_size": 512, 00:05:18.722 "num_blocks": 16384, 00:05:18.722 "uuid": "122199b0-2165-4d33-8067-067de2d58aee", 00:05:18.722 "assigned_rate_limits": { 00:05:18.722 "rw_ios_per_sec": 0, 00:05:18.722 "rw_mbytes_per_sec": 0, 00:05:18.722 "r_mbytes_per_sec": 0, 00:05:18.722 "w_mbytes_per_sec": 0 00:05:18.722 }, 00:05:18.722 "claimed": false, 00:05:18.722 "zoned": false, 00:05:18.722 "supported_io_types": { 00:05:18.722 "read": true, 00:05:18.722 "write": true, 00:05:18.722 "unmap": true, 00:05:18.722 "flush": true, 00:05:18.722 "reset": true, 00:05:18.722 "nvme_admin": false, 00:05:18.722 "nvme_io": false, 00:05:18.722 "nvme_io_md": false, 00:05:18.722 "write_zeroes": true, 00:05:18.722 "zcopy": true, 00:05:18.722 "get_zone_info": false, 00:05:18.722 "zone_management": false, 00:05:18.722 "zone_append": false, 00:05:18.722 "compare": false, 00:05:18.722 "compare_and_write": false, 00:05:18.722 "abort": true, 00:05:18.722 "seek_hole": false, 00:05:18.722 "seek_data": false, 00:05:18.722 "copy": true, 00:05:18.722 "nvme_iov_md": false 00:05:18.722 }, 00:05:18.722 "memory_domains": [ 00:05:18.722 { 00:05:18.722 "dma_device_id": "system", 00:05:18.722 "dma_device_type": 1 00:05:18.722 }, 00:05:18.722 { 00:05:18.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:18.722 "dma_device_type": 2 00:05:18.722 } 00:05:18.722 ], 00:05:18.722 "driver_specific": {} 00:05:18.722 } 00:05:18.722 ]' 00:05:18.722 11:17:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:18.981 11:17:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:18.981 11:17:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:18.981 11:17:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.981 11:17:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.981 [2024-11-02 11:17:19.136053] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:18.981 
[2024-11-02 11:17:19.136101] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:18.981 [2024-11-02 11:17:19.136140] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1113890 00:05:18.981 [2024-11-02 11:17:19.136159] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:18.981 [2024-11-02 11:17:19.137536] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:18.981 [2024-11-02 11:17:19.137563] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:18.981 Passthru0 00:05:18.981 11:17:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.981 11:17:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:18.981 11:17:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.981 11:17:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.981 11:17:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.981 11:17:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:18.981 { 00:05:18.981 "name": "Malloc2", 00:05:18.981 "aliases": [ 00:05:18.981 "122199b0-2165-4d33-8067-067de2d58aee" 00:05:18.981 ], 00:05:18.981 "product_name": "Malloc disk", 00:05:18.981 "block_size": 512, 00:05:18.981 "num_blocks": 16384, 00:05:18.981 "uuid": "122199b0-2165-4d33-8067-067de2d58aee", 00:05:18.981 "assigned_rate_limits": { 00:05:18.981 "rw_ios_per_sec": 0, 00:05:18.981 "rw_mbytes_per_sec": 0, 00:05:18.981 "r_mbytes_per_sec": 0, 00:05:18.981 "w_mbytes_per_sec": 0 00:05:18.981 }, 00:05:18.981 "claimed": true, 00:05:18.981 "claim_type": "exclusive_write", 00:05:18.981 "zoned": false, 00:05:18.981 "supported_io_types": { 00:05:18.981 "read": true, 00:05:18.981 "write": true, 00:05:18.981 "unmap": true, 00:05:18.981 "flush": true, 00:05:18.981 "reset": true, 00:05:18.981 "nvme_admin": false, 00:05:18.981 "nvme_io": false, 00:05:18.981 "nvme_io_md": false, 00:05:18.981 "write_zeroes": true, 00:05:18.981 "zcopy": true, 00:05:18.981 "get_zone_info": false, 00:05:18.981 "zone_management": false, 00:05:18.981 "zone_append": false, 00:05:18.981 "compare": false, 00:05:18.981 "compare_and_write": false, 00:05:18.981 "abort": true, 00:05:18.981 "seek_hole": false, 00:05:18.981 "seek_data": false, 00:05:18.981 "copy": true, 00:05:18.981 "nvme_iov_md": false 00:05:18.981 }, 00:05:18.981 "memory_domains": [ 00:05:18.981 { 00:05:18.981 "dma_device_id": "system", 00:05:18.981 "dma_device_type": 1 00:05:18.981 }, 00:05:18.981 { 00:05:18.981 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:18.981 "dma_device_type": 2 00:05:18.981 } 00:05:18.981 ], 00:05:18.981 "driver_specific": {} 00:05:18.981 }, 00:05:18.981 { 00:05:18.981 "name": "Passthru0", 00:05:18.981 "aliases": [ 00:05:18.981 "115c1e0b-4831-5fa4-8db3-a2fd1f8a7aad" 00:05:18.981 ], 00:05:18.981 "product_name": "passthru", 00:05:18.981 "block_size": 512, 00:05:18.981 "num_blocks": 16384, 00:05:18.981 "uuid": "115c1e0b-4831-5fa4-8db3-a2fd1f8a7aad", 00:05:18.981 "assigned_rate_limits": { 00:05:18.981 "rw_ios_per_sec": 0, 00:05:18.981 "rw_mbytes_per_sec": 0, 00:05:18.981 "r_mbytes_per_sec": 0, 00:05:18.981 "w_mbytes_per_sec": 0 00:05:18.981 }, 00:05:18.981 "claimed": false, 00:05:18.981 "zoned": false, 00:05:18.981 "supported_io_types": { 00:05:18.981 "read": true, 00:05:18.981 "write": true, 00:05:18.981 "unmap": true, 00:05:18.981 "flush": true, 00:05:18.981 "reset": true, 
00:05:18.981 "nvme_admin": false, 00:05:18.981 "nvme_io": false, 00:05:18.981 "nvme_io_md": false, 00:05:18.981 "write_zeroes": true, 00:05:18.981 "zcopy": true, 00:05:18.981 "get_zone_info": false, 00:05:18.981 "zone_management": false, 00:05:18.981 "zone_append": false, 00:05:18.981 "compare": false, 00:05:18.981 "compare_and_write": false, 00:05:18.981 "abort": true, 00:05:18.981 "seek_hole": false, 00:05:18.981 "seek_data": false, 00:05:18.981 "copy": true, 00:05:18.981 "nvme_iov_md": false 00:05:18.981 }, 00:05:18.981 "memory_domains": [ 00:05:18.981 { 00:05:18.981 "dma_device_id": "system", 00:05:18.981 "dma_device_type": 1 00:05:18.981 }, 00:05:18.981 { 00:05:18.981 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:18.981 "dma_device_type": 2 00:05:18.981 } 00:05:18.981 ], 00:05:18.981 "driver_specific": { 00:05:18.981 "passthru": { 00:05:18.981 "name": "Passthru0", 00:05:18.981 "base_bdev_name": "Malloc2" 00:05:18.981 } 00:05:18.981 } 00:05:18.981 } 00:05:18.981 ]' 00:05:18.981 11:17:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:18.981 11:17:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:18.981 11:17:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:18.981 11:17:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.981 11:17:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.981 11:17:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.981 11:17:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:18.981 11:17:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.981 11:17:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.981 11:17:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.981 11:17:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:18.981 11:17:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.981 11:17:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.981 11:17:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.981 11:17:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:18.981 11:17:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:18.981 11:17:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:18.981 00:05:18.981 real 0m0.230s 00:05:18.981 user 0m0.153s 00:05:18.981 sys 0m0.022s 00:05:18.981 11:17:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:18.981 11:17:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.981 ************************************ 00:05:18.981 END TEST rpc_daemon_integrity 00:05:18.981 ************************************ 00:05:18.981 11:17:19 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:18.981 11:17:19 rpc -- rpc/rpc.sh@84 -- # killprocess 3679752 00:05:18.981 11:17:19 rpc -- common/autotest_common.sh@952 -- # '[' -z 3679752 ']' 00:05:18.981 11:17:19 rpc -- common/autotest_common.sh@956 -- # kill -0 3679752 00:05:18.981 11:17:19 rpc -- common/autotest_common.sh@957 -- # uname 00:05:18.981 11:17:19 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:18.981 11:17:19 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3679752 
00:05:18.981 11:17:19 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:18.981 11:17:19 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:18.982 11:17:19 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3679752' 00:05:18.982 killing process with pid 3679752 00:05:18.982 11:17:19 rpc -- common/autotest_common.sh@971 -- # kill 3679752 00:05:18.982 11:17:19 rpc -- common/autotest_common.sh@976 -- # wait 3679752 00:05:19.548 00:05:19.548 real 0m1.988s 00:05:19.548 user 0m2.468s 00:05:19.548 sys 0m0.634s 00:05:19.548 11:17:19 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:19.548 11:17:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.548 ************************************ 00:05:19.548 END TEST rpc 00:05:19.548 ************************************ 00:05:19.548 11:17:19 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:19.548 11:17:19 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:19.548 11:17:19 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:19.548 11:17:19 -- common/autotest_common.sh@10 -- # set +x 00:05:19.548 ************************************ 00:05:19.548 START TEST skip_rpc 00:05:19.548 ************************************ 00:05:19.548 11:17:19 skip_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:19.548 * Looking for test storage... 00:05:19.548 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:19.548 11:17:19 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:19.548 11:17:19 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:19.548 11:17:19 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:19.548 11:17:19 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:19.548 11:17:19 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:19.548 11:17:19 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:19.548 11:17:19 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:19.548 11:17:19 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:19.548 11:17:19 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:19.548 11:17:19 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:19.548 11:17:19 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:19.548 11:17:19 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:19.548 11:17:19 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:19.548 11:17:19 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:19.548 11:17:19 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:19.548 11:17:19 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:19.548 11:17:19 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:19.548 11:17:19 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:19.548 11:17:19 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:19.548 11:17:19 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:19.548 11:17:19 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:19.548 11:17:19 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:19.548 11:17:19 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:19.548 11:17:19 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:19.548 11:17:19 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:19.548 11:17:19 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:19.548 11:17:19 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:19.548 11:17:19 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:19.548 11:17:19 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:19.548 11:17:19 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:19.548 11:17:19 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:19.548 11:17:19 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:19.548 11:17:19 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:19.548 11:17:19 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:19.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.548 --rc genhtml_branch_coverage=1 00:05:19.548 --rc genhtml_function_coverage=1 00:05:19.548 --rc genhtml_legend=1 00:05:19.548 --rc geninfo_all_blocks=1 00:05:19.548 --rc geninfo_unexecuted_blocks=1 00:05:19.548 00:05:19.548 ' 00:05:19.548 11:17:19 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:19.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.548 --rc genhtml_branch_coverage=1 00:05:19.548 --rc genhtml_function_coverage=1 00:05:19.548 --rc genhtml_legend=1 00:05:19.548 --rc geninfo_all_blocks=1 00:05:19.548 --rc geninfo_unexecuted_blocks=1 00:05:19.548 00:05:19.548 ' 00:05:19.548 11:17:19 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:19.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.548 --rc genhtml_branch_coverage=1 00:05:19.548 --rc genhtml_function_coverage=1 00:05:19.548 --rc genhtml_legend=1 00:05:19.548 --rc geninfo_all_blocks=1 00:05:19.548 --rc geninfo_unexecuted_blocks=1 00:05:19.548 00:05:19.548 ' 00:05:19.548 11:17:19 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:19.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.548 --rc genhtml_branch_coverage=1 00:05:19.548 --rc genhtml_function_coverage=1 00:05:19.548 --rc genhtml_legend=1 00:05:19.548 --rc geninfo_all_blocks=1 00:05:19.548 --rc geninfo_unexecuted_blocks=1 00:05:19.548 00:05:19.548 ' 00:05:19.548 11:17:19 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:19.548 11:17:19 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:19.548 11:17:19 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:19.548 11:17:19 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:19.548 11:17:19 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:19.548 11:17:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.548 ************************************ 00:05:19.548 START TEST skip_rpc 00:05:19.548 ************************************ 00:05:19.548 11:17:19 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:05:19.548 
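The skip_rpc case that follows starts the target with --no-rpc-server and then asserts that an RPC call fails, since no listener ever comes up on /var/tmp/spdk.sock. A rough equivalent of that negative check, under the same $SPDK_DIR placeholder assumption as above:

    # Sketch of the negative check exercised below (paths are assumptions).
    "$SPDK_DIR/build/bin/spdk_tgt" --no-rpc-server -m 0x1 &
    spdk_pid=$!
    sleep 5                                        # the test below also waits before probing
    if "$SPDK_DIR/scripts/rpc.py" -t 2 spdk_get_version >/dev/null 2>&1; then
        echo "unexpected: RPC server answered despite --no-rpc-server" >&2
        exit 1
    fi
    kill "$spdk_pid"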
11:17:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3680098 00:05:19.548 11:17:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:19.548 11:17:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:19.548 11:17:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:19.807 [2024-11-02 11:17:19.973967] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:05:19.807 [2024-11-02 11:17:19.974047] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3680098 ] 00:05:19.807 [2024-11-02 11:17:20.069779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.807 [2024-11-02 11:17:20.121647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.067 11:17:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:25.067 11:17:24 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:25.067 11:17:24 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:25.067 11:17:24 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:25.067 11:17:24 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:25.067 11:17:24 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:25.067 11:17:24 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:25.067 11:17:24 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:25.067 11:17:24 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:25.067 11:17:24 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.067 11:17:24 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:25.067 11:17:24 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:25.067 11:17:24 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:25.067 11:17:24 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:25.067 11:17:24 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:25.067 11:17:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:25.067 11:17:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3680098 00:05:25.067 11:17:24 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 3680098 ']' 00:05:25.067 11:17:24 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 3680098 00:05:25.067 11:17:24 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:05:25.067 11:17:24 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:25.067 11:17:24 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3680098 00:05:25.067 11:17:24 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:25.067 11:17:24 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:25.067 11:17:24 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3680098' 00:05:25.067 killing process with pid 3680098 00:05:25.067 11:17:24 
skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 3680098 00:05:25.067 11:17:24 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 3680098 00:05:25.067 00:05:25.067 real 0m5.436s 00:05:25.067 user 0m5.084s 00:05:25.067 sys 0m0.360s 00:05:25.067 11:17:25 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:25.067 11:17:25 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.067 ************************************ 00:05:25.067 END TEST skip_rpc 00:05:25.067 ************************************ 00:05:25.067 11:17:25 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:25.067 11:17:25 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:25.067 11:17:25 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:25.067 11:17:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.067 ************************************ 00:05:25.067 START TEST skip_rpc_with_json 00:05:25.067 ************************************ 00:05:25.067 11:17:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:05:25.067 11:17:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:25.067 11:17:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3680778 00:05:25.067 11:17:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:25.067 11:17:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:25.067 11:17:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3680778 00:05:25.067 11:17:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 3680778 ']' 00:05:25.067 11:17:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.067 11:17:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:25.067 11:17:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:25.067 11:17:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:25.067 11:17:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:25.067 [2024-11-02 11:17:25.454120] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 
00:05:25.067 [2024-11-02 11:17:25.454216] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3680778 ] 00:05:25.325 [2024-11-02 11:17:25.526992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.325 [2024-11-02 11:17:25.574403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.583 11:17:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:25.583 11:17:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:05:25.583 11:17:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:25.583 11:17:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:25.583 11:17:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:25.583 [2024-11-02 11:17:25.854146] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:25.583 request: 00:05:25.583 { 00:05:25.583 "trtype": "tcp", 00:05:25.583 "method": "nvmf_get_transports", 00:05:25.583 "req_id": 1 00:05:25.583 } 00:05:25.583 Got JSON-RPC error response 00:05:25.583 response: 00:05:25.583 { 00:05:25.583 "code": -19, 00:05:25.583 "message": "No such device" 00:05:25.583 } 00:05:25.583 11:17:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:25.583 11:17:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:25.583 11:17:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:25.583 11:17:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:25.583 [2024-11-02 11:17:25.862276] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:25.583 11:17:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:25.583 11:17:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:25.583 11:17:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:25.583 11:17:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:25.841 11:17:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:25.841 11:17:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:25.841 { 00:05:25.841 "subsystems": [ 00:05:25.841 { 00:05:25.841 "subsystem": "fsdev", 00:05:25.841 "config": [ 00:05:25.841 { 00:05:25.841 "method": "fsdev_set_opts", 00:05:25.841 "params": { 00:05:25.841 "fsdev_io_pool_size": 65535, 00:05:25.841 "fsdev_io_cache_size": 256 00:05:25.841 } 00:05:25.841 } 00:05:25.841 ] 00:05:25.841 }, 00:05:25.841 { 00:05:25.841 "subsystem": "vfio_user_target", 00:05:25.841 "config": null 00:05:25.841 }, 00:05:25.841 { 00:05:25.841 "subsystem": "keyring", 00:05:25.841 "config": [] 00:05:25.841 }, 00:05:25.841 { 00:05:25.841 "subsystem": "iobuf", 00:05:25.841 "config": [ 00:05:25.841 { 00:05:25.841 "method": "iobuf_set_options", 00:05:25.841 "params": { 00:05:25.841 "small_pool_count": 8192, 00:05:25.841 "large_pool_count": 1024, 00:05:25.841 "small_bufsize": 8192, 00:05:25.841 "large_bufsize": 135168, 00:05:25.841 "enable_numa": false 00:05:25.841 } 00:05:25.841 } 
00:05:25.841 ] 00:05:25.841 }, 00:05:25.841 { 00:05:25.841 "subsystem": "sock", 00:05:25.841 "config": [ 00:05:25.841 { 00:05:25.841 "method": "sock_set_default_impl", 00:05:25.841 "params": { 00:05:25.841 "impl_name": "posix" 00:05:25.841 } 00:05:25.841 }, 00:05:25.841 { 00:05:25.841 "method": "sock_impl_set_options", 00:05:25.841 "params": { 00:05:25.841 "impl_name": "ssl", 00:05:25.841 "recv_buf_size": 4096, 00:05:25.841 "send_buf_size": 4096, 00:05:25.841 "enable_recv_pipe": true, 00:05:25.841 "enable_quickack": false, 00:05:25.841 "enable_placement_id": 0, 00:05:25.841 "enable_zerocopy_send_server": true, 00:05:25.841 "enable_zerocopy_send_client": false, 00:05:25.841 "zerocopy_threshold": 0, 00:05:25.841 "tls_version": 0, 00:05:25.841 "enable_ktls": false 00:05:25.841 } 00:05:25.841 }, 00:05:25.841 { 00:05:25.841 "method": "sock_impl_set_options", 00:05:25.841 "params": { 00:05:25.841 "impl_name": "posix", 00:05:25.841 "recv_buf_size": 2097152, 00:05:25.841 "send_buf_size": 2097152, 00:05:25.841 "enable_recv_pipe": true, 00:05:25.841 "enable_quickack": false, 00:05:25.841 "enable_placement_id": 0, 00:05:25.841 "enable_zerocopy_send_server": true, 00:05:25.841 "enable_zerocopy_send_client": false, 00:05:25.841 "zerocopy_threshold": 0, 00:05:25.841 "tls_version": 0, 00:05:25.841 "enable_ktls": false 00:05:25.841 } 00:05:25.841 } 00:05:25.841 ] 00:05:25.841 }, 00:05:25.841 { 00:05:25.841 "subsystem": "vmd", 00:05:25.841 "config": [] 00:05:25.841 }, 00:05:25.841 { 00:05:25.841 "subsystem": "accel", 00:05:25.841 "config": [ 00:05:25.841 { 00:05:25.841 "method": "accel_set_options", 00:05:25.841 "params": { 00:05:25.841 "small_cache_size": 128, 00:05:25.841 "large_cache_size": 16, 00:05:25.841 "task_count": 2048, 00:05:25.841 "sequence_count": 2048, 00:05:25.841 "buf_count": 2048 00:05:25.841 } 00:05:25.841 } 00:05:25.841 ] 00:05:25.841 }, 00:05:25.841 { 00:05:25.841 "subsystem": "bdev", 00:05:25.841 "config": [ 00:05:25.841 { 00:05:25.841 "method": "bdev_set_options", 00:05:25.841 "params": { 00:05:25.841 "bdev_io_pool_size": 65535, 00:05:25.841 "bdev_io_cache_size": 256, 00:05:25.841 "bdev_auto_examine": true, 00:05:25.841 "iobuf_small_cache_size": 128, 00:05:25.841 "iobuf_large_cache_size": 16 00:05:25.841 } 00:05:25.841 }, 00:05:25.841 { 00:05:25.841 "method": "bdev_raid_set_options", 00:05:25.841 "params": { 00:05:25.841 "process_window_size_kb": 1024, 00:05:25.841 "process_max_bandwidth_mb_sec": 0 00:05:25.841 } 00:05:25.841 }, 00:05:25.841 { 00:05:25.841 "method": "bdev_iscsi_set_options", 00:05:25.841 "params": { 00:05:25.841 "timeout_sec": 30 00:05:25.841 } 00:05:25.841 }, 00:05:25.841 { 00:05:25.841 "method": "bdev_nvme_set_options", 00:05:25.841 "params": { 00:05:25.841 "action_on_timeout": "none", 00:05:25.841 "timeout_us": 0, 00:05:25.841 "timeout_admin_us": 0, 00:05:25.841 "keep_alive_timeout_ms": 10000, 00:05:25.841 "arbitration_burst": 0, 00:05:25.841 "low_priority_weight": 0, 00:05:25.841 "medium_priority_weight": 0, 00:05:25.841 "high_priority_weight": 0, 00:05:25.841 "nvme_adminq_poll_period_us": 10000, 00:05:25.841 "nvme_ioq_poll_period_us": 0, 00:05:25.841 "io_queue_requests": 0, 00:05:25.841 "delay_cmd_submit": true, 00:05:25.841 "transport_retry_count": 4, 00:05:25.841 "bdev_retry_count": 3, 00:05:25.841 "transport_ack_timeout": 0, 00:05:25.841 "ctrlr_loss_timeout_sec": 0, 00:05:25.841 "reconnect_delay_sec": 0, 00:05:25.841 "fast_io_fail_timeout_sec": 0, 00:05:25.841 "disable_auto_failback": false, 00:05:25.841 "generate_uuids": false, 00:05:25.841 "transport_tos": 
0, 00:05:25.841 "nvme_error_stat": false, 00:05:25.841 "rdma_srq_size": 0, 00:05:25.841 "io_path_stat": false, 00:05:25.841 "allow_accel_sequence": false, 00:05:25.841 "rdma_max_cq_size": 0, 00:05:25.841 "rdma_cm_event_timeout_ms": 0, 00:05:25.841 "dhchap_digests": [ 00:05:25.841 "sha256", 00:05:25.841 "sha384", 00:05:25.841 "sha512" 00:05:25.841 ], 00:05:25.841 "dhchap_dhgroups": [ 00:05:25.841 "null", 00:05:25.841 "ffdhe2048", 00:05:25.841 "ffdhe3072", 00:05:25.841 "ffdhe4096", 00:05:25.841 "ffdhe6144", 00:05:25.841 "ffdhe8192" 00:05:25.841 ] 00:05:25.841 } 00:05:25.841 }, 00:05:25.841 { 00:05:25.841 "method": "bdev_nvme_set_hotplug", 00:05:25.841 "params": { 00:05:25.841 "period_us": 100000, 00:05:25.841 "enable": false 00:05:25.841 } 00:05:25.841 }, 00:05:25.841 { 00:05:25.841 "method": "bdev_wait_for_examine" 00:05:25.841 } 00:05:25.841 ] 00:05:25.841 }, 00:05:25.841 { 00:05:25.841 "subsystem": "scsi", 00:05:25.841 "config": null 00:05:25.841 }, 00:05:25.841 { 00:05:25.841 "subsystem": "scheduler", 00:05:25.841 "config": [ 00:05:25.841 { 00:05:25.841 "method": "framework_set_scheduler", 00:05:25.841 "params": { 00:05:25.841 "name": "static" 00:05:25.841 } 00:05:25.841 } 00:05:25.841 ] 00:05:25.841 }, 00:05:25.841 { 00:05:25.841 "subsystem": "vhost_scsi", 00:05:25.841 "config": [] 00:05:25.841 }, 00:05:25.841 { 00:05:25.841 "subsystem": "vhost_blk", 00:05:25.841 "config": [] 00:05:25.841 }, 00:05:25.841 { 00:05:25.841 "subsystem": "ublk", 00:05:25.841 "config": [] 00:05:25.841 }, 00:05:25.841 { 00:05:25.841 "subsystem": "nbd", 00:05:25.841 "config": [] 00:05:25.841 }, 00:05:25.841 { 00:05:25.841 "subsystem": "nvmf", 00:05:25.841 "config": [ 00:05:25.841 { 00:05:25.841 "method": "nvmf_set_config", 00:05:25.841 "params": { 00:05:25.841 "discovery_filter": "match_any", 00:05:25.841 "admin_cmd_passthru": { 00:05:25.841 "identify_ctrlr": false 00:05:25.841 }, 00:05:25.841 "dhchap_digests": [ 00:05:25.841 "sha256", 00:05:25.841 "sha384", 00:05:25.841 "sha512" 00:05:25.841 ], 00:05:25.841 "dhchap_dhgroups": [ 00:05:25.841 "null", 00:05:25.841 "ffdhe2048", 00:05:25.841 "ffdhe3072", 00:05:25.841 "ffdhe4096", 00:05:25.841 "ffdhe6144", 00:05:25.841 "ffdhe8192" 00:05:25.841 ] 00:05:25.841 } 00:05:25.841 }, 00:05:25.841 { 00:05:25.841 "method": "nvmf_set_max_subsystems", 00:05:25.841 "params": { 00:05:25.841 "max_subsystems": 1024 00:05:25.841 } 00:05:25.841 }, 00:05:25.841 { 00:05:25.841 "method": "nvmf_set_crdt", 00:05:25.841 "params": { 00:05:25.841 "crdt1": 0, 00:05:25.841 "crdt2": 0, 00:05:25.841 "crdt3": 0 00:05:25.841 } 00:05:25.841 }, 00:05:25.841 { 00:05:25.841 "method": "nvmf_create_transport", 00:05:25.841 "params": { 00:05:25.841 "trtype": "TCP", 00:05:25.841 "max_queue_depth": 128, 00:05:25.841 "max_io_qpairs_per_ctrlr": 127, 00:05:25.842 "in_capsule_data_size": 4096, 00:05:25.842 "max_io_size": 131072, 00:05:25.842 "io_unit_size": 131072, 00:05:25.842 "max_aq_depth": 128, 00:05:25.842 "num_shared_buffers": 511, 00:05:25.842 "buf_cache_size": 4294967295, 00:05:25.842 "dif_insert_or_strip": false, 00:05:25.842 "zcopy": false, 00:05:25.842 "c2h_success": true, 00:05:25.842 "sock_priority": 0, 00:05:25.842 "abort_timeout_sec": 1, 00:05:25.842 "ack_timeout": 0, 00:05:25.842 "data_wr_pool_size": 0 00:05:25.842 } 00:05:25.842 } 00:05:25.842 ] 00:05:25.842 }, 00:05:25.842 { 00:05:25.842 "subsystem": "iscsi", 00:05:25.842 "config": [ 00:05:25.842 { 00:05:25.842 "method": "iscsi_set_options", 00:05:25.842 "params": { 00:05:25.842 "node_base": "iqn.2016-06.io.spdk", 00:05:25.842 "max_sessions": 
128, 00:05:25.842 "max_connections_per_session": 2, 00:05:25.842 "max_queue_depth": 64, 00:05:25.842 "default_time2wait": 2, 00:05:25.842 "default_time2retain": 20, 00:05:25.842 "first_burst_length": 8192, 00:05:25.842 "immediate_data": true, 00:05:25.842 "allow_duplicated_isid": false, 00:05:25.842 "error_recovery_level": 0, 00:05:25.842 "nop_timeout": 60, 00:05:25.842 "nop_in_interval": 30, 00:05:25.842 "disable_chap": false, 00:05:25.842 "require_chap": false, 00:05:25.842 "mutual_chap": false, 00:05:25.842 "chap_group": 0, 00:05:25.842 "max_large_datain_per_connection": 64, 00:05:25.842 "max_r2t_per_connection": 4, 00:05:25.842 "pdu_pool_size": 36864, 00:05:25.842 "immediate_data_pool_size": 16384, 00:05:25.842 "data_out_pool_size": 2048 00:05:25.842 } 00:05:25.842 } 00:05:25.842 ] 00:05:25.842 } 00:05:25.842 ] 00:05:25.842 } 00:05:25.842 11:17:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:25.842 11:17:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3680778 00:05:25.842 11:17:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 3680778 ']' 00:05:25.842 11:17:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 3680778 00:05:25.842 11:17:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:05:25.842 11:17:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:25.842 11:17:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3680778 00:05:25.842 11:17:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:25.842 11:17:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:25.842 11:17:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3680778' 00:05:25.842 killing process with pid 3680778 00:05:25.842 11:17:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 3680778 00:05:25.842 11:17:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 3680778 00:05:26.099 11:17:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3680918 00:05:26.099 11:17:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:26.099 11:17:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:31.357 11:17:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3680918 00:05:31.357 11:17:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 3680918 ']' 00:05:31.357 11:17:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 3680918 00:05:31.357 11:17:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:05:31.357 11:17:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:31.357 11:17:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3680918 00:05:31.357 11:17:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:31.357 11:17:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:31.357 11:17:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- 
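The wrap-up that follows kills the target that was rebooted from the saved config.json and greps its log for 'TCP Transport Init' to confirm the JSON configuration (including the nvmf_create_transport call above) was replayed at startup. In outline the round trip looks like the sketch below; $SPDK_DIR, $first_pid, and the config/log file names are placeholders, not the harness's own variables:

    # Sketch of the save-and-reload round trip (paths/variables are assumptions).
    "$SPDK_DIR/scripts/rpc.py" nvmf_create_transport -t tcp    # mirrors the RPC issued above
    "$SPDK_DIR/scripts/rpc.py" save_config > config.json       # dump the live config as JSON
    kill "$first_pid"                                           # stop the configured target
    "$SPDK_DIR/build/bin/spdk_tgt" --no-rpc-server -m 0x1 --json config.json > log.txt 2>&1 &
    sleep 5
    grep -q 'TCP Transport Init' log.txt                        # proves the config was applied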
# echo 'killing process with pid 3680918' 00:05:31.357 killing process with pid 3680918 00:05:31.357 11:17:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 3680918 00:05:31.357 11:17:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 3680918 00:05:31.615 11:17:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:31.615 11:17:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:31.615 00:05:31.615 real 0m6.499s 00:05:31.615 user 0m6.100s 00:05:31.615 sys 0m0.729s 00:05:31.615 11:17:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:31.615 11:17:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:31.615 ************************************ 00:05:31.615 END TEST skip_rpc_with_json 00:05:31.615 ************************************ 00:05:31.615 11:17:31 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:31.615 11:17:31 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:31.615 11:17:31 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:31.615 11:17:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.615 ************************************ 00:05:31.615 START TEST skip_rpc_with_delay 00:05:31.615 ************************************ 00:05:31.615 11:17:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:05:31.615 11:17:31 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:31.615 11:17:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:31.615 11:17:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:31.615 11:17:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:31.615 11:17:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:31.615 11:17:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:31.615 11:17:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:31.615 11:17:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:31.615 11:17:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:31.615 11:17:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:31.615 11:17:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:31.615 11:17:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:31.615 
[2024-11-02 11:17:32.001119] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:05:31.615 11:17:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:05:31.615 11:17:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:31.615 11:17:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:31.615 11:17:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:31.615 00:05:31.615 real 0m0.074s 00:05:31.615 user 0m0.053s 00:05:31.615 sys 0m0.021s 00:05:31.615 11:17:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:31.615 11:17:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:31.615 ************************************ 00:05:31.615 END TEST skip_rpc_with_delay 00:05:31.615 ************************************ 00:05:31.874 11:17:32 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:31.874 11:17:32 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:31.874 11:17:32 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:31.874 11:17:32 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:31.874 11:17:32 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:31.874 11:17:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.874 ************************************ 00:05:31.874 START TEST exit_on_failed_rpc_init 00:05:31.874 ************************************ 00:05:31.874 11:17:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:05:31.874 11:17:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3681630 00:05:31.874 11:17:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:31.874 11:17:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3681630 00:05:31.874 11:17:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 3681630 ']' 00:05:31.874 11:17:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.874 11:17:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:31.874 11:17:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.874 11:17:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:31.874 11:17:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:31.874 [2024-11-02 11:17:32.121788] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 
00:05:31.874 [2024-11-02 11:17:32.121877] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3681630 ] 00:05:31.874 [2024-11-02 11:17:32.187127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.874 [2024-11-02 11:17:32.235956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.131 11:17:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:32.131 11:17:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:05:32.131 11:17:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:32.131 11:17:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:32.131 11:17:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:05:32.131 11:17:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:32.131 11:17:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:32.131 11:17:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:32.131 11:17:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:32.131 11:17:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:32.131 11:17:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:32.131 11:17:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:32.131 11:17:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:32.131 11:17:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:32.131 11:17:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:32.389 [2024-11-02 11:17:32.567853] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:05:32.389 [2024-11-02 11:17:32.567946] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3681761 ] 00:05:32.389 [2024-11-02 11:17:32.643018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.389 [2024-11-02 11:17:32.694031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.389 [2024-11-02 11:17:32.694163] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:32.389 [2024-11-02 11:17:32.694188] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:32.389 [2024-11-02 11:17:32.694202] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:32.389 11:17:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:32.389 11:17:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:32.389 11:17:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:32.389 11:17:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:32.389 11:17:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:32.389 11:17:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:32.389 11:17:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:32.389 11:17:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3681630 00:05:32.389 11:17:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 3681630 ']' 00:05:32.389 11:17:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 3681630 00:05:32.389 11:17:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:05:32.389 11:17:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:32.389 11:17:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3681630 00:05:32.646 11:17:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:32.646 11:17:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:32.646 11:17:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3681630' 00:05:32.646 killing process with pid 3681630 00:05:32.646 11:17:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 3681630 00:05:32.646 11:17:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 3681630 00:05:32.904 00:05:32.904 real 0m1.111s 00:05:32.904 user 0m1.202s 00:05:32.904 sys 0m0.450s 00:05:32.904 11:17:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:32.904 11:17:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:32.904 ************************************ 00:05:32.904 END TEST exit_on_failed_rpc_init 00:05:32.904 ************************************ 00:05:32.904 11:17:33 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:32.904 00:05:32.904 real 0m13.452s 00:05:32.904 user 0m12.599s 00:05:32.904 sys 0m1.749s 00:05:32.904 11:17:33 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:32.904 11:17:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.904 ************************************ 00:05:32.904 END TEST skip_rpc 00:05:32.904 ************************************ 00:05:32.904 11:17:33 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:32.904 11:17:33 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:32.904 11:17:33 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:32.904 11:17:33 -- 
common/autotest_common.sh@10 -- # set +x 00:05:32.904 ************************************ 00:05:32.904 START TEST rpc_client 00:05:32.904 ************************************ 00:05:32.904 11:17:33 rpc_client -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:32.904 * Looking for test storage... 00:05:32.904 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:32.904 11:17:33 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:33.162 11:17:33 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:05:33.162 11:17:33 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:33.162 11:17:33 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:33.162 11:17:33 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:33.162 11:17:33 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:33.162 11:17:33 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:33.162 11:17:33 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:33.162 11:17:33 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:33.162 11:17:33 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:33.162 11:17:33 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:33.162 11:17:33 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:33.162 11:17:33 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:33.162 11:17:33 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:33.162 11:17:33 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:33.162 11:17:33 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:33.162 11:17:33 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:33.162 11:17:33 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:33.162 11:17:33 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:33.162 11:17:33 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:33.162 11:17:33 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:33.162 11:17:33 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:33.162 11:17:33 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:33.162 11:17:33 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:33.162 11:17:33 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:33.162 11:17:33 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:33.162 11:17:33 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:33.162 11:17:33 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:33.162 11:17:33 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:33.162 11:17:33 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:33.162 11:17:33 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:33.162 11:17:33 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:33.162 11:17:33 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:33.162 11:17:33 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:33.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.162 --rc genhtml_branch_coverage=1 00:05:33.162 --rc genhtml_function_coverage=1 00:05:33.162 --rc genhtml_legend=1 00:05:33.162 --rc geninfo_all_blocks=1 00:05:33.162 --rc geninfo_unexecuted_blocks=1 00:05:33.162 00:05:33.162 ' 00:05:33.162 11:17:33 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:33.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.162 --rc genhtml_branch_coverage=1 00:05:33.162 --rc genhtml_function_coverage=1 00:05:33.162 --rc genhtml_legend=1 00:05:33.162 --rc geninfo_all_blocks=1 00:05:33.162 --rc geninfo_unexecuted_blocks=1 00:05:33.162 00:05:33.162 ' 00:05:33.162 11:17:33 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:33.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.162 --rc genhtml_branch_coverage=1 00:05:33.162 --rc genhtml_function_coverage=1 00:05:33.162 --rc genhtml_legend=1 00:05:33.162 --rc geninfo_all_blocks=1 00:05:33.162 --rc geninfo_unexecuted_blocks=1 00:05:33.162 00:05:33.162 ' 00:05:33.162 11:17:33 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:33.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.162 --rc genhtml_branch_coverage=1 00:05:33.162 --rc genhtml_function_coverage=1 00:05:33.162 --rc genhtml_legend=1 00:05:33.162 --rc geninfo_all_blocks=1 00:05:33.162 --rc geninfo_unexecuted_blocks=1 00:05:33.162 00:05:33.162 ' 00:05:33.162 11:17:33 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:33.162 OK 00:05:33.162 11:17:33 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:33.162 00:05:33.162 real 0m0.158s 00:05:33.162 user 0m0.104s 00:05:33.162 sys 0m0.063s 00:05:33.163 11:17:33 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:33.163 11:17:33 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:33.163 ************************************ 00:05:33.163 END TEST rpc_client 00:05:33.163 ************************************ 00:05:33.163 11:17:33 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
00:05:33.163 11:17:33 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:33.163 11:17:33 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:33.163 11:17:33 -- common/autotest_common.sh@10 -- # set +x 00:05:33.163 ************************************ 00:05:33.163 START TEST json_config 00:05:33.163 ************************************ 00:05:33.163 11:17:33 json_config -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:33.163 11:17:33 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:33.163 11:17:33 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:05:33.163 11:17:33 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:33.421 11:17:33 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:33.421 11:17:33 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:33.421 11:17:33 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:33.421 11:17:33 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:33.421 11:17:33 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:33.421 11:17:33 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:33.421 11:17:33 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:33.421 11:17:33 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:33.421 11:17:33 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:33.421 11:17:33 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:33.421 11:17:33 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:33.421 11:17:33 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:33.421 11:17:33 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:33.421 11:17:33 json_config -- scripts/common.sh@345 -- # : 1 00:05:33.421 11:17:33 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:33.421 11:17:33 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:33.421 11:17:33 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:33.421 11:17:33 json_config -- scripts/common.sh@353 -- # local d=1 00:05:33.421 11:17:33 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:33.421 11:17:33 json_config -- scripts/common.sh@355 -- # echo 1 00:05:33.421 11:17:33 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:33.421 11:17:33 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:33.421 11:17:33 json_config -- scripts/common.sh@353 -- # local d=2 00:05:33.421 11:17:33 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:33.421 11:17:33 json_config -- scripts/common.sh@355 -- # echo 2 00:05:33.421 11:17:33 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:33.421 11:17:33 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:33.421 11:17:33 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:33.421 11:17:33 json_config -- scripts/common.sh@368 -- # return 0 00:05:33.421 11:17:33 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:33.421 11:17:33 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:33.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.421 --rc genhtml_branch_coverage=1 00:05:33.421 --rc genhtml_function_coverage=1 00:05:33.421 --rc genhtml_legend=1 00:05:33.422 --rc geninfo_all_blocks=1 00:05:33.422 --rc geninfo_unexecuted_blocks=1 00:05:33.422 00:05:33.422 ' 00:05:33.422 11:17:33 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:33.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.422 --rc genhtml_branch_coverage=1 00:05:33.422 --rc genhtml_function_coverage=1 00:05:33.422 --rc genhtml_legend=1 00:05:33.422 --rc geninfo_all_blocks=1 00:05:33.422 --rc geninfo_unexecuted_blocks=1 00:05:33.422 00:05:33.422 ' 00:05:33.422 11:17:33 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:33.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.422 --rc genhtml_branch_coverage=1 00:05:33.422 --rc genhtml_function_coverage=1 00:05:33.422 --rc genhtml_legend=1 00:05:33.422 --rc geninfo_all_blocks=1 00:05:33.422 --rc geninfo_unexecuted_blocks=1 00:05:33.422 00:05:33.422 ' 00:05:33.422 11:17:33 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:33.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.422 --rc genhtml_branch_coverage=1 00:05:33.422 --rc genhtml_function_coverage=1 00:05:33.422 --rc genhtml_legend=1 00:05:33.422 --rc geninfo_all_blocks=1 00:05:33.422 --rc geninfo_unexecuted_blocks=1 00:05:33.422 00:05:33.422 ' 00:05:33.422 11:17:33 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:33.422 11:17:33 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:33.422 11:17:33 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:33.422 11:17:33 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:33.422 11:17:33 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:33.422 11:17:33 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:33.422 11:17:33 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:33.422 11:17:33 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:33.422 11:17:33 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:05:33.422 11:17:33 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:33.422 11:17:33 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:33.422 11:17:33 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:33.422 11:17:33 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:33.422 11:17:33 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:33.422 11:17:33 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:33.422 11:17:33 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:33.422 11:17:33 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:33.422 11:17:33 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:33.422 11:17:33 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:33.422 11:17:33 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:33.422 11:17:33 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:33.422 11:17:33 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:33.422 11:17:33 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:33.422 11:17:33 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:33.422 11:17:33 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:33.422 11:17:33 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:33.422 11:17:33 json_config -- paths/export.sh@5 -- # export PATH 00:05:33.422 11:17:33 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:33.422 11:17:33 json_config -- nvmf/common.sh@51 -- # : 0 00:05:33.422 11:17:33 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:33.422 11:17:33 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:05:33.422 11:17:33 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:33.422 11:17:33 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:33.422 11:17:33 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:33.422 11:17:33 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:33.422 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:33.422 11:17:33 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:33.422 11:17:33 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:33.422 11:17:33 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:33.422 11:17:33 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:33.422 11:17:33 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:33.422 11:17:33 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:33.422 11:17:33 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:33.422 11:17:33 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:33.422 11:17:33 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:33.422 11:17:33 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:33.422 11:17:33 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:33.422 11:17:33 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:33.422 11:17:33 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:33.422 11:17:33 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:33.422 11:17:33 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:33.422 11:17:33 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:33.422 11:17:33 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:33.422 11:17:33 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:33.422 11:17:33 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:05:33.422 INFO: JSON configuration test init 00:05:33.422 11:17:33 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:05:33.422 11:17:33 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:05:33.422 11:17:33 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:33.422 11:17:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:33.422 11:17:33 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:05:33.422 11:17:33 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:33.422 11:17:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:33.422 11:17:33 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:05:33.422 11:17:33 json_config -- 
json_config/common.sh@9 -- # local app=target 00:05:33.422 11:17:33 json_config -- json_config/common.sh@10 -- # shift 00:05:33.422 11:17:33 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:33.422 11:17:33 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:33.422 11:17:33 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:33.422 11:17:33 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:33.422 11:17:33 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:33.422 11:17:33 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3682019 00:05:33.422 11:17:33 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:33.422 11:17:33 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:33.422 Waiting for target to run... 00:05:33.422 11:17:33 json_config -- json_config/common.sh@25 -- # waitforlisten 3682019 /var/tmp/spdk_tgt.sock 00:05:33.422 11:17:33 json_config -- common/autotest_common.sh@833 -- # '[' -z 3682019 ']' 00:05:33.422 11:17:33 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:33.422 11:17:33 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:33.422 11:17:33 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:33.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:33.422 11:17:33 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:33.422 11:17:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:33.422 [2024-11-02 11:17:33.660817] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 
00:05:33.422 [2024-11-02 11:17:33.660910] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3682019 ] 00:05:33.988 [2024-11-02 11:17:34.169473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.988 [2024-11-02 11:17:34.214701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.246 11:17:34 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:34.246 11:17:34 json_config -- common/autotest_common.sh@866 -- # return 0 00:05:34.246 11:17:34 json_config -- json_config/common.sh@26 -- # echo '' 00:05:34.246 00:05:34.246 11:17:34 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:34.246 11:17:34 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:34.246 11:17:34 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:34.246 11:17:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.246 11:17:34 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:34.246 11:17:34 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:34.246 11:17:34 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:34.246 11:17:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.503 11:17:34 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:34.503 11:17:34 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:34.503 11:17:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:37.783 11:17:37 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:05:37.783 11:17:37 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:37.784 11:17:37 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:37.784 11:17:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:37.784 11:17:37 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:37.784 11:17:37 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:37.784 11:17:37 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:37.784 11:17:37 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:37.784 11:17:37 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:37.784 11:17:37 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:37.784 11:17:37 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:37.784 11:17:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:37.784 11:17:38 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:37.784 11:17:38 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:37.784 11:17:38 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:05:37.784 11:17:38 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:37.784 11:17:38 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:37.784 11:17:38 json_config -- json_config/json_config.sh@54 -- # sort 00:05:37.784 11:17:38 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:37.784 11:17:38 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:05:37.784 11:17:38 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:37.784 11:17:38 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:37.784 11:17:38 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:37.784 11:17:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:37.784 11:17:38 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:37.784 11:17:38 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:37.784 11:17:38 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:37.784 11:17:38 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:37.784 11:17:38 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:37.784 11:17:38 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:37.784 11:17:38 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:37.784 11:17:38 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:37.784 11:17:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:37.784 11:17:38 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:37.784 11:17:38 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:05:37.784 11:17:38 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:05:37.784 11:17:38 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:37.784 11:17:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:38.041 MallocForNvmf0 00:05:38.041 11:17:38 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:38.041 11:17:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:38.298 MallocForNvmf1 00:05:38.556 11:17:38 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:38.556 11:17:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:38.556 [2024-11-02 11:17:38.956512] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:38.814 11:17:38 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:38.814 11:17:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:39.072 11:17:39 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:39.072 11:17:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:39.330 11:17:39 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:39.330 11:17:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:39.587 11:17:39 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:39.587 11:17:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:39.845 [2024-11-02 11:17:40.044131] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:39.845 11:17:40 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:39.845 11:17:40 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:39.845 11:17:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:39.845 11:17:40 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:39.845 11:17:40 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:39.845 11:17:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:39.845 11:17:40 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:05:39.845 11:17:40 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:39.845 11:17:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:40.102 MallocBdevForConfigChangeCheck 00:05:40.102 11:17:40 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:40.102 11:17:40 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:40.102 11:17:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:40.102 11:17:40 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:40.102 11:17:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:40.667 11:17:40 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:05:40.667 INFO: shutting down applications... 
00:05:40.667 11:17:40 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:40.667 11:17:40 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:40.667 11:17:40 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:40.667 11:17:40 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:42.563 Calling clear_iscsi_subsystem 00:05:42.563 Calling clear_nvmf_subsystem 00:05:42.563 Calling clear_nbd_subsystem 00:05:42.563 Calling clear_ublk_subsystem 00:05:42.563 Calling clear_vhost_blk_subsystem 00:05:42.563 Calling clear_vhost_scsi_subsystem 00:05:42.563 Calling clear_bdev_subsystem 00:05:42.563 11:17:42 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:42.563 11:17:42 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:42.563 11:17:42 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:42.563 11:17:42 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:42.563 11:17:42 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:42.563 11:17:42 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:42.563 11:17:42 json_config -- json_config/json_config.sh@352 -- # break 00:05:42.563 11:17:42 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:42.563 11:17:42 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:05:42.563 11:17:42 json_config -- json_config/common.sh@31 -- # local app=target 00:05:42.563 11:17:42 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:42.563 11:17:42 json_config -- json_config/common.sh@35 -- # [[ -n 3682019 ]] 00:05:42.563 11:17:42 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3682019 00:05:42.563 11:17:42 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:42.563 11:17:42 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:42.563 11:17:42 json_config -- json_config/common.sh@41 -- # kill -0 3682019 00:05:42.563 11:17:42 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:43.164 11:17:43 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:43.164 11:17:43 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:43.164 11:17:43 json_config -- json_config/common.sh@41 -- # kill -0 3682019 00:05:43.164 11:17:43 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:43.164 11:17:43 json_config -- json_config/common.sh@43 -- # break 00:05:43.164 11:17:43 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:43.164 11:17:43 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:43.164 SPDK target shutdown done 00:05:43.164 11:17:43 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:05:43.164 INFO: relaunching applications... 
00:05:43.164 11:17:43 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:43.164 11:17:43 json_config -- json_config/common.sh@9 -- # local app=target 00:05:43.164 11:17:43 json_config -- json_config/common.sh@10 -- # shift 00:05:43.164 11:17:43 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:43.164 11:17:43 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:43.164 11:17:43 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:43.164 11:17:43 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:43.164 11:17:43 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:43.164 11:17:43 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3683225 00:05:43.164 11:17:43 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:43.164 11:17:43 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:43.164 Waiting for target to run... 00:05:43.164 11:17:43 json_config -- json_config/common.sh@25 -- # waitforlisten 3683225 /var/tmp/spdk_tgt.sock 00:05:43.164 11:17:43 json_config -- common/autotest_common.sh@833 -- # '[' -z 3683225 ']' 00:05:43.164 11:17:43 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:43.164 11:17:43 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:43.164 11:17:43 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:43.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:43.164 11:17:43 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:43.164 11:17:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:43.164 [2024-11-02 11:17:43.453017] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:05:43.164 [2024-11-02 11:17:43.453111] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3683225 ] 00:05:43.730 [2024-11-02 11:17:43.959100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.730 [2024-11-02 11:17:44.005178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.008 [2024-11-02 11:17:47.061800] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:47.008 [2024-11-02 11:17:47.094287] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:47.008 11:17:47 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:47.008 11:17:47 json_config -- common/autotest_common.sh@866 -- # return 0 00:05:47.008 11:17:47 json_config -- json_config/common.sh@26 -- # echo '' 00:05:47.008 00:05:47.008 11:17:47 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:47.008 11:17:47 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:47.008 INFO: Checking if target configuration is the same... 
00:05:47.008 11:17:47 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:47.008 11:17:47 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:47.008 11:17:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:47.008 + '[' 2 -ne 2 ']' 00:05:47.008 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:47.008 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:47.008 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:47.008 +++ basename /dev/fd/62 00:05:47.008 ++ mktemp /tmp/62.XXX 00:05:47.008 + tmp_file_1=/tmp/62.YFG 00:05:47.008 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:47.008 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:47.008 + tmp_file_2=/tmp/spdk_tgt_config.json.vnV 00:05:47.008 + ret=0 00:05:47.008 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:47.265 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:47.265 + diff -u /tmp/62.YFG /tmp/spdk_tgt_config.json.vnV 00:05:47.265 + echo 'INFO: JSON config files are the same' 00:05:47.265 INFO: JSON config files are the same 00:05:47.265 + rm /tmp/62.YFG /tmp/spdk_tgt_config.json.vnV 00:05:47.265 + exit 0 00:05:47.265 11:17:47 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:47.265 11:17:47 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:47.265 INFO: changing configuration and checking if this can be detected... 00:05:47.265 11:17:47 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:47.265 11:17:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:47.522 11:17:47 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:47.523 11:17:47 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:47.523 11:17:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:47.523 + '[' 2 -ne 2 ']' 00:05:47.523 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:47.523 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:05:47.523 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:47.523 +++ basename /dev/fd/62 00:05:47.523 ++ mktemp /tmp/62.XXX 00:05:47.523 + tmp_file_1=/tmp/62.Uwc 00:05:47.523 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:47.523 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:47.523 + tmp_file_2=/tmp/spdk_tgt_config.json.3zH 00:05:47.523 + ret=0 00:05:47.523 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:48.088 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:48.088 + diff -u /tmp/62.Uwc /tmp/spdk_tgt_config.json.3zH 00:05:48.088 + ret=1 00:05:48.088 + echo '=== Start of file: /tmp/62.Uwc ===' 00:05:48.088 + cat /tmp/62.Uwc 00:05:48.088 + echo '=== End of file: /tmp/62.Uwc ===' 00:05:48.088 + echo '' 00:05:48.088 + echo '=== Start of file: /tmp/spdk_tgt_config.json.3zH ===' 00:05:48.088 + cat /tmp/spdk_tgt_config.json.3zH 00:05:48.088 + echo '=== End of file: /tmp/spdk_tgt_config.json.3zH ===' 00:05:48.088 + echo '' 00:05:48.088 + rm /tmp/62.Uwc /tmp/spdk_tgt_config.json.3zH 00:05:48.088 + exit 1 00:05:48.088 11:17:48 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:05:48.088 INFO: configuration change detected. 00:05:48.088 11:17:48 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:48.088 11:17:48 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:48.088 11:17:48 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:48.088 11:17:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:48.088 11:17:48 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:48.088 11:17:48 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:48.088 11:17:48 json_config -- json_config/json_config.sh@324 -- # [[ -n 3683225 ]] 00:05:48.088 11:17:48 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:48.088 11:17:48 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:48.088 11:17:48 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:48.088 11:17:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:48.088 11:17:48 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:48.088 11:17:48 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:48.088 11:17:48 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:48.088 11:17:48 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:48.088 11:17:48 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:48.088 11:17:48 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:48.088 11:17:48 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:48.088 11:17:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:48.088 11:17:48 json_config -- json_config/json_config.sh@330 -- # killprocess 3683225 00:05:48.088 11:17:48 json_config -- common/autotest_common.sh@952 -- # '[' -z 3683225 ']' 00:05:48.088 11:17:48 json_config -- common/autotest_common.sh@956 -- # kill -0 3683225 00:05:48.088 11:17:48 json_config -- common/autotest_common.sh@957 -- # uname 00:05:48.088 11:17:48 json_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:48.088 11:17:48 
json_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3683225 00:05:48.088 11:17:48 json_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:48.088 11:17:48 json_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:48.088 11:17:48 json_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3683225' 00:05:48.088 killing process with pid 3683225 00:05:48.088 11:17:48 json_config -- common/autotest_common.sh@971 -- # kill 3683225 00:05:48.088 11:17:48 json_config -- common/autotest_common.sh@976 -- # wait 3683225 00:05:49.986 11:17:50 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:49.986 11:17:50 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:49.986 11:17:50 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:49.986 11:17:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:49.986 11:17:50 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:49.986 11:17:50 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:49.986 INFO: Success 00:05:49.986 00:05:49.986 real 0m16.588s 00:05:49.986 user 0m18.726s 00:05:49.986 sys 0m2.223s 00:05:49.986 11:17:50 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:49.986 11:17:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:49.986 ************************************ 00:05:49.986 END TEST json_config 00:05:49.986 ************************************ 00:05:49.986 11:17:50 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:49.986 11:17:50 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:49.986 11:17:50 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:49.986 11:17:50 -- common/autotest_common.sh@10 -- # set +x 00:05:49.986 ************************************ 00:05:49.986 START TEST json_config_extra_key 00:05:49.986 ************************************ 00:05:49.986 11:17:50 json_config_extra_key -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:49.986 11:17:50 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:49.986 11:17:50 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:05:49.986 11:17:50 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:49.986 11:17:50 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:49.986 11:17:50 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:49.986 11:17:50 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:49.986 11:17:50 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:49.986 11:17:50 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:49.986 11:17:50 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:49.986 11:17:50 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:49.986 11:17:50 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:49.986 11:17:50 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:49.986 11:17:50 json_config_extra_key 
-- scripts/common.sh@340 -- # ver1_l=2 00:05:49.986 11:17:50 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:49.986 11:17:50 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:49.986 11:17:50 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:49.986 11:17:50 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:49.986 11:17:50 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:49.986 11:17:50 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:49.986 11:17:50 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:49.986 11:17:50 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:49.986 11:17:50 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:49.986 11:17:50 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:49.986 11:17:50 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:49.986 11:17:50 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:49.986 11:17:50 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:49.986 11:17:50 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:49.986 11:17:50 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:49.986 11:17:50 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:49.986 11:17:50 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:49.986 11:17:50 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:49.986 11:17:50 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:49.986 11:17:50 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:49.986 11:17:50 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:49.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.986 --rc genhtml_branch_coverage=1 00:05:49.986 --rc genhtml_function_coverage=1 00:05:49.986 --rc genhtml_legend=1 00:05:49.986 --rc geninfo_all_blocks=1 00:05:49.986 --rc geninfo_unexecuted_blocks=1 00:05:49.986 00:05:49.986 ' 00:05:49.986 11:17:50 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:49.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.986 --rc genhtml_branch_coverage=1 00:05:49.986 --rc genhtml_function_coverage=1 00:05:49.986 --rc genhtml_legend=1 00:05:49.986 --rc geninfo_all_blocks=1 00:05:49.986 --rc geninfo_unexecuted_blocks=1 00:05:49.986 00:05:49.986 ' 00:05:49.986 11:17:50 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:49.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.986 --rc genhtml_branch_coverage=1 00:05:49.986 --rc genhtml_function_coverage=1 00:05:49.986 --rc genhtml_legend=1 00:05:49.986 --rc geninfo_all_blocks=1 00:05:49.986 --rc geninfo_unexecuted_blocks=1 00:05:49.986 00:05:49.986 ' 00:05:49.986 11:17:50 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:49.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.986 --rc genhtml_branch_coverage=1 00:05:49.986 --rc genhtml_function_coverage=1 00:05:49.986 --rc genhtml_legend=1 00:05:49.986 --rc geninfo_all_blocks=1 00:05:49.986 --rc geninfo_unexecuted_blocks=1 00:05:49.986 00:05:49.986 ' 00:05:49.986 11:17:50 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:49.986 11:17:50 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:49.986 11:17:50 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:49.986 11:17:50 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:49.986 11:17:50 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:49.986 11:17:50 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:49.986 11:17:50 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:49.986 11:17:50 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:49.986 11:17:50 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:49.986 11:17:50 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:49.986 11:17:50 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:49.986 11:17:50 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:49.986 11:17:50 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:49.986 11:17:50 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:49.986 11:17:50 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:49.986 11:17:50 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:49.986 11:17:50 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:49.986 11:17:50 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:49.986 11:17:50 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:49.986 11:17:50 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:49.986 11:17:50 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:49.986 11:17:50 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:49.986 11:17:50 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:49.986 11:17:50 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.987 11:17:50 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.987 11:17:50 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.987 11:17:50 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:49.987 11:17:50 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.987 11:17:50 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:49.987 11:17:50 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:49.987 11:17:50 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:49.987 11:17:50 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:49.987 11:17:50 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:49.987 11:17:50 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:49.987 11:17:50 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:49.987 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:49.987 11:17:50 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:49.987 11:17:50 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:49.987 11:17:50 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:49.987 11:17:50 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:49.987 11:17:50 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:49.987 11:17:50 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:49.987 11:17:50 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:49.987 11:17:50 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:49.987 11:17:50 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:49.987 11:17:50 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:49.987 11:17:50 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:49.987 11:17:50 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:49.987 11:17:50 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:49.987 11:17:50 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:49.987 INFO: launching applications... 
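What "launching applications..." expands to just below is a background spdk_tgt started with the extra_key.json config, followed by a wait until its RPC socket answers. A rough sketch under the same paths; the readiness probe here is an assumed stand-in, the suite's waitforlisten helper is more involved:

SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
RPC_PY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk_tgt.sock
CONF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json
"$SPDK_BIN" -m 0x1 -s 1024 -r "$SOCK" --json "$CONF" &
app_pid=$!
echo 'Waiting for target to run...'
# poll the RPC socket until the target responds (assumed probe, bounded retries)
for i in $(seq 1 100); do
    "$RPC_PY" -s "$SOCK" rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.5
done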
00:05:49.987 11:17:50 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:49.987 11:17:50 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:49.987 11:17:50 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:49.987 11:17:50 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:49.987 11:17:50 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:49.987 11:17:50 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:49.987 11:17:50 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:49.987 11:17:50 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:49.987 11:17:50 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3684148 00:05:49.987 11:17:50 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:49.987 11:17:50 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:49.987 Waiting for target to run... 00:05:49.987 11:17:50 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3684148 /var/tmp/spdk_tgt.sock 00:05:49.987 11:17:50 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 3684148 ']' 00:05:49.987 11:17:50 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:49.987 11:17:50 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:49.987 11:17:50 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:49.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:49.987 11:17:50 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:49.987 11:17:50 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:49.987 [2024-11-02 11:17:50.284192] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:05:49.987 [2024-11-02 11:17:50.284316] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3684148 ] 00:05:50.554 [2024-11-02 11:17:50.823124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.554 [2024-11-02 11:17:50.869300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.118 11:17:51 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:51.118 11:17:51 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:05:51.118 11:17:51 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:51.118 00:05:51.118 11:17:51 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:51.118 INFO: shutting down applications... 
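The shutdown that follows is the usual json_config/common.sh pattern: send SIGINT to the recorded pid, then poll it for up to 30 half-second intervals before declaring the target gone. Condensed from the trace below:

app_pid=3684148          # pid recorded when the target was launched
kill -SIGINT "$app_pid"
for (( i = 0; i < 30; i++ )); do
    kill -0 "$app_pid" 2>/dev/null || break   # process has exited
    sleep 0.5
done
echo 'SPDK target shutdown done'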
00:05:51.118 11:17:51 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:51.118 11:17:51 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:51.118 11:17:51 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:51.118 11:17:51 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3684148 ]] 00:05:51.118 11:17:51 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3684148 00:05:51.118 11:17:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:51.118 11:17:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:51.118 11:17:51 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3684148 00:05:51.118 11:17:51 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:51.686 11:17:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:51.686 11:17:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:51.686 11:17:51 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3684148 00:05:51.686 11:17:51 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:51.686 11:17:51 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:51.686 11:17:51 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:51.686 11:17:51 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:51.686 SPDK target shutdown done 00:05:51.686 11:17:51 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:51.686 Success 00:05:51.686 00:05:51.686 real 0m1.697s 00:05:51.686 user 0m1.526s 00:05:51.686 sys 0m0.626s 00:05:51.686 11:17:51 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:51.686 11:17:51 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:51.686 ************************************ 00:05:51.686 END TEST json_config_extra_key 00:05:51.686 ************************************ 00:05:51.686 11:17:51 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:51.686 11:17:51 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:51.686 11:17:51 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:51.686 11:17:51 -- common/autotest_common.sh@10 -- # set +x 00:05:51.686 ************************************ 00:05:51.686 START TEST alias_rpc 00:05:51.686 ************************************ 00:05:51.686 11:17:51 alias_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:51.686 * Looking for test storage... 
00:05:51.686 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:51.686 11:17:51 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:51.686 11:17:51 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:51.686 11:17:51 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:51.686 11:17:51 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:51.686 11:17:51 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:51.686 11:17:51 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:51.686 11:17:51 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:51.686 11:17:51 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:51.686 11:17:51 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:51.686 11:17:51 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:51.686 11:17:51 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:51.686 11:17:51 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:51.686 11:17:51 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:51.686 11:17:51 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:51.686 11:17:51 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:51.686 11:17:51 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:51.686 11:17:51 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:51.686 11:17:51 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:51.686 11:17:51 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:51.686 11:17:51 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:51.686 11:17:51 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:51.686 11:17:51 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:51.686 11:17:51 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:51.686 11:17:51 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:51.686 11:17:51 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:51.686 11:17:51 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:51.686 11:17:51 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:51.686 11:17:51 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:51.686 11:17:51 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:51.686 11:17:51 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:51.686 11:17:51 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:51.686 11:17:51 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:51.686 11:17:51 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:51.686 11:17:51 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:51.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.686 --rc genhtml_branch_coverage=1 00:05:51.686 --rc genhtml_function_coverage=1 00:05:51.686 --rc genhtml_legend=1 00:05:51.686 --rc geninfo_all_blocks=1 00:05:51.686 --rc geninfo_unexecuted_blocks=1 00:05:51.686 00:05:51.686 ' 00:05:51.686 11:17:51 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:51.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.686 --rc genhtml_branch_coverage=1 00:05:51.686 --rc genhtml_function_coverage=1 00:05:51.686 --rc genhtml_legend=1 00:05:51.686 --rc geninfo_all_blocks=1 00:05:51.686 --rc geninfo_unexecuted_blocks=1 00:05:51.686 00:05:51.686 ' 00:05:51.686 11:17:51 
alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:51.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.686 --rc genhtml_branch_coverage=1 00:05:51.686 --rc genhtml_function_coverage=1 00:05:51.686 --rc genhtml_legend=1 00:05:51.686 --rc geninfo_all_blocks=1 00:05:51.686 --rc geninfo_unexecuted_blocks=1 00:05:51.686 00:05:51.686 ' 00:05:51.686 11:17:51 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:51.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.686 --rc genhtml_branch_coverage=1 00:05:51.686 --rc genhtml_function_coverage=1 00:05:51.686 --rc genhtml_legend=1 00:05:51.686 --rc geninfo_all_blocks=1 00:05:51.686 --rc geninfo_unexecuted_blocks=1 00:05:51.686 00:05:51.686 ' 00:05:51.686 11:17:51 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:51.686 11:17:51 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3684458 00:05:51.686 11:17:51 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:51.686 11:17:51 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3684458 00:05:51.686 11:17:51 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 3684458 ']' 00:05:51.687 11:17:51 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.687 11:17:51 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:51.687 11:17:51 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.687 11:17:51 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:51.687 11:17:51 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.687 [2024-11-02 11:17:52.024180] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 
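alias_rpc arms trap 'killprocess $spdk_tgt_pid; exit 1' ERR before doing anything else, and the same killprocess helper closes out every test in this run. Pieced together from the surrounding traces, its core is roughly the following; this is a simplification, and the real autotest_common.sh version handles the sudo case by targeting the child process instead of bailing out:

killprocess() {
    local pid=$1 process_name
    [ -n "$pid" ] || return 1                        # need a recorded pid
    kill -0 "$pid" || return 0                       # already gone, nothing to do
    process_name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_0 for spdk_tgt
    [ "$process_name" = sudo ] && return 1           # do not kill a sudo wrapper directly
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                      # reap it so the exit status is seen
}
killprocess 3684458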
00:05:51.687 [2024-11-02 11:17:52.024303] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3684458 ] 00:05:51.944 [2024-11-02 11:17:52.089720] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.944 [2024-11-02 11:17:52.136431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.202 11:17:52 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:52.202 11:17:52 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:52.202 11:17:52 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:52.459 11:17:52 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3684458 00:05:52.459 11:17:52 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 3684458 ']' 00:05:52.459 11:17:52 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 3684458 00:05:52.459 11:17:52 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:05:52.459 11:17:52 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:52.459 11:17:52 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3684458 00:05:52.459 11:17:52 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:52.459 11:17:52 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:52.459 11:17:52 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3684458' 00:05:52.459 killing process with pid 3684458 00:05:52.459 11:17:52 alias_rpc -- common/autotest_common.sh@971 -- # kill 3684458 00:05:52.459 11:17:52 alias_rpc -- common/autotest_common.sh@976 -- # wait 3684458 00:05:53.026 00:05:53.026 real 0m1.304s 00:05:53.026 user 0m1.421s 00:05:53.026 sys 0m0.441s 00:05:53.026 11:17:53 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:53.026 11:17:53 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.026 ************************************ 00:05:53.026 END TEST alias_rpc 00:05:53.026 ************************************ 00:05:53.026 11:17:53 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:53.026 11:17:53 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:53.026 11:17:53 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:53.026 11:17:53 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:53.026 11:17:53 -- common/autotest_common.sh@10 -- # set +x 00:05:53.026 ************************************ 00:05:53.026 START TEST spdkcli_tcp 00:05:53.026 ************************************ 00:05:53.026 11:17:53 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:53.026 * Looking for test storage... 
00:05:53.026 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:53.026 11:17:53 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:53.026 11:17:53 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:05:53.026 11:17:53 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:53.026 11:17:53 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:53.026 11:17:53 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:53.026 11:17:53 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:53.026 11:17:53 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:53.026 11:17:53 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:53.026 11:17:53 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:53.026 11:17:53 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:53.026 11:17:53 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:53.026 11:17:53 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:53.026 11:17:53 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:53.026 11:17:53 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:53.026 11:17:53 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:53.026 11:17:53 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:53.026 11:17:53 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:53.026 11:17:53 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:53.026 11:17:53 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:53.026 11:17:53 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:53.026 11:17:53 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:53.026 11:17:53 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:53.026 11:17:53 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:53.026 11:17:53 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:53.026 11:17:53 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:53.026 11:17:53 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:53.026 11:17:53 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:53.026 11:17:53 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:53.026 11:17:53 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:53.026 11:17:53 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:53.026 11:17:53 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:53.026 11:17:53 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:53.026 11:17:53 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:53.026 11:17:53 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:53.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.026 --rc genhtml_branch_coverage=1 00:05:53.026 --rc genhtml_function_coverage=1 00:05:53.026 --rc genhtml_legend=1 00:05:53.026 --rc geninfo_all_blocks=1 00:05:53.026 --rc geninfo_unexecuted_blocks=1 00:05:53.026 00:05:53.026 ' 00:05:53.026 11:17:53 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:53.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.026 --rc genhtml_branch_coverage=1 00:05:53.026 --rc genhtml_function_coverage=1 00:05:53.026 --rc genhtml_legend=1 00:05:53.026 --rc geninfo_all_blocks=1 00:05:53.026 --rc 
geninfo_unexecuted_blocks=1 00:05:53.026 00:05:53.026 ' 00:05:53.026 11:17:53 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:53.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.026 --rc genhtml_branch_coverage=1 00:05:53.026 --rc genhtml_function_coverage=1 00:05:53.026 --rc genhtml_legend=1 00:05:53.026 --rc geninfo_all_blocks=1 00:05:53.026 --rc geninfo_unexecuted_blocks=1 00:05:53.026 00:05:53.026 ' 00:05:53.026 11:17:53 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:53.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.026 --rc genhtml_branch_coverage=1 00:05:53.026 --rc genhtml_function_coverage=1 00:05:53.026 --rc genhtml_legend=1 00:05:53.026 --rc geninfo_all_blocks=1 00:05:53.026 --rc geninfo_unexecuted_blocks=1 00:05:53.026 00:05:53.026 ' 00:05:53.026 11:17:53 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:53.026 11:17:53 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:53.026 11:17:53 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:53.026 11:17:53 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:53.026 11:17:53 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:53.026 11:17:53 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:53.026 11:17:53 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:53.026 11:17:53 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:53.026 11:17:53 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:53.026 11:17:53 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3684658 00:05:53.026 11:17:53 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:53.026 11:17:53 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3684658 00:05:53.026 11:17:53 spdkcli_tcp -- common/autotest_common.sh@833 -- # '[' -z 3684658 ']' 00:05:53.026 11:17:53 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.026 11:17:53 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:53.026 11:17:53 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.026 11:17:53 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:53.026 11:17:53 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:53.026 [2024-11-02 11:17:53.379318] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 
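spdkcli_tcp's twist, which plays out just below, is that the RPC traffic goes over TCP: a socat process bridges 127.0.0.1:9998 to the target's UNIX socket, and rpc.py is pointed at the TCP side with the retry and timeout flags seen in the trace. Roughly:

# bridge the target's UNIX-domain RPC socket onto localhost TCP port 9998
socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
socat_pid=$!
# query the method list through the bridge, with the same flags the test uses
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
kill "$socat_pid" 2>/dev/null   # tear the bridge down afterwards (cleanup assumed)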
00:05:53.026 [2024-11-02 11:17:53.379414] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3684658 ] 00:05:53.285 [2024-11-02 11:17:53.450054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:53.285 [2024-11-02 11:17:53.502276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:53.285 [2024-11-02 11:17:53.502282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.543 11:17:53 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:53.543 11:17:53 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:05:53.543 11:17:53 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3684760 00:05:53.543 11:17:53 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:53.543 11:17:53 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:53.800 [ 00:05:53.800 "bdev_malloc_delete", 00:05:53.800 "bdev_malloc_create", 00:05:53.800 "bdev_null_resize", 00:05:53.800 "bdev_null_delete", 00:05:53.800 "bdev_null_create", 00:05:53.800 "bdev_nvme_cuse_unregister", 00:05:53.800 "bdev_nvme_cuse_register", 00:05:53.800 "bdev_opal_new_user", 00:05:53.800 "bdev_opal_set_lock_state", 00:05:53.800 "bdev_opal_delete", 00:05:53.800 "bdev_opal_get_info", 00:05:53.800 "bdev_opal_create", 00:05:53.800 "bdev_nvme_opal_revert", 00:05:53.800 "bdev_nvme_opal_init", 00:05:53.800 "bdev_nvme_send_cmd", 00:05:53.800 "bdev_nvme_set_keys", 00:05:53.800 "bdev_nvme_get_path_iostat", 00:05:53.800 "bdev_nvme_get_mdns_discovery_info", 00:05:53.800 "bdev_nvme_stop_mdns_discovery", 00:05:53.800 "bdev_nvme_start_mdns_discovery", 00:05:53.800 "bdev_nvme_set_multipath_policy", 00:05:53.800 "bdev_nvme_set_preferred_path", 00:05:53.800 "bdev_nvme_get_io_paths", 00:05:53.800 "bdev_nvme_remove_error_injection", 00:05:53.800 "bdev_nvme_add_error_injection", 00:05:53.800 "bdev_nvme_get_discovery_info", 00:05:53.800 "bdev_nvme_stop_discovery", 00:05:53.800 "bdev_nvme_start_discovery", 00:05:53.800 "bdev_nvme_get_controller_health_info", 00:05:53.800 "bdev_nvme_disable_controller", 00:05:53.800 "bdev_nvme_enable_controller", 00:05:53.800 "bdev_nvme_reset_controller", 00:05:53.800 "bdev_nvme_get_transport_statistics", 00:05:53.800 "bdev_nvme_apply_firmware", 00:05:53.800 "bdev_nvme_detach_controller", 00:05:53.800 "bdev_nvme_get_controllers", 00:05:53.800 "bdev_nvme_attach_controller", 00:05:53.801 "bdev_nvme_set_hotplug", 00:05:53.801 "bdev_nvme_set_options", 00:05:53.801 "bdev_passthru_delete", 00:05:53.801 "bdev_passthru_create", 00:05:53.801 "bdev_lvol_set_parent_bdev", 00:05:53.801 "bdev_lvol_set_parent", 00:05:53.801 "bdev_lvol_check_shallow_copy", 00:05:53.801 "bdev_lvol_start_shallow_copy", 00:05:53.801 "bdev_lvol_grow_lvstore", 00:05:53.801 "bdev_lvol_get_lvols", 00:05:53.801 "bdev_lvol_get_lvstores", 00:05:53.801 "bdev_lvol_delete", 00:05:53.801 "bdev_lvol_set_read_only", 00:05:53.801 "bdev_lvol_resize", 00:05:53.801 "bdev_lvol_decouple_parent", 00:05:53.801 "bdev_lvol_inflate", 00:05:53.801 "bdev_lvol_rename", 00:05:53.801 "bdev_lvol_clone_bdev", 00:05:53.801 "bdev_lvol_clone", 00:05:53.801 "bdev_lvol_snapshot", 00:05:53.801 "bdev_lvol_create", 00:05:53.801 "bdev_lvol_delete_lvstore", 00:05:53.801 "bdev_lvol_rename_lvstore", 
00:05:53.801 "bdev_lvol_create_lvstore", 00:05:53.801 "bdev_raid_set_options", 00:05:53.801 "bdev_raid_remove_base_bdev", 00:05:53.801 "bdev_raid_add_base_bdev", 00:05:53.801 "bdev_raid_delete", 00:05:53.801 "bdev_raid_create", 00:05:53.801 "bdev_raid_get_bdevs", 00:05:53.801 "bdev_error_inject_error", 00:05:53.801 "bdev_error_delete", 00:05:53.801 "bdev_error_create", 00:05:53.801 "bdev_split_delete", 00:05:53.801 "bdev_split_create", 00:05:53.801 "bdev_delay_delete", 00:05:53.801 "bdev_delay_create", 00:05:53.801 "bdev_delay_update_latency", 00:05:53.801 "bdev_zone_block_delete", 00:05:53.801 "bdev_zone_block_create", 00:05:53.801 "blobfs_create", 00:05:53.801 "blobfs_detect", 00:05:53.801 "blobfs_set_cache_size", 00:05:53.801 "bdev_aio_delete", 00:05:53.801 "bdev_aio_rescan", 00:05:53.801 "bdev_aio_create", 00:05:53.801 "bdev_ftl_set_property", 00:05:53.801 "bdev_ftl_get_properties", 00:05:53.801 "bdev_ftl_get_stats", 00:05:53.801 "bdev_ftl_unmap", 00:05:53.801 "bdev_ftl_unload", 00:05:53.801 "bdev_ftl_delete", 00:05:53.801 "bdev_ftl_load", 00:05:53.801 "bdev_ftl_create", 00:05:53.801 "bdev_virtio_attach_controller", 00:05:53.801 "bdev_virtio_scsi_get_devices", 00:05:53.801 "bdev_virtio_detach_controller", 00:05:53.801 "bdev_virtio_blk_set_hotplug", 00:05:53.801 "bdev_iscsi_delete", 00:05:53.801 "bdev_iscsi_create", 00:05:53.801 "bdev_iscsi_set_options", 00:05:53.801 "accel_error_inject_error", 00:05:53.801 "ioat_scan_accel_module", 00:05:53.801 "dsa_scan_accel_module", 00:05:53.801 "iaa_scan_accel_module", 00:05:53.801 "vfu_virtio_create_fs_endpoint", 00:05:53.801 "vfu_virtio_create_scsi_endpoint", 00:05:53.801 "vfu_virtio_scsi_remove_target", 00:05:53.801 "vfu_virtio_scsi_add_target", 00:05:53.801 "vfu_virtio_create_blk_endpoint", 00:05:53.801 "vfu_virtio_delete_endpoint", 00:05:53.801 "keyring_file_remove_key", 00:05:53.801 "keyring_file_add_key", 00:05:53.801 "keyring_linux_set_options", 00:05:53.801 "fsdev_aio_delete", 00:05:53.801 "fsdev_aio_create", 00:05:53.801 "iscsi_get_histogram", 00:05:53.801 "iscsi_enable_histogram", 00:05:53.801 "iscsi_set_options", 00:05:53.801 "iscsi_get_auth_groups", 00:05:53.801 "iscsi_auth_group_remove_secret", 00:05:53.801 "iscsi_auth_group_add_secret", 00:05:53.801 "iscsi_delete_auth_group", 00:05:53.801 "iscsi_create_auth_group", 00:05:53.801 "iscsi_set_discovery_auth", 00:05:53.801 "iscsi_get_options", 00:05:53.801 "iscsi_target_node_request_logout", 00:05:53.801 "iscsi_target_node_set_redirect", 00:05:53.801 "iscsi_target_node_set_auth", 00:05:53.801 "iscsi_target_node_add_lun", 00:05:53.801 "iscsi_get_stats", 00:05:53.801 "iscsi_get_connections", 00:05:53.801 "iscsi_portal_group_set_auth", 00:05:53.801 "iscsi_start_portal_group", 00:05:53.801 "iscsi_delete_portal_group", 00:05:53.801 "iscsi_create_portal_group", 00:05:53.801 "iscsi_get_portal_groups", 00:05:53.801 "iscsi_delete_target_node", 00:05:53.801 "iscsi_target_node_remove_pg_ig_maps", 00:05:53.801 "iscsi_target_node_add_pg_ig_maps", 00:05:53.801 "iscsi_create_target_node", 00:05:53.801 "iscsi_get_target_nodes", 00:05:53.801 "iscsi_delete_initiator_group", 00:05:53.801 "iscsi_initiator_group_remove_initiators", 00:05:53.801 "iscsi_initiator_group_add_initiators", 00:05:53.801 "iscsi_create_initiator_group", 00:05:53.801 "iscsi_get_initiator_groups", 00:05:53.801 "nvmf_set_crdt", 00:05:53.801 "nvmf_set_config", 00:05:53.801 "nvmf_set_max_subsystems", 00:05:53.801 "nvmf_stop_mdns_prr", 00:05:53.801 "nvmf_publish_mdns_prr", 00:05:53.801 "nvmf_subsystem_get_listeners", 00:05:53.801 
"nvmf_subsystem_get_qpairs", 00:05:53.801 "nvmf_subsystem_get_controllers", 00:05:53.801 "nvmf_get_stats", 00:05:53.801 "nvmf_get_transports", 00:05:53.801 "nvmf_create_transport", 00:05:53.801 "nvmf_get_targets", 00:05:53.801 "nvmf_delete_target", 00:05:53.801 "nvmf_create_target", 00:05:53.801 "nvmf_subsystem_allow_any_host", 00:05:53.801 "nvmf_subsystem_set_keys", 00:05:53.801 "nvmf_subsystem_remove_host", 00:05:53.801 "nvmf_subsystem_add_host", 00:05:53.801 "nvmf_ns_remove_host", 00:05:53.801 "nvmf_ns_add_host", 00:05:53.801 "nvmf_subsystem_remove_ns", 00:05:53.801 "nvmf_subsystem_set_ns_ana_group", 00:05:53.801 "nvmf_subsystem_add_ns", 00:05:53.801 "nvmf_subsystem_listener_set_ana_state", 00:05:53.801 "nvmf_discovery_get_referrals", 00:05:53.801 "nvmf_discovery_remove_referral", 00:05:53.801 "nvmf_discovery_add_referral", 00:05:53.801 "nvmf_subsystem_remove_listener", 00:05:53.801 "nvmf_subsystem_add_listener", 00:05:53.801 "nvmf_delete_subsystem", 00:05:53.801 "nvmf_create_subsystem", 00:05:53.801 "nvmf_get_subsystems", 00:05:53.801 "env_dpdk_get_mem_stats", 00:05:53.801 "nbd_get_disks", 00:05:53.801 "nbd_stop_disk", 00:05:53.801 "nbd_start_disk", 00:05:53.801 "ublk_recover_disk", 00:05:53.801 "ublk_get_disks", 00:05:53.801 "ublk_stop_disk", 00:05:53.801 "ublk_start_disk", 00:05:53.801 "ublk_destroy_target", 00:05:53.801 "ublk_create_target", 00:05:53.801 "virtio_blk_create_transport", 00:05:53.801 "virtio_blk_get_transports", 00:05:53.801 "vhost_controller_set_coalescing", 00:05:53.801 "vhost_get_controllers", 00:05:53.801 "vhost_delete_controller", 00:05:53.801 "vhost_create_blk_controller", 00:05:53.801 "vhost_scsi_controller_remove_target", 00:05:53.801 "vhost_scsi_controller_add_target", 00:05:53.801 "vhost_start_scsi_controller", 00:05:53.801 "vhost_create_scsi_controller", 00:05:53.801 "thread_set_cpumask", 00:05:53.801 "scheduler_set_options", 00:05:53.801 "framework_get_governor", 00:05:53.801 "framework_get_scheduler", 00:05:53.801 "framework_set_scheduler", 00:05:53.801 "framework_get_reactors", 00:05:53.801 "thread_get_io_channels", 00:05:53.801 "thread_get_pollers", 00:05:53.801 "thread_get_stats", 00:05:53.801 "framework_monitor_context_switch", 00:05:53.801 "spdk_kill_instance", 00:05:53.801 "log_enable_timestamps", 00:05:53.801 "log_get_flags", 00:05:53.801 "log_clear_flag", 00:05:53.801 "log_set_flag", 00:05:53.801 "log_get_level", 00:05:53.801 "log_set_level", 00:05:53.801 "log_get_print_level", 00:05:53.801 "log_set_print_level", 00:05:53.801 "framework_enable_cpumask_locks", 00:05:53.801 "framework_disable_cpumask_locks", 00:05:53.801 "framework_wait_init", 00:05:53.801 "framework_start_init", 00:05:53.801 "scsi_get_devices", 00:05:53.801 "bdev_get_histogram", 00:05:53.801 "bdev_enable_histogram", 00:05:53.801 "bdev_set_qos_limit", 00:05:53.801 "bdev_set_qd_sampling_period", 00:05:53.801 "bdev_get_bdevs", 00:05:53.801 "bdev_reset_iostat", 00:05:53.801 "bdev_get_iostat", 00:05:53.801 "bdev_examine", 00:05:53.801 "bdev_wait_for_examine", 00:05:53.801 "bdev_set_options", 00:05:53.801 "accel_get_stats", 00:05:53.801 "accel_set_options", 00:05:53.801 "accel_set_driver", 00:05:53.801 "accel_crypto_key_destroy", 00:05:53.801 "accel_crypto_keys_get", 00:05:53.801 "accel_crypto_key_create", 00:05:53.801 "accel_assign_opc", 00:05:53.801 "accel_get_module_info", 00:05:53.801 "accel_get_opc_assignments", 00:05:53.801 "vmd_rescan", 00:05:53.801 "vmd_remove_device", 00:05:53.801 "vmd_enable", 00:05:53.801 "sock_get_default_impl", 00:05:53.801 "sock_set_default_impl", 
00:05:53.801 "sock_impl_set_options", 00:05:53.801 "sock_impl_get_options", 00:05:53.801 "iobuf_get_stats", 00:05:53.801 "iobuf_set_options", 00:05:53.801 "keyring_get_keys", 00:05:53.801 "vfu_tgt_set_base_path", 00:05:53.801 "framework_get_pci_devices", 00:05:53.801 "framework_get_config", 00:05:53.801 "framework_get_subsystems", 00:05:53.801 "fsdev_set_opts", 00:05:53.801 "fsdev_get_opts", 00:05:53.801 "trace_get_info", 00:05:53.801 "trace_get_tpoint_group_mask", 00:05:53.801 "trace_disable_tpoint_group", 00:05:53.801 "trace_enable_tpoint_group", 00:05:53.801 "trace_clear_tpoint_mask", 00:05:53.801 "trace_set_tpoint_mask", 00:05:53.801 "notify_get_notifications", 00:05:53.801 "notify_get_types", 00:05:53.801 "spdk_get_version", 00:05:53.801 "rpc_get_methods" 00:05:53.801 ] 00:05:53.801 11:17:54 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:53.801 11:17:54 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:53.801 11:17:54 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:53.801 11:17:54 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:53.801 11:17:54 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3684658 00:05:53.801 11:17:54 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 3684658 ']' 00:05:53.801 11:17:54 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 3684658 00:05:53.801 11:17:54 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:05:53.801 11:17:54 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:53.801 11:17:54 spdkcli_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3684658 00:05:53.802 11:17:54 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:53.802 11:17:54 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:53.802 11:17:54 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3684658' 00:05:53.802 killing process with pid 3684658 00:05:53.802 11:17:54 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 3684658 00:05:53.802 11:17:54 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 3684658 00:05:54.368 00:05:54.368 real 0m1.324s 00:05:54.368 user 0m2.387s 00:05:54.368 sys 0m0.484s 00:05:54.368 11:17:54 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:54.368 11:17:54 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:54.368 ************************************ 00:05:54.368 END TEST spdkcli_tcp 00:05:54.368 ************************************ 00:05:54.368 11:17:54 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:54.368 11:17:54 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:54.368 11:17:54 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:54.368 11:17:54 -- common/autotest_common.sh@10 -- # set +x 00:05:54.368 ************************************ 00:05:54.368 START TEST dpdk_mem_utility 00:05:54.368 ************************************ 00:05:54.368 11:17:54 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:54.368 * Looking for test storage... 
00:05:54.368 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:54.368 11:17:54 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:54.368 11:17:54 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:05:54.368 11:17:54 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:54.368 11:17:54 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:54.368 11:17:54 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:54.368 11:17:54 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:54.368 11:17:54 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:54.368 11:17:54 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:54.368 11:17:54 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:54.368 11:17:54 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:54.368 11:17:54 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:54.368 11:17:54 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:54.368 11:17:54 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:54.368 11:17:54 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:54.368 11:17:54 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:54.368 11:17:54 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:54.368 11:17:54 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:54.368 11:17:54 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:54.368 11:17:54 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:54.368 11:17:54 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:54.368 11:17:54 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:54.369 11:17:54 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:54.369 11:17:54 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:54.369 11:17:54 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:54.369 11:17:54 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:54.369 11:17:54 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:54.369 11:17:54 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:54.369 11:17:54 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:54.369 11:17:54 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:54.369 11:17:54 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:54.369 11:17:54 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:54.369 11:17:54 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:54.369 11:17:54 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:54.369 11:17:54 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:54.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.369 --rc genhtml_branch_coverage=1 00:05:54.369 --rc genhtml_function_coverage=1 00:05:54.369 --rc genhtml_legend=1 00:05:54.369 --rc geninfo_all_blocks=1 00:05:54.369 --rc geninfo_unexecuted_blocks=1 00:05:54.369 00:05:54.369 ' 00:05:54.369 11:17:54 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:54.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.369 --rc 
genhtml_branch_coverage=1 00:05:54.369 --rc genhtml_function_coverage=1 00:05:54.369 --rc genhtml_legend=1 00:05:54.369 --rc geninfo_all_blocks=1 00:05:54.369 --rc geninfo_unexecuted_blocks=1 00:05:54.369 00:05:54.369 ' 00:05:54.369 11:17:54 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:54.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.369 --rc genhtml_branch_coverage=1 00:05:54.369 --rc genhtml_function_coverage=1 00:05:54.369 --rc genhtml_legend=1 00:05:54.369 --rc geninfo_all_blocks=1 00:05:54.369 --rc geninfo_unexecuted_blocks=1 00:05:54.369 00:05:54.369 ' 00:05:54.369 11:17:54 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:54.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.369 --rc genhtml_branch_coverage=1 00:05:54.369 --rc genhtml_function_coverage=1 00:05:54.369 --rc genhtml_legend=1 00:05:54.369 --rc geninfo_all_blocks=1 00:05:54.369 --rc geninfo_unexecuted_blocks=1 00:05:54.369 00:05:54.369 ' 00:05:54.369 11:17:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:54.369 11:17:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3684871 00:05:54.369 11:17:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:54.369 11:17:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3684871 00:05:54.369 11:17:54 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 3684871 ']' 00:05:54.369 11:17:54 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.369 11:17:54 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:54.369 11:17:54 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.369 11:17:54 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:54.369 11:17:54 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:54.369 [2024-11-02 11:17:54.757112] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 
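The dpdk_mem_utility test that follows drives two steps: env_dpdk_get_mem_stats makes the target write /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py parses that dump, first as the heap/mempool/memzone summary and then per heap with -m 0. The equivalent manual invocation would look roughly like this; rpc_cmd in the test ultimately talks to the same default /var/tmp/spdk.sock socket:

rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# ask the running spdk_tgt to dump its DPDK memory state
"$rootdir/scripts/rpc.py" env_dpdk_get_mem_stats    # prints {"filename": "/tmp/spdk_mem_dump.txt"}
# summarize the dump: heaps, mempools, memzones
"$rootdir/scripts/dpdk_mem_info.py"
# per-heap detail for heap id 0 (the free/busy element lists seen below)
"$rootdir/scripts/dpdk_mem_info.py" -m 0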
00:05:54.369 [2024-11-02 11:17:54.757212] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3684871 ] 00:05:54.628 [2024-11-02 11:17:54.833911] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.628 [2024-11-02 11:17:54.886729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.886 11:17:55 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:54.886 11:17:55 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:05:54.886 11:17:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:54.886 11:17:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:54.886 11:17:55 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.887 11:17:55 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:54.887 { 00:05:54.887 "filename": "/tmp/spdk_mem_dump.txt" 00:05:54.887 } 00:05:54.887 11:17:55 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.887 11:17:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:54.887 DPDK memory size 810.000000 MiB in 1 heap(s) 00:05:54.887 1 heaps totaling size 810.000000 MiB 00:05:54.887 size: 810.000000 MiB heap id: 0 00:05:54.887 end heaps---------- 00:05:54.887 9 mempools totaling size 595.772034 MiB 00:05:54.887 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:54.887 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:54.887 size: 92.545471 MiB name: bdev_io_3684871 00:05:54.887 size: 50.003479 MiB name: msgpool_3684871 00:05:54.887 size: 36.509338 MiB name: fsdev_io_3684871 00:05:54.887 size: 21.763794 MiB name: PDU_Pool 00:05:54.887 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:54.887 size: 4.133484 MiB name: evtpool_3684871 00:05:54.887 size: 0.026123 MiB name: Session_Pool 00:05:54.887 end mempools------- 00:05:54.887 6 memzones totaling size 4.142822 MiB 00:05:54.887 size: 1.000366 MiB name: RG_ring_0_3684871 00:05:54.887 size: 1.000366 MiB name: RG_ring_1_3684871 00:05:54.887 size: 1.000366 MiB name: RG_ring_4_3684871 00:05:54.887 size: 1.000366 MiB name: RG_ring_5_3684871 00:05:54.887 size: 0.125366 MiB name: RG_ring_2_3684871 00:05:54.887 size: 0.015991 MiB name: RG_ring_3_3684871 00:05:54.887 end memzones------- 00:05:54.887 11:17:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:54.887 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:05:54.887 list of free elements. 
size: 10.862488 MiB 00:05:54.887 element at address: 0x200018a00000 with size: 0.999878 MiB 00:05:54.887 element at address: 0x200018c00000 with size: 0.999878 MiB 00:05:54.887 element at address: 0x200000400000 with size: 0.998535 MiB 00:05:54.887 element at address: 0x200031800000 with size: 0.994446 MiB 00:05:54.887 element at address: 0x200006400000 with size: 0.959839 MiB 00:05:54.887 element at address: 0x200012c00000 with size: 0.954285 MiB 00:05:54.887 element at address: 0x200018e00000 with size: 0.936584 MiB 00:05:54.887 element at address: 0x200000200000 with size: 0.717346 MiB 00:05:54.887 element at address: 0x20001a600000 with size: 0.582886 MiB 00:05:54.887 element at address: 0x200000c00000 with size: 0.495422 MiB 00:05:54.887 element at address: 0x20000a600000 with size: 0.490723 MiB 00:05:54.887 element at address: 0x200019000000 with size: 0.485657 MiB 00:05:54.887 element at address: 0x200003e00000 with size: 0.481934 MiB 00:05:54.887 element at address: 0x200027a00000 with size: 0.410034 MiB 00:05:54.887 element at address: 0x200000800000 with size: 0.355042 MiB 00:05:54.887 list of standard malloc elements. size: 199.218628 MiB 00:05:54.887 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:05:54.887 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:05:54.887 element at address: 0x200018afff80 with size: 1.000122 MiB 00:05:54.887 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:05:54.887 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:54.887 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:54.887 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:05:54.887 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:54.887 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:05:54.887 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:54.887 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:54.887 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:54.887 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:54.887 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:05:54.887 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:54.887 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:54.887 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:05:54.887 element at address: 0x20000085b040 with size: 0.000183 MiB 00:05:54.887 element at address: 0x20000085f300 with size: 0.000183 MiB 00:05:54.887 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:05:54.887 element at address: 0x20000087f680 with size: 0.000183 MiB 00:05:54.887 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:05:54.887 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:54.887 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:54.887 element at address: 0x200000cff000 with size: 0.000183 MiB 00:05:54.887 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:05:54.887 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:05:54.887 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:05:54.887 element at address: 0x200003efb980 with size: 0.000183 MiB 00:05:54.887 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:05:54.887 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:05:54.887 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:05:54.887 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:05:54.887 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:05:54.887 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:05:54.887 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:05:54.887 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:05:54.887 element at address: 0x20001a695380 with size: 0.000183 MiB 00:05:54.887 element at address: 0x20001a695440 with size: 0.000183 MiB 00:05:54.887 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:05:54.887 element at address: 0x200027a69040 with size: 0.000183 MiB 00:05:54.887 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:05:54.887 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:05:54.887 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:05:54.887 list of memzone associated elements. size: 599.918884 MiB 00:05:54.887 element at address: 0x20001a695500 with size: 211.416748 MiB 00:05:54.887 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:54.887 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:05:54.887 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:54.887 element at address: 0x200012df4780 with size: 92.045044 MiB 00:05:54.887 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_3684871_0 00:05:54.887 element at address: 0x200000dff380 with size: 48.003052 MiB 00:05:54.887 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3684871_0 00:05:54.887 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:05:54.887 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_3684871_0 00:05:54.887 element at address: 0x2000191be940 with size: 20.255554 MiB 00:05:54.887 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:54.887 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:05:54.887 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:54.887 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:05:54.887 associated memzone info: size: 3.000122 MiB name: MP_evtpool_3684871_0 00:05:54.887 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:05:54.887 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3684871 00:05:54.887 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:54.887 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3684871 00:05:54.887 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:05:54.887 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:54.887 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:05:54.887 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:54.887 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:05:54.887 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:54.887 element at address: 0x200003efba40 with size: 1.008118 MiB 00:05:54.887 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:54.887 element at address: 0x200000cff180 with size: 1.000488 MiB 00:05:54.887 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3684871 00:05:54.887 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:05:54.887 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3684871 00:05:54.887 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:05:54.887 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3684871 00:05:54.887 element at address: 
0x2000318fe940 with size: 1.000488 MiB 00:05:54.887 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3684871 00:05:54.887 element at address: 0x20000087f740 with size: 0.500488 MiB 00:05:54.887 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_3684871 00:05:54.887 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:05:54.887 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3684871 00:05:54.887 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:05:54.887 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:54.887 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:05:54.887 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:54.887 element at address: 0x20001907c540 with size: 0.250488 MiB 00:05:54.887 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:54.887 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:05:54.887 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_3684871 00:05:54.887 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:05:54.887 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3684871 00:05:54.887 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:05:54.887 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:54.887 element at address: 0x200027a69100 with size: 0.023743 MiB 00:05:54.887 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:54.887 element at address: 0x20000085b100 with size: 0.016113 MiB 00:05:54.887 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3684871 00:05:54.887 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:05:54.887 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:54.887 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:05:54.887 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3684871 00:05:54.887 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:05:54.887 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_3684871 00:05:54.888 element at address: 0x20000085af00 with size: 0.000305 MiB 00:05:54.888 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3684871 00:05:54.888 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:05:54.888 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:54.888 11:17:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:54.888 11:17:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3684871 00:05:54.888 11:17:55 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 3684871 ']' 00:05:54.888 11:17:55 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 3684871 00:05:54.888 11:17:55 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname 00:05:54.888 11:17:55 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:54.888 11:17:55 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3684871 00:05:55.145 11:17:55 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:55.145 11:17:55 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:55.145 11:17:55 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3684871' 00:05:55.145 killing process with pid 3684871 00:05:55.145 11:17:55 
dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 3684871 00:05:55.145 11:17:55 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 3684871 00:05:55.404 00:05:55.404 real 0m1.145s 00:05:55.404 user 0m1.143s 00:05:55.404 sys 0m0.449s 00:05:55.404 11:17:55 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:55.404 11:17:55 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:55.404 ************************************ 00:05:55.404 END TEST dpdk_mem_utility 00:05:55.404 ************************************ 00:05:55.404 11:17:55 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:55.404 11:17:55 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:55.404 11:17:55 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:55.404 11:17:55 -- common/autotest_common.sh@10 -- # set +x 00:05:55.404 ************************************ 00:05:55.404 START TEST event 00:05:55.404 ************************************ 00:05:55.404 11:17:55 event -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:55.404 * Looking for test storage... 00:05:55.404 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:55.404 11:17:55 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:55.663 11:17:55 event -- common/autotest_common.sh@1691 -- # lcov --version 00:05:55.663 11:17:55 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:55.663 11:17:55 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:55.663 11:17:55 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:55.663 11:17:55 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:55.663 11:17:55 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:55.663 11:17:55 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:55.663 11:17:55 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:55.663 11:17:55 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:55.663 11:17:55 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:55.663 11:17:55 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:55.663 11:17:55 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:55.663 11:17:55 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:55.663 11:17:55 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:55.663 11:17:55 event -- scripts/common.sh@344 -- # case "$op" in 00:05:55.663 11:17:55 event -- scripts/common.sh@345 -- # : 1 00:05:55.663 11:17:55 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:55.663 11:17:55 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:55.663 11:17:55 event -- scripts/common.sh@365 -- # decimal 1 00:05:55.663 11:17:55 event -- scripts/common.sh@353 -- # local d=1 00:05:55.663 11:17:55 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:55.663 11:17:55 event -- scripts/common.sh@355 -- # echo 1 00:05:55.663 11:17:55 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:55.663 11:17:55 event -- scripts/common.sh@366 -- # decimal 2 00:05:55.663 11:17:55 event -- scripts/common.sh@353 -- # local d=2 00:05:55.663 11:17:55 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:55.663 11:17:55 event -- scripts/common.sh@355 -- # echo 2 00:05:55.663 11:17:55 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:55.663 11:17:55 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:55.663 11:17:55 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:55.663 11:17:55 event -- scripts/common.sh@368 -- # return 0 00:05:55.663 11:17:55 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:55.663 11:17:55 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:55.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.663 --rc genhtml_branch_coverage=1 00:05:55.663 --rc genhtml_function_coverage=1 00:05:55.663 --rc genhtml_legend=1 00:05:55.663 --rc geninfo_all_blocks=1 00:05:55.663 --rc geninfo_unexecuted_blocks=1 00:05:55.663 00:05:55.663 ' 00:05:55.663 11:17:55 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:55.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.663 --rc genhtml_branch_coverage=1 00:05:55.663 --rc genhtml_function_coverage=1 00:05:55.663 --rc genhtml_legend=1 00:05:55.663 --rc geninfo_all_blocks=1 00:05:55.663 --rc geninfo_unexecuted_blocks=1 00:05:55.663 00:05:55.663 ' 00:05:55.663 11:17:55 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:55.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.663 --rc genhtml_branch_coverage=1 00:05:55.663 --rc genhtml_function_coverage=1 00:05:55.663 --rc genhtml_legend=1 00:05:55.663 --rc geninfo_all_blocks=1 00:05:55.663 --rc geninfo_unexecuted_blocks=1 00:05:55.663 00:05:55.663 ' 00:05:55.663 11:17:55 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:55.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.663 --rc genhtml_branch_coverage=1 00:05:55.663 --rc genhtml_function_coverage=1 00:05:55.663 --rc genhtml_legend=1 00:05:55.663 --rc geninfo_all_blocks=1 00:05:55.663 --rc geninfo_unexecuted_blocks=1 00:05:55.663 00:05:55.663 ' 00:05:55.663 11:17:55 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:55.663 11:17:55 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:55.663 11:17:55 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:55.663 11:17:55 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:05:55.663 11:17:55 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:55.663 11:17:55 event -- common/autotest_common.sh@10 -- # set +x 00:05:55.663 ************************************ 00:05:55.663 START TEST event_perf 00:05:55.663 ************************************ 00:05:55.663 11:17:55 event.event_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:05:55.663 Running I/O for 1 seconds...[2024-11-02 11:17:55.939224] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:05:55.663 [2024-11-02 11:17:55.939305] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3685076 ] 00:05:55.663 [2024-11-02 11:17:56.007904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:55.663 [2024-11-02 11:17:56.059896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:55.663 [2024-11-02 11:17:56.059954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:55.663 [2024-11-02 11:17:56.060068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:55.663 [2024-11-02 11:17:56.060070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.035 Running I/O for 1 seconds... 00:05:57.035 lcore 0: 231100 00:05:57.035 lcore 1: 231100 00:05:57.035 lcore 2: 231099 00:05:57.035 lcore 3: 231100 00:05:57.035 done. 00:05:57.035 00:05:57.035 real 0m1.184s 00:05:57.035 user 0m4.103s 00:05:57.035 sys 0m0.076s 00:05:57.035 11:17:57 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:57.035 11:17:57 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:57.035 ************************************ 00:05:57.035 END TEST event_perf 00:05:57.035 ************************************ 00:05:57.035 11:17:57 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:57.035 11:17:57 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:05:57.035 11:17:57 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:57.035 11:17:57 event -- common/autotest_common.sh@10 -- # set +x 00:05:57.035 ************************************ 00:05:57.035 START TEST event_reactor 00:05:57.035 ************************************ 00:05:57.035 11:17:57 event.event_reactor -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:57.035 [2024-11-02 11:17:57.168795] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 
00:05:57.035 [2024-11-02 11:17:57.168849] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3685293 ] 00:05:57.035 [2024-11-02 11:17:57.238915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.035 [2024-11-02 11:17:57.289533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.968 test_start 00:05:57.968 oneshot 00:05:57.968 tick 100 00:05:57.968 tick 100 00:05:57.968 tick 250 00:05:57.968 tick 100 00:05:57.968 tick 100 00:05:57.968 tick 100 00:05:57.968 tick 250 00:05:57.968 tick 500 00:05:57.968 tick 100 00:05:57.968 tick 100 00:05:57.968 tick 250 00:05:57.968 tick 100 00:05:57.968 tick 100 00:05:57.968 test_end 00:05:57.968 00:05:57.968 real 0m1.177s 00:05:57.968 user 0m1.105s 00:05:57.968 sys 0m0.068s 00:05:57.968 11:17:58 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:57.968 11:17:58 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:57.968 ************************************ 00:05:57.968 END TEST event_reactor 00:05:57.968 ************************************ 00:05:57.968 11:17:58 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:57.968 11:17:58 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:05:57.968 11:17:58 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:57.968 11:17:58 event -- common/autotest_common.sh@10 -- # set +x 00:05:58.227 ************************************ 00:05:58.227 START TEST event_reactor_perf 00:05:58.227 ************************************ 00:05:58.227 11:17:58 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:58.227 [2024-11-02 11:17:58.393756] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 
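The three binaries exercised in this block, event_perf, reactor and reactor_perf under test/event, are standalone SPDK apps; the core mask and duration seen in their EAL parameter lines come straight from the command line. Run by hand they take the same flags (a sketch, assuming the apps were built in an SPDK tree at ./spdk):

# sketch: the event-framework micro-benchmarks from this block, invoked directly
./spdk/test/event/event_perf/event_perf -m 0xF -t 1   # per-lcore event round-trips on 4 cores for 1 s
./spdk/test/event/reactor/reactor -t 1                # single reactor, oneshot + periodic tick events
./spdk/test/event/reactor_perf/reactor_perf -t 1      # events per second on a single reactor

The lcore counters, the tick trace and the 'Performance: ... events per second' line in the surrounding log are the respective outputs of these three runs.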
00:05:58.227 [2024-11-02 11:17:58.393809] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3685500 ] 00:05:58.227 [2024-11-02 11:17:58.462341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.227 [2024-11-02 11:17:58.512910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.161 test_start 00:05:59.161 test_end 00:05:59.161 Performance: 356623 events per second 00:05:59.161 00:05:59.161 real 0m1.178s 00:05:59.161 user 0m1.106s 00:05:59.161 sys 0m0.068s 00:05:59.161 11:17:59 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:59.161 11:17:59 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:59.161 ************************************ 00:05:59.161 END TEST event_reactor_perf 00:05:59.161 ************************************ 00:05:59.419 11:17:59 event -- event/event.sh@49 -- # uname -s 00:05:59.419 11:17:59 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:59.419 11:17:59 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:59.419 11:17:59 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:59.419 11:17:59 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:59.419 11:17:59 event -- common/autotest_common.sh@10 -- # set +x 00:05:59.419 ************************************ 00:05:59.419 START TEST event_scheduler 00:05:59.419 ************************************ 00:05:59.419 11:17:59 event.event_scheduler -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:59.419 * Looking for test storage... 
00:05:59.419 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:59.419 11:17:59 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:59.419 11:17:59 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:05:59.419 11:17:59 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:59.419 11:17:59 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:59.419 11:17:59 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:59.419 11:17:59 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:59.419 11:17:59 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:59.419 11:17:59 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:59.419 11:17:59 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:59.419 11:17:59 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:59.419 11:17:59 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:59.419 11:17:59 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:59.419 11:17:59 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:59.419 11:17:59 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:59.419 11:17:59 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:59.419 11:17:59 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:59.419 11:17:59 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:59.419 11:17:59 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:59.419 11:17:59 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:59.419 11:17:59 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:59.419 11:17:59 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:59.419 11:17:59 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:59.419 11:17:59 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:59.419 11:17:59 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:59.419 11:17:59 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:59.419 11:17:59 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:59.419 11:17:59 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:59.419 11:17:59 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:59.419 11:17:59 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:59.419 11:17:59 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:59.419 11:17:59 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:59.419 11:17:59 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:59.419 11:17:59 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:59.419 11:17:59 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:59.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.419 --rc genhtml_branch_coverage=1 00:05:59.419 --rc genhtml_function_coverage=1 00:05:59.419 --rc genhtml_legend=1 00:05:59.419 --rc geninfo_all_blocks=1 00:05:59.419 --rc geninfo_unexecuted_blocks=1 00:05:59.419 00:05:59.419 ' 00:05:59.419 11:17:59 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:59.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.419 --rc genhtml_branch_coverage=1 00:05:59.419 --rc genhtml_function_coverage=1 00:05:59.419 --rc genhtml_legend=1 00:05:59.419 --rc geninfo_all_blocks=1 00:05:59.419 --rc geninfo_unexecuted_blocks=1 00:05:59.419 00:05:59.419 ' 00:05:59.419 11:17:59 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:59.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.419 --rc genhtml_branch_coverage=1 00:05:59.419 --rc genhtml_function_coverage=1 00:05:59.419 --rc genhtml_legend=1 00:05:59.419 --rc geninfo_all_blocks=1 00:05:59.419 --rc geninfo_unexecuted_blocks=1 00:05:59.419 00:05:59.419 ' 00:05:59.419 11:17:59 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:59.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.419 --rc genhtml_branch_coverage=1 00:05:59.419 --rc genhtml_function_coverage=1 00:05:59.419 --rc genhtml_legend=1 00:05:59.419 --rc geninfo_all_blocks=1 00:05:59.419 --rc geninfo_unexecuted_blocks=1 00:05:59.419 00:05:59.419 ' 00:05:59.419 11:17:59 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:59.419 11:17:59 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3685686 00:05:59.419 11:17:59 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:59.419 11:17:59 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:59.419 11:17:59 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 
3685686 00:05:59.419 11:17:59 event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 3685686 ']' 00:05:59.419 11:17:59 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.419 11:17:59 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:59.419 11:17:59 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.419 11:17:59 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:59.419 11:17:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:59.419 [2024-11-02 11:17:59.805179] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:05:59.419 [2024-11-02 11:17:59.805292] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3685686 ] 00:05:59.677 [2024-11-02 11:17:59.874167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:59.677 [2024-11-02 11:17:59.926854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.677 [2024-11-02 11:17:59.926915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:59.677 [2024-11-02 11:17:59.926983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:59.677 [2024-11-02 11:17:59.926985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:59.677 11:18:00 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:59.677 11:18:00 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:05:59.677 11:18:00 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:59.677 11:18:00 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.677 11:18:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:59.677 [2024-11-02 11:18:00.055975] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:59.677 [2024-11-02 11:18:00.056010] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:59.677 [2024-11-02 11:18:00.056026] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:59.677 [2024-11-02 11:18:00.056038] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:59.677 [2024-11-02 11:18:00.056048] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:59.677 11:18:00 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.677 11:18:00 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:59.677 11:18:00 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.677 11:18:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:59.935 [2024-11-02 11:18:00.151616] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
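The NOTICE lines just above record the dynamic scheduler being selected over RPC before the framework finishes initializing: the scheduler app was started with --wait-for-rpc, the test issues framework_set_scheduler dynamic (the governor warning and the load-limit 20 / core-limit 80 / core-busy 95 notices are the dynamic scheduler applying its defaults), and framework_start_init then completes startup. The same sequence against any SPDK app started with --wait-for-rpc looks roughly like this (a sketch; the ./spdk path and the default /var/tmp/spdk.sock socket are assumptions):

# sketch: select the dynamic scheduler on an app started with --wait-for-rpc
./spdk/scripts/rpc.py framework_set_scheduler dynamic   # select the scheduler before init, as the test does
./spdk/scripts/rpc.py framework_start_init              # finish initialization with the chosen scheduler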
00:05:59.935 11:18:00 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.935 11:18:00 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:59.935 11:18:00 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:59.935 11:18:00 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:59.935 11:18:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:59.935 ************************************ 00:05:59.935 START TEST scheduler_create_thread 00:05:59.935 ************************************ 00:05:59.935 11:18:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:05:59.935 11:18:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:59.935 11:18:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.935 11:18:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.935 2 00:05:59.936 11:18:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.936 11:18:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:59.936 11:18:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.936 11:18:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.936 3 00:05:59.936 11:18:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.936 11:18:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:59.936 11:18:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.936 11:18:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.936 4 00:05:59.936 11:18:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.936 11:18:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:59.936 11:18:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.936 11:18:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.936 5 00:05:59.936 11:18:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.936 11:18:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:59.936 11:18:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.936 11:18:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.936 6 00:05:59.936 11:18:00 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.936 11:18:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:59.936 11:18:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.936 11:18:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.936 7 00:05:59.936 11:18:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.936 11:18:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:59.936 11:18:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.936 11:18:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.936 8 00:05:59.936 11:18:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.936 11:18:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:59.936 11:18:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.936 11:18:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.936 9 00:05:59.936 11:18:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.936 11:18:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:59.936 11:18:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.936 11:18:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.936 10 00:05:59.936 11:18:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.936 11:18:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:59.936 11:18:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.936 11:18:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.936 11:18:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.936 11:18:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:59.936 11:18:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:59.936 11:18:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.936 11:18:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.936 11:18:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.936 11:18:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:59.936 11:18:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.936 11:18:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.936 11:18:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.936 11:18:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:59.936 11:18:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:59.936 11:18:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.936 11:18:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:00.500 11:18:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.500 00:06:00.500 real 0m0.592s 00:06:00.500 user 0m0.011s 00:06:00.500 sys 0m0.003s 00:06:00.500 11:18:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:00.500 11:18:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:00.500 ************************************ 00:06:00.500 END TEST scheduler_create_thread 00:06:00.500 ************************************ 00:06:00.501 11:18:00 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:00.501 11:18:00 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3685686 00:06:00.501 11:18:00 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 3685686 ']' 00:06:00.501 11:18:00 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 3685686 00:06:00.501 11:18:00 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:06:00.501 11:18:00 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:00.501 11:18:00 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3685686 00:06:00.501 11:18:00 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:06:00.501 11:18:00 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:06:00.501 11:18:00 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3685686' 00:06:00.501 killing process with pid 3685686 00:06:00.501 11:18:00 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 3685686 00:06:00.501 11:18:00 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 3685686 00:06:01.066 [2024-11-02 11:18:01.252192] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
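The scheduler_create_thread subtest that just finished drives the scheduler entirely through a test-local rpc.py plugin (loaded with --plugin scheduler_plugin); the thread names, CPU masks and 'active' percentages it passed are visible in the xtrace above. Condensed, the sequence was roughly the following (a sketch; the $rpc shorthand and the ./spdk path are illustrative, and the plugin module is assumed to be on PYTHONPATH as the test arranges):

# sketch: the thread-management RPCs issued by scheduler_create_thread
rpc='./spdk/scripts/rpc.py --plugin scheduler_plugin'
$rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100   # busy thread pinned to core 0 (repeated for 0x2/0x4/0x8)
$rpc scheduler_thread_create -n idle_pinned   -m 0x1 -a 0     # idle thread pinned to core 0 (likewise per core)
$rpc scheduler_thread_create -n one_third_active -a 30        # unpinned thread, ~30% active
$rpc scheduler_thread_create -n half_active -a 0              # returns a thread id (11 in this run)
$rpc scheduler_thread_set_active 11 50                        # raise that thread's active load to 50
$rpc scheduler_thread_create -n deleted -a 100                # thread id 12 in this run...
$rpc scheduler_thread_delete 12                               # ...created only to be deleted again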
00:06:01.066 00:06:01.066 real 0m1.813s 00:06:01.066 user 0m2.534s 00:06:01.066 sys 0m0.346s 00:06:01.066 11:18:01 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:01.066 11:18:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:01.066 ************************************ 00:06:01.066 END TEST event_scheduler 00:06:01.066 ************************************ 00:06:01.066 11:18:01 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:01.066 11:18:01 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:01.066 11:18:01 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:01.066 11:18:01 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:01.066 11:18:01 event -- common/autotest_common.sh@10 -- # set +x 00:06:01.324 ************************************ 00:06:01.324 START TEST app_repeat 00:06:01.324 ************************************ 00:06:01.324 11:18:01 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:06:01.324 11:18:01 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.324 11:18:01 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.324 11:18:01 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:01.324 11:18:01 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:01.324 11:18:01 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:01.324 11:18:01 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:01.324 11:18:01 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:01.324 11:18:01 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3685980 00:06:01.324 11:18:01 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:01.324 11:18:01 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:01.324 11:18:01 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3685980' 00:06:01.324 Process app_repeat pid: 3685980 00:06:01.324 11:18:01 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:01.324 11:18:01 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:01.324 spdk_app_start Round 0 00:06:01.324 11:18:01 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3685980 /var/tmp/spdk-nbd.sock 00:06:01.324 11:18:01 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 3685980 ']' 00:06:01.324 11:18:01 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:01.324 11:18:01 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:01.324 11:18:01 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:01.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:01.324 11:18:01 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:01.324 11:18:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:01.324 [2024-11-02 11:18:01.511462] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 
00:06:01.324 [2024-11-02 11:18:01.511544] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3685980 ] 00:06:01.324 [2024-11-02 11:18:01.583196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:01.324 [2024-11-02 11:18:01.635135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:01.324 [2024-11-02 11:18:01.635140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.581 11:18:01 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:01.581 11:18:01 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:06:01.581 11:18:01 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:01.838 Malloc0 00:06:01.838 11:18:02 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:02.096 Malloc1 00:06:02.096 11:18:02 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:02.096 11:18:02 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.096 11:18:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:02.096 11:18:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:02.096 11:18:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.096 11:18:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:02.096 11:18:02 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:02.096 11:18:02 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.096 11:18:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:02.096 11:18:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:02.096 11:18:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.096 11:18:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:02.096 11:18:02 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:02.096 11:18:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:02.096 11:18:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:02.096 11:18:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:02.354 /dev/nbd0 00:06:02.354 11:18:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:02.354 11:18:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:02.354 11:18:02 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:06:02.354 11:18:02 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:02.354 11:18:02 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:02.354 11:18:02 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:02.354 11:18:02 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 
/proc/partitions 00:06:02.354 11:18:02 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:02.354 11:18:02 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:02.354 11:18:02 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:02.354 11:18:02 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:02.354 1+0 records in 00:06:02.354 1+0 records out 00:06:02.354 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000224167 s, 18.3 MB/s 00:06:02.354 11:18:02 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:02.354 11:18:02 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:02.354 11:18:02 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:02.354 11:18:02 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:02.354 11:18:02 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:02.354 11:18:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:02.354 11:18:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:02.354 11:18:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:02.611 /dev/nbd1 00:06:02.612 11:18:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:02.612 11:18:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:02.612 11:18:03 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:06:02.612 11:18:03 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:02.612 11:18:03 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:02.869 11:18:03 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:02.869 11:18:03 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:06:02.869 11:18:03 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:02.869 11:18:03 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:02.869 11:18:03 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:02.869 11:18:03 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:02.869 1+0 records in 00:06:02.869 1+0 records out 00:06:02.869 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000245058 s, 16.7 MB/s 00:06:02.869 11:18:03 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:02.869 11:18:03 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:02.869 11:18:03 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:02.869 11:18:03 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:02.869 11:18:03 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:02.869 11:18:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:02.869 11:18:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:02.869 
11:18:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:02.869 11:18:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.869 11:18:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:03.128 11:18:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:03.128 { 00:06:03.128 "nbd_device": "/dev/nbd0", 00:06:03.128 "bdev_name": "Malloc0" 00:06:03.128 }, 00:06:03.128 { 00:06:03.128 "nbd_device": "/dev/nbd1", 00:06:03.128 "bdev_name": "Malloc1" 00:06:03.128 } 00:06:03.128 ]' 00:06:03.128 11:18:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:03.128 { 00:06:03.128 "nbd_device": "/dev/nbd0", 00:06:03.128 "bdev_name": "Malloc0" 00:06:03.128 }, 00:06:03.128 { 00:06:03.128 "nbd_device": "/dev/nbd1", 00:06:03.128 "bdev_name": "Malloc1" 00:06:03.128 } 00:06:03.128 ]' 00:06:03.128 11:18:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:03.128 11:18:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:03.128 /dev/nbd1' 00:06:03.128 11:18:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:03.128 /dev/nbd1' 00:06:03.128 11:18:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:03.128 11:18:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:03.128 11:18:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:03.128 11:18:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:03.128 11:18:03 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:03.128 11:18:03 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:03.128 11:18:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.128 11:18:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:03.128 11:18:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:03.128 11:18:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:03.128 11:18:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:03.128 11:18:03 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:03.128 256+0 records in 00:06:03.128 256+0 records out 00:06:03.128 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00382238 s, 274 MB/s 00:06:03.128 11:18:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:03.128 11:18:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:03.128 256+0 records in 00:06:03.128 256+0 records out 00:06:03.128 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0202256 s, 51.8 MB/s 00:06:03.128 11:18:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:03.128 11:18:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:03.128 256+0 records in 00:06:03.128 256+0 records out 00:06:03.128 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0241139 s, 43.5 MB/s 00:06:03.128 11:18:03 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:03.128 11:18:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.128 11:18:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:03.128 11:18:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:03.128 11:18:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:03.128 11:18:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:03.128 11:18:03 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:03.128 11:18:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:03.128 11:18:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:03.128 11:18:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:03.128 11:18:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:03.128 11:18:03 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:03.128 11:18:03 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:03.128 11:18:03 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.128 11:18:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.128 11:18:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:03.128 11:18:03 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:03.128 11:18:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:03.128 11:18:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:03.386 11:18:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:03.386 11:18:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:03.386 11:18:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:03.386 11:18:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:03.386 11:18:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:03.386 11:18:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:03.386 11:18:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:03.386 11:18:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:03.386 11:18:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:03.386 11:18:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:03.644 11:18:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:03.644 11:18:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:03.644 11:18:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:03.644 11:18:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:03.644 11:18:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:06:03.644 11:18:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:03.644 11:18:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:03.644 11:18:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:03.644 11:18:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:03.644 11:18:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.644 11:18:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:03.901 11:18:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:03.901 11:18:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:03.901 11:18:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:04.158 11:18:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:04.158 11:18:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:04.158 11:18:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:04.158 11:18:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:04.158 11:18:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:04.158 11:18:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:04.158 11:18:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:04.158 11:18:04 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:04.158 11:18:04 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:04.158 11:18:04 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:04.416 11:18:04 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:04.674 [2024-11-02 11:18:04.822642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:04.674 [2024-11-02 11:18:04.873597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.674 [2024-11-02 11:18:04.873601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.674 [2024-11-02 11:18:04.936513] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:04.674 [2024-11-02 11:18:04.936609] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:07.954 11:18:07 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:07.954 11:18:07 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:07.954 spdk_app_start Round 1 00:06:07.954 11:18:07 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3685980 /var/tmp/spdk-nbd.sock 00:06:07.954 11:18:07 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 3685980 ']' 00:06:07.954 11:18:07 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:07.954 11:18:07 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:07.954 11:18:07 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:07.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
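The stretch of trace above (nbd_common.sh@95-105) is the heart of app_repeat's per-round verification: list the exported NBD devices over the dedicated RPC socket, check the count, push 1 MiB of random data through each /dev/nbdX with O_DIRECT, compare the devices back against the reference file, then stop both disks and confirm nbd_get_disks returns an empty list before the app is killed for the next round. A minimal standalone sketch of that sequence for a single device, assuming it is run from the SPDK repo root against a target already listening on /var/tmp/spdk-nbd.sock and that /dev/nbd0 is free; the temp-file path is illustrative:

    RPC="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    TMP=/tmp/nbdrandtest                                      # illustrative reference file
    $RPC bdev_malloc_create 64 4096                           # 64 MiB bdev, 4 KiB blocks (reported as Malloc0 in this run)
    $RPC nbd_start_disk Malloc0 /dev/nbd0                     # export the bdev as an NBD device
    dd if=/dev/urandom of="$TMP" bs=4096 count=256            # 1 MiB of random reference data
    dd if="$TMP" of=/dev/nbd0 bs=4096 count=256 oflag=direct  # write it through the NBD device
    cmp -b -n 1M "$TMP" /dev/nbd0                             # the device must read back byte-identical
    $RPC nbd_stop_disk /dev/nbd0                              # tear the export down
    $RPC nbd_get_disks                                        # prints '[]' once nothing is exported
    rm -f "$TMP"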
00:06:07.954 11:18:07 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:07.954 11:18:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:07.954 11:18:07 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:07.954 11:18:07 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:06:07.954 11:18:07 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:07.954 Malloc0 00:06:07.954 11:18:08 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:08.212 Malloc1 00:06:08.212 11:18:08 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:08.212 11:18:08 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.212 11:18:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:08.212 11:18:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:08.212 11:18:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.212 11:18:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:08.212 11:18:08 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:08.212 11:18:08 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.212 11:18:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:08.212 11:18:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:08.212 11:18:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.212 11:18:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:08.212 11:18:08 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:08.212 11:18:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:08.212 11:18:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:08.212 11:18:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:08.778 /dev/nbd0 00:06:08.778 11:18:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:08.778 11:18:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:08.778 11:18:08 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:06:08.778 11:18:08 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:08.778 11:18:08 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:08.778 11:18:08 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:08.778 11:18:08 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:06:08.778 11:18:08 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:08.778 11:18:08 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:08.778 11:18:08 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:08.778 11:18:08 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:08.778 1+0 records in 00:06:08.778 1+0 records out 00:06:08.778 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000175379 s, 23.4 MB/s 00:06:08.778 11:18:08 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:08.778 11:18:08 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:08.778 11:18:08 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:08.778 11:18:08 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:08.778 11:18:08 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:08.778 11:18:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:08.778 11:18:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:08.778 11:18:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:09.036 /dev/nbd1 00:06:09.036 11:18:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:09.036 11:18:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:09.036 11:18:09 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:06:09.036 11:18:09 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:09.036 11:18:09 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:09.036 11:18:09 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:09.036 11:18:09 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:06:09.036 11:18:09 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:09.036 11:18:09 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:09.036 11:18:09 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:09.036 11:18:09 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:09.036 1+0 records in 00:06:09.036 1+0 records out 00:06:09.036 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00023366 s, 17.5 MB/s 00:06:09.036 11:18:09 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:09.036 11:18:09 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:09.036 11:18:09 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:09.036 11:18:09 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:09.036 11:18:09 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:09.036 11:18:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:09.036 11:18:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:09.036 11:18:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:09.036 11:18:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.036 11:18:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:09.293 11:18:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:09.293 { 00:06:09.293 "nbd_device": "/dev/nbd0", 00:06:09.293 "bdev_name": "Malloc0" 00:06:09.293 }, 00:06:09.293 { 00:06:09.293 "nbd_device": "/dev/nbd1", 00:06:09.293 "bdev_name": "Malloc1" 00:06:09.293 } 00:06:09.293 ]' 00:06:09.293 11:18:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:09.293 { 00:06:09.293 "nbd_device": "/dev/nbd0", 00:06:09.293 "bdev_name": "Malloc0" 00:06:09.293 }, 00:06:09.293 { 00:06:09.293 "nbd_device": "/dev/nbd1", 00:06:09.293 "bdev_name": "Malloc1" 00:06:09.293 } 00:06:09.293 ]' 00:06:09.293 11:18:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:09.293 11:18:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:09.293 /dev/nbd1' 00:06:09.293 11:18:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:09.293 /dev/nbd1' 00:06:09.293 11:18:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:09.293 11:18:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:09.293 11:18:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:09.293 11:18:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:09.293 11:18:09 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:09.293 11:18:09 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:09.293 11:18:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.293 11:18:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:09.293 11:18:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:09.293 11:18:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:09.293 11:18:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:09.293 11:18:09 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:09.293 256+0 records in 00:06:09.293 256+0 records out 00:06:09.293 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0050339 s, 208 MB/s 00:06:09.294 11:18:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:09.294 11:18:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:09.294 256+0 records in 00:06:09.294 256+0 records out 00:06:09.294 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0229372 s, 45.7 MB/s 00:06:09.294 11:18:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:09.294 11:18:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:09.294 256+0 records in 00:06:09.294 256+0 records out 00:06:09.294 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0215624 s, 48.6 MB/s 00:06:09.294 11:18:09 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:09.294 11:18:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.294 11:18:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:09.294 11:18:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:09.294 11:18:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:09.294 11:18:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:09.294 11:18:09 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:09.294 11:18:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:09.294 11:18:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:09.294 11:18:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:09.294 11:18:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:09.294 11:18:09 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:09.294 11:18:09 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:09.294 11:18:09 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.294 11:18:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.294 11:18:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:09.294 11:18:09 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:09.294 11:18:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:09.294 11:18:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:09.551 11:18:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:09.551 11:18:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:09.551 11:18:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:09.551 11:18:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:09.551 11:18:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:09.551 11:18:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:09.551 11:18:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:09.551 11:18:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:09.551 11:18:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:09.551 11:18:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:10.119 11:18:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:10.119 11:18:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:10.119 11:18:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:10.119 11:18:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:10.119 11:18:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:10.119 11:18:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:10.119 11:18:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:10.119 11:18:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:10.119 11:18:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:10.119 11:18:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.119 11:18:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:10.119 11:18:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:10.119 11:18:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:10.119 11:18:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:10.378 11:18:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:10.378 11:18:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:10.378 11:18:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:10.378 11:18:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:10.378 11:18:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:10.378 11:18:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:10.378 11:18:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:10.378 11:18:10 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:10.378 11:18:10 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:10.378 11:18:10 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:10.669 11:18:10 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:10.669 [2024-11-02 11:18:11.029992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:10.952 [2024-11-02 11:18:11.082645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.952 [2024-11-02 11:18:11.082645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.952 [2024-11-02 11:18:11.146085] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:10.952 [2024-11-02 11:18:11.146169] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:13.477 11:18:13 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:13.477 11:18:13 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:13.477 spdk_app_start Round 2 00:06:13.477 11:18:13 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3685980 /var/tmp/spdk-nbd.sock 00:06:13.477 11:18:13 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 3685980 ']' 00:06:13.477 11:18:13 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:13.477 11:18:13 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:13.477 11:18:13 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:13.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
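Round 1 above repeats the setup: two 64 MiB malloc bdevs are created and exported, and before any traffic the waitfornbd helper (autotest_common.sh@870-891 in the trace) makes sure each /dev/nbdX is actually usable. A rough standalone equivalent of that readiness probe, assuming nbd0 is the device just started; the scratch path and the poll interval are illustrative, while the 20-try budget and the single O_DIRECT block read come from the trace:

    nbd_name=nbd0
    tmp=/tmp/nbdtest                       # illustrative scratch file
    for i in $(seq 1 20); do               # same 20-attempt budget as the helper
        grep -q -w "$nbd_name" /proc/partitions && break   # kernel has registered the device
        sleep 0.1                          # interval not visible in the trace; illustrative
    done
    dd if=/dev/"$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct   # pull one block through the device
    size=$(stat -c %s "$tmp")
    rm -f "$tmp"
    [ "$size" != 0 ]                       # the helper only requires a non-empty read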
00:06:13.477 11:18:13 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:13.477 11:18:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:14.041 11:18:14 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:14.041 11:18:14 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:06:14.041 11:18:14 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:14.041 Malloc0 00:06:14.041 11:18:14 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:14.299 Malloc1 00:06:14.299 11:18:14 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:14.299 11:18:14 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.299 11:18:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:14.299 11:18:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:14.299 11:18:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.299 11:18:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:14.299 11:18:14 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:14.299 11:18:14 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.299 11:18:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:14.557 11:18:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:14.557 11:18:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.557 11:18:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:14.557 11:18:14 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:14.557 11:18:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:14.557 11:18:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:14.557 11:18:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:14.814 /dev/nbd0 00:06:14.814 11:18:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:14.814 11:18:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:14.814 11:18:15 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:06:14.814 11:18:15 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:14.814 11:18:15 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:14.814 11:18:15 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:14.814 11:18:15 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:06:14.814 11:18:15 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:14.814 11:18:15 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:14.814 11:18:15 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:14.814 11:18:15 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:14.814 1+0 records in 00:06:14.814 1+0 records out 00:06:14.814 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000192469 s, 21.3 MB/s 00:06:14.814 11:18:15 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:14.814 11:18:15 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:14.814 11:18:15 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:14.814 11:18:15 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:14.814 11:18:15 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:14.814 11:18:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:14.814 11:18:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:14.814 11:18:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:15.072 /dev/nbd1 00:06:15.072 11:18:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:15.072 11:18:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:15.072 11:18:15 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:06:15.072 11:18:15 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:15.072 11:18:15 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:15.072 11:18:15 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:15.072 11:18:15 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:06:15.072 11:18:15 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:15.072 11:18:15 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:15.072 11:18:15 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:15.072 11:18:15 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:15.072 1+0 records in 00:06:15.072 1+0 records out 00:06:15.072 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000201081 s, 20.4 MB/s 00:06:15.072 11:18:15 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:15.072 11:18:15 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:15.072 11:18:15 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:15.072 11:18:15 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:15.072 11:18:15 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:15.072 11:18:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:15.072 11:18:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:15.072 11:18:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:15.072 11:18:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.072 11:18:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:15.330 11:18:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:15.330 { 00:06:15.330 "nbd_device": "/dev/nbd0", 00:06:15.330 "bdev_name": "Malloc0" 00:06:15.330 }, 00:06:15.330 { 00:06:15.330 "nbd_device": "/dev/nbd1", 00:06:15.330 "bdev_name": "Malloc1" 00:06:15.330 } 00:06:15.330 ]' 00:06:15.330 11:18:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:15.330 { 00:06:15.330 "nbd_device": "/dev/nbd0", 00:06:15.330 "bdev_name": "Malloc0" 00:06:15.330 }, 00:06:15.330 { 00:06:15.330 "nbd_device": "/dev/nbd1", 00:06:15.330 "bdev_name": "Malloc1" 00:06:15.330 } 00:06:15.330 ]' 00:06:15.330 11:18:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:15.330 11:18:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:15.330 /dev/nbd1' 00:06:15.330 11:18:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:15.330 /dev/nbd1' 00:06:15.330 11:18:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:15.330 11:18:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:15.330 11:18:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:15.330 11:18:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:15.330 11:18:15 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:15.330 11:18:15 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:15.330 11:18:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.330 11:18:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:15.330 11:18:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:15.330 11:18:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:15.330 11:18:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:15.330 11:18:15 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:15.330 256+0 records in 00:06:15.330 256+0 records out 00:06:15.330 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00518719 s, 202 MB/s 00:06:15.330 11:18:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:15.330 11:18:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:15.330 256+0 records in 00:06:15.330 256+0 records out 00:06:15.330 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0188367 s, 55.7 MB/s 00:06:15.330 11:18:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:15.330 11:18:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:15.587 256+0 records in 00:06:15.587 256+0 records out 00:06:15.587 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0245772 s, 42.7 MB/s 00:06:15.587 11:18:15 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:15.587 11:18:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.587 11:18:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:15.587 11:18:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:15.588 11:18:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:15.588 11:18:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:15.588 11:18:15 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:15.588 11:18:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:15.588 11:18:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:15.588 11:18:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:15.588 11:18:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:15.588 11:18:15 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:15.588 11:18:15 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:15.588 11:18:15 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.588 11:18:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.588 11:18:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:15.588 11:18:15 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:15.588 11:18:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:15.588 11:18:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:15.845 11:18:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:15.845 11:18:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:15.845 11:18:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:15.845 11:18:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:15.845 11:18:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:15.845 11:18:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:15.845 11:18:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:15.845 11:18:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:15.845 11:18:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:15.845 11:18:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:16.103 11:18:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:16.103 11:18:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:16.103 11:18:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:16.103 11:18:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:16.103 11:18:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:16.103 11:18:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:16.103 11:18:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:16.103 11:18:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:16.103 11:18:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:16.103 11:18:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.103 11:18:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:16.360 11:18:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:16.360 11:18:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:16.360 11:18:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:16.360 11:18:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:16.360 11:18:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:16.360 11:18:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:16.360 11:18:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:16.360 11:18:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:16.360 11:18:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:16.360 11:18:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:16.360 11:18:16 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:16.360 11:18:16 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:16.360 11:18:16 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:16.618 11:18:16 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:16.876 [2024-11-02 11:18:17.146345] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:16.876 [2024-11-02 11:18:17.194097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:16.876 [2024-11-02 11:18:17.194102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.876 [2024-11-02 11:18:17.256284] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:16.876 [2024-11-02 11:18:17.256375] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:20.154 11:18:19 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3685980 /var/tmp/spdk-nbd.sock 00:06:20.154 11:18:19 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 3685980 ']' 00:06:20.154 11:18:19 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:20.154 11:18:19 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:20.154 11:18:19 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:20.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:20.154 11:18:19 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:20.154 11:18:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:20.154 11:18:20 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:20.154 11:18:20 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:06:20.154 11:18:20 event.app_repeat -- event/event.sh@39 -- # killprocess 3685980 00:06:20.154 11:18:20 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 3685980 ']' 00:06:20.154 11:18:20 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 3685980 00:06:20.154 11:18:20 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:06:20.154 11:18:20 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:20.154 11:18:20 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3685980 00:06:20.154 11:18:20 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:20.154 11:18:20 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:20.154 11:18:20 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3685980' 00:06:20.154 killing process with pid 3685980 00:06:20.154 11:18:20 event.app_repeat -- common/autotest_common.sh@971 -- # kill 3685980 00:06:20.154 11:18:20 event.app_repeat -- common/autotest_common.sh@976 -- # wait 3685980 00:06:20.154 spdk_app_start is called in Round 0. 00:06:20.154 Shutdown signal received, stop current app iteration 00:06:20.154 Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 reinitialization... 00:06:20.154 spdk_app_start is called in Round 1. 00:06:20.154 Shutdown signal received, stop current app iteration 00:06:20.154 Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 reinitialization... 00:06:20.154 spdk_app_start is called in Round 2. 00:06:20.154 Shutdown signal received, stop current app iteration 00:06:20.154 Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 reinitialization... 00:06:20.154 spdk_app_start is called in Round 3. 
00:06:20.154 Shutdown signal received, stop current app iteration 00:06:20.154 11:18:20 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:20.154 11:18:20 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:20.154 00:06:20.154 real 0m18.977s 00:06:20.154 user 0m42.140s 00:06:20.154 sys 0m3.226s 00:06:20.154 11:18:20 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:20.154 11:18:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:20.154 ************************************ 00:06:20.154 END TEST app_repeat 00:06:20.154 ************************************ 00:06:20.154 11:18:20 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:20.154 11:18:20 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:20.154 11:18:20 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:20.154 11:18:20 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:20.154 11:18:20 event -- common/autotest_common.sh@10 -- # set +x 00:06:20.154 ************************************ 00:06:20.154 START TEST cpu_locks 00:06:20.154 ************************************ 00:06:20.154 11:18:20 event.cpu_locks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:20.413 * Looking for test storage... 00:06:20.413 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:20.413 11:18:20 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:20.413 11:18:20 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:06:20.413 11:18:20 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:20.413 11:18:20 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:20.413 11:18:20 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:20.413 11:18:20 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:20.413 11:18:20 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:20.413 11:18:20 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:20.413 11:18:20 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:20.413 11:18:20 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:20.413 11:18:20 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:20.413 11:18:20 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:20.413 11:18:20 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:20.413 11:18:20 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:20.413 11:18:20 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:20.413 11:18:20 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:20.413 11:18:20 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:20.413 11:18:20 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:20.413 11:18:20 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:20.413 11:18:20 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:20.413 11:18:20 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:20.413 11:18:20 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:20.413 11:18:20 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:20.413 11:18:20 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:20.413 11:18:20 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:20.413 11:18:20 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:20.413 11:18:20 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:20.413 11:18:20 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:20.413 11:18:20 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:20.413 11:18:20 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:20.413 11:18:20 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:20.413 11:18:20 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:20.413 11:18:20 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:20.413 11:18:20 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:20.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.413 --rc genhtml_branch_coverage=1 00:06:20.413 --rc genhtml_function_coverage=1 00:06:20.413 --rc genhtml_legend=1 00:06:20.413 --rc geninfo_all_blocks=1 00:06:20.413 --rc geninfo_unexecuted_blocks=1 00:06:20.413 00:06:20.413 ' 00:06:20.413 11:18:20 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:20.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.413 --rc genhtml_branch_coverage=1 00:06:20.413 --rc genhtml_function_coverage=1 00:06:20.413 --rc genhtml_legend=1 00:06:20.413 --rc geninfo_all_blocks=1 00:06:20.413 --rc geninfo_unexecuted_blocks=1 00:06:20.413 00:06:20.413 ' 00:06:20.413 11:18:20 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:20.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.413 --rc genhtml_branch_coverage=1 00:06:20.413 --rc genhtml_function_coverage=1 00:06:20.413 --rc genhtml_legend=1 00:06:20.413 --rc geninfo_all_blocks=1 00:06:20.413 --rc geninfo_unexecuted_blocks=1 00:06:20.413 00:06:20.413 ' 00:06:20.413 11:18:20 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:20.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.413 --rc genhtml_branch_coverage=1 00:06:20.413 --rc genhtml_function_coverage=1 00:06:20.413 --rc genhtml_legend=1 00:06:20.413 --rc geninfo_all_blocks=1 00:06:20.413 --rc geninfo_unexecuted_blocks=1 00:06:20.413 00:06:20.413 ' 00:06:20.413 11:18:20 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:20.413 11:18:20 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:20.413 11:18:20 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:20.413 11:18:20 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:20.413 11:18:20 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:20.413 11:18:20 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:20.413 11:18:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:20.413 ************************************ 
00:06:20.413 START TEST default_locks 00:06:20.413 ************************************ 00:06:20.413 11:18:20 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:06:20.413 11:18:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3688994 00:06:20.413 11:18:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:20.413 11:18:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3688994 00:06:20.413 11:18:20 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 3688994 ']' 00:06:20.413 11:18:20 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.413 11:18:20 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:20.413 11:18:20 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.413 11:18:20 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:20.413 11:18:20 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:20.413 [2024-11-02 11:18:20.737753] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:06:20.413 [2024-11-02 11:18:20.737835] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3688994 ] 00:06:20.413 [2024-11-02 11:18:20.812182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.672 [2024-11-02 11:18:20.863781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.929 11:18:21 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:20.929 11:18:21 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:06:20.929 11:18:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3688994 00:06:20.929 11:18:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3688994 00:06:20.929 11:18:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:21.188 lslocks: write error 00:06:21.188 11:18:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3688994 00:06:21.188 11:18:21 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 3688994 ']' 00:06:21.188 11:18:21 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 3688994 00:06:21.188 11:18:21 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:06:21.188 11:18:21 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:21.188 11:18:21 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3688994 00:06:21.188 11:18:21 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:21.188 11:18:21 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:21.188 11:18:21 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # echo 'killing process with 
pid 3688994' 00:06:21.188 killing process with pid 3688994 00:06:21.188 11:18:21 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 3688994 00:06:21.188 11:18:21 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 3688994 00:06:21.446 11:18:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3688994 00:06:21.446 11:18:21 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:21.446 11:18:21 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3688994 00:06:21.446 11:18:21 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:21.446 11:18:21 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:21.446 11:18:21 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:21.446 11:18:21 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:21.446 11:18:21 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 3688994 00:06:21.446 11:18:21 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 3688994 ']' 00:06:21.446 11:18:21 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.446 11:18:21 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:21.446 11:18:21 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:21.447 11:18:21 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:21.447 11:18:21 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:21.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (3688994) - No such process 00:06:21.447 ERROR: process (pid: 3688994) is no longer running 00:06:21.447 11:18:21 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:21.447 11:18:21 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:06:21.447 11:18:21 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:21.447 11:18:21 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:21.447 11:18:21 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:21.447 11:18:21 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:21.447 11:18:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:21.447 11:18:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:21.447 11:18:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:21.447 11:18:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:21.447 00:06:21.447 real 0m1.151s 00:06:21.447 user 0m1.117s 00:06:21.447 sys 0m0.526s 00:06:21.447 11:18:21 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:21.447 11:18:21 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:21.447 ************************************ 00:06:21.447 END TEST default_locks 00:06:21.447 ************************************ 00:06:21.705 11:18:21 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:21.705 11:18:21 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:21.705 11:18:21 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:21.705 11:18:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:21.706 ************************************ 00:06:21.706 START TEST default_locks_via_rpc 00:06:21.706 ************************************ 00:06:21.706 11:18:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:06:21.706 11:18:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3689203 00:06:21.706 11:18:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:21.706 11:18:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3689203 00:06:21.706 11:18:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 3689203 ']' 00:06:21.706 11:18:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.706 11:18:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:21.706 11:18:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
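The default_locks test that just finished makes two assertions: while spdk_tgt runs with -m 0x1, its pid must hold a file lock named spdk_cpu_lock (visible through lslocks; the stray "lslocks: write error" above does not fail the check, since the test goes on to pass), and once the target is killed, probing the old pid must fail, which is what the NOT wrapper around waitforlisten verifies. A bare sketch of both checks, with $pid standing in for the target's pid:

    # while the target is up: the CPU-mask lock must be visible for its pid
    lslocks -p "$pid" | grep -q spdk_cpu_lock      # succeeds only if the core-0 lock is held

    kill "$pid" && wait "$pid"

    # after the kill: the pid must be gone; the NOT wrapper expects this probe to fail
    if kill -0 "$pid" 2>/dev/null; then
        echo "ERROR: process (pid: $pid) should be gone" >&2
        exit 1
    fi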
00:06:21.706 11:18:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:21.706 11:18:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.706 [2024-11-02 11:18:21.943418] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:06:21.706 [2024-11-02 11:18:21.943500] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3689203 ] 00:06:21.706 [2024-11-02 11:18:22.011203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.706 [2024-11-02 11:18:22.059497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.965 11:18:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:21.965 11:18:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:21.965 11:18:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:21.965 11:18:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.965 11:18:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.965 11:18:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.965 11:18:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:21.965 11:18:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:21.965 11:18:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:21.965 11:18:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:21.965 11:18:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:21.965 11:18:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.965 11:18:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.965 11:18:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.965 11:18:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3689203 00:06:21.965 11:18:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3689203 00:06:21.965 11:18:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:22.530 11:18:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3689203 00:06:22.530 11:18:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 3689203 ']' 00:06:22.530 11:18:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 3689203 00:06:22.530 11:18:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname 00:06:22.530 11:18:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:22.530 11:18:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3689203 00:06:22.530 11:18:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:22.530 
11:18:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:22.530 11:18:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3689203' 00:06:22.530 killing process with pid 3689203 00:06:22.530 11:18:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 3689203 00:06:22.530 11:18:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 3689203 00:06:22.788 00:06:22.788 real 0m1.184s 00:06:22.788 user 0m1.133s 00:06:22.788 sys 0m0.530s 00:06:22.788 11:18:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:22.788 11:18:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.788 ************************************ 00:06:22.788 END TEST default_locks_via_rpc 00:06:22.788 ************************************ 00:06:22.788 11:18:23 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:22.788 11:18:23 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:22.788 11:18:23 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:22.788 11:18:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:22.788 ************************************ 00:06:22.788 START TEST non_locking_app_on_locked_coremask 00:06:22.788 ************************************ 00:06:22.788 11:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask 00:06:22.788 11:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3689438 00:06:22.788 11:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:22.788 11:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3689438 /var/tmp/spdk.sock 00:06:22.788 11:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3689438 ']' 00:06:22.788 11:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.788 11:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:22.789 11:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.789 11:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:22.789 11:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:22.789 [2024-11-02 11:18:23.171604] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 
00:06:22.789 [2024-11-02 11:18:23.171691] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3689438 ] 00:06:23.047 [2024-11-02 11:18:23.237961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.047 [2024-11-02 11:18:23.287094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.305 11:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:23.305 11:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:23.305 11:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3689451 00:06:23.305 11:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:23.305 11:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3689451 /var/tmp/spdk2.sock 00:06:23.305 11:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3689451 ']' 00:06:23.305 11:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:23.305 11:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:23.305 11:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:23.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:23.305 11:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:23.305 11:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:23.305 [2024-11-02 11:18:23.609884] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:06:23.305 [2024-11-02 11:18:23.609958] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3689451 ] 00:06:23.564 [2024-11-02 11:18:23.725963] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:23.564 [2024-11-02 11:18:23.725997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.564 [2024-11-02 11:18:23.823437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.498 11:18:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:24.498 11:18:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:24.498 11:18:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3689438 00:06:24.498 11:18:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3689438 00:06:24.498 11:18:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:24.757 lslocks: write error 00:06:24.757 11:18:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3689438 00:06:24.757 11:18:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 3689438 ']' 00:06:24.757 11:18:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 3689438 00:06:24.757 11:18:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:24.757 11:18:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:24.757 11:18:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3689438 00:06:24.757 11:18:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:24.757 11:18:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:24.757 11:18:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3689438' 00:06:24.757 killing process with pid 3689438 00:06:24.757 11:18:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 3689438 00:06:24.757 11:18:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 3689438 00:06:25.322 11:18:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3689451 00:06:25.322 11:18:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 3689451 ']' 00:06:25.322 11:18:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 3689451 00:06:25.322 11:18:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:25.322 11:18:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:25.322 11:18:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3689451 00:06:25.580 11:18:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:25.580 11:18:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:25.580 11:18:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3689451' 00:06:25.580 
killing process with pid 3689451 00:06:25.580 11:18:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 3689451 00:06:25.580 11:18:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 3689451 00:06:25.839 00:06:25.839 real 0m3.010s 00:06:25.839 user 0m3.197s 00:06:25.839 sys 0m1.022s 00:06:25.839 11:18:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:25.839 11:18:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:25.839 ************************************ 00:06:25.839 END TEST non_locking_app_on_locked_coremask 00:06:25.839 ************************************ 00:06:25.839 11:18:26 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:25.839 11:18:26 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:25.839 11:18:26 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:25.839 11:18:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:25.839 ************************************ 00:06:25.839 START TEST locking_app_on_unlocked_coremask 00:06:25.839 ************************************ 00:06:25.839 11:18:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:06:25.839 11:18:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3689758 00:06:25.839 11:18:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:25.839 11:18:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3689758 /var/tmp/spdk.sock 00:06:25.839 11:18:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3689758 ']' 00:06:25.839 11:18:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.839 11:18:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:25.839 11:18:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.839 11:18:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:25.839 11:18:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:25.839 [2024-11-02 11:18:26.233781] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:06:25.839 [2024-11-02 11:18:26.233867] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3689758 ] 00:06:26.098 [2024-11-02 11:18:26.300343] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:26.098 [2024-11-02 11:18:26.300389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.098 [2024-11-02 11:18:26.349555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.356 11:18:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:26.356 11:18:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:26.356 11:18:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3689881 00:06:26.356 11:18:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:26.356 11:18:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3689881 /var/tmp/spdk2.sock 00:06:26.356 11:18:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3689881 ']' 00:06:26.356 11:18:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:26.356 11:18:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:26.356 11:18:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:26.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:26.356 11:18:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:26.356 11:18:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:26.356 [2024-11-02 11:18:26.664379] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 
00:06:26.356 [2024-11-02 11:18:26.664457] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3689881 ] 00:06:26.615 [2024-11-02 11:18:26.777001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.615 [2024-11-02 11:18:26.874218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.548 11:18:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:27.548 11:18:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:27.548 11:18:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3689881 00:06:27.548 11:18:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:27.548 11:18:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3689881 00:06:27.807 lslocks: write error 00:06:27.807 11:18:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3689758 00:06:27.807 11:18:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 3689758 ']' 00:06:27.807 11:18:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 3689758 00:06:27.807 11:18:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:27.807 11:18:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:27.807 11:18:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3689758 00:06:27.807 11:18:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:27.807 11:18:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:27.807 11:18:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3689758' 00:06:27.807 killing process with pid 3689758 00:06:27.807 11:18:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 3689758 00:06:27.807 11:18:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 3689758 00:06:28.742 11:18:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3689881 00:06:28.742 11:18:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 3689881 ']' 00:06:28.742 11:18:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 3689881 00:06:28.742 11:18:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:28.742 11:18:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:28.742 11:18:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3689881 00:06:28.742 11:18:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:28.742 11:18:28 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:28.742 11:18:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3689881' 00:06:28.742 killing process with pid 3689881 00:06:28.742 11:18:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 3689881 00:06:28.742 11:18:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 3689881 00:06:29.000 00:06:29.001 real 0m3.065s 00:06:29.001 user 0m3.284s 00:06:29.001 sys 0m1.013s 00:06:29.001 11:18:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:29.001 11:18:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:29.001 ************************************ 00:06:29.001 END TEST locking_app_on_unlocked_coremask 00:06:29.001 ************************************ 00:06:29.001 11:18:29 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:29.001 11:18:29 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:29.001 11:18:29 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:29.001 11:18:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:29.001 ************************************ 00:06:29.001 START TEST locking_app_on_locked_coremask 00:06:29.001 ************************************ 00:06:29.001 11:18:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:06:29.001 11:18:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3690192 00:06:29.001 11:18:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:29.001 11:18:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3690192 /var/tmp/spdk.sock 00:06:29.001 11:18:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3690192 ']' 00:06:29.001 11:18:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.001 11:18:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:29.001 11:18:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.001 11:18:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:29.001 11:18:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:29.001 [2024-11-02 11:18:29.350749] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 
00:06:29.001 [2024-11-02 11:18:29.350836] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3690192 ] 00:06:29.259 [2024-11-02 11:18:29.416987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.259 [2024-11-02 11:18:29.465802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.517 11:18:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:29.517 11:18:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:29.517 11:18:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3690234 00:06:29.517 11:18:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:29.517 11:18:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3690234 /var/tmp/spdk2.sock 00:06:29.517 11:18:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:29.517 11:18:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3690234 /var/tmp/spdk2.sock 00:06:29.517 11:18:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:29.517 11:18:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:29.517 11:18:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:29.517 11:18:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:29.517 11:18:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 3690234 /var/tmp/spdk2.sock 00:06:29.517 11:18:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3690234 ']' 00:06:29.517 11:18:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:29.517 11:18:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:29.517 11:18:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:29.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:29.517 11:18:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:29.517 11:18:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:29.517 [2024-11-02 11:18:29.789207] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 
00:06:29.517 [2024-11-02 11:18:29.789322] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3690234 ] 00:06:29.517 [2024-11-02 11:18:29.897728] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3690192 has claimed it. 00:06:29.517 [2024-11-02 11:18:29.897791] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:30.451 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (3690234) - No such process 00:06:30.451 ERROR: process (pid: 3690234) is no longer running 00:06:30.451 11:18:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:30.451 11:18:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:06:30.451 11:18:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:30.451 11:18:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:30.451 11:18:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:30.451 11:18:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:30.451 11:18:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3690192 00:06:30.451 11:18:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3690192 00:06:30.451 11:18:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:30.451 lslocks: write error 00:06:30.451 11:18:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3690192 00:06:30.451 11:18:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 3690192 ']' 00:06:30.451 11:18:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 3690192 00:06:30.451 11:18:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:30.451 11:18:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:30.451 11:18:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3690192 00:06:30.451 11:18:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:30.451 11:18:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:30.451 11:18:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3690192' 00:06:30.451 killing process with pid 3690192 00:06:30.451 11:18:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 3690192 00:06:30.451 11:18:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 3690192 00:06:31.018 00:06:31.018 real 0m1.952s 00:06:31.018 user 0m2.173s 00:06:31.018 sys 0m0.631s 00:06:31.018 11:18:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 
00:06:31.018 11:18:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:31.018 ************************************ 00:06:31.018 END TEST locking_app_on_locked_coremask 00:06:31.018 ************************************ 00:06:31.018 11:18:31 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:31.018 11:18:31 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:31.018 11:18:31 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:31.018 11:18:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:31.018 ************************************ 00:06:31.018 START TEST locking_overlapped_coremask 00:06:31.018 ************************************ 00:06:31.018 11:18:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:06:31.018 11:18:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3690483 00:06:31.018 11:18:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:31.018 11:18:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3690483 /var/tmp/spdk.sock 00:06:31.018 11:18:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 3690483 ']' 00:06:31.018 11:18:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.018 11:18:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:31.018 11:18:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.018 11:18:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:31.018 11:18:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:31.018 [2024-11-02 11:18:31.352808] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 
00:06:31.018 [2024-11-02 11:18:31.352889] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3690483 ] 00:06:31.277 [2024-11-02 11:18:31.428751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:31.277 [2024-11-02 11:18:31.483388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.277 [2024-11-02 11:18:31.483420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:31.277 [2024-11-02 11:18:31.483423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.535 11:18:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:31.535 11:18:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:31.535 11:18:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3690495 00:06:31.535 11:18:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3690495 /var/tmp/spdk2.sock 00:06:31.535 11:18:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:31.535 11:18:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3690495 /var/tmp/spdk2.sock 00:06:31.535 11:18:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:31.535 11:18:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:31.535 11:18:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:31.535 11:18:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:31.535 11:18:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:31.535 11:18:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 3690495 /var/tmp/spdk2.sock 00:06:31.535 11:18:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 3690495 ']' 00:06:31.535 11:18:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:31.535 11:18:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:31.535 11:18:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:31.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:31.535 11:18:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:31.535 11:18:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:31.535 [2024-11-02 11:18:31.814939] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 
00:06:31.535 [2024-11-02 11:18:31.815036] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3690495 ] 00:06:31.535 [2024-11-02 11:18:31.919061] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3690483 has claimed it. 00:06:31.535 [2024-11-02 11:18:31.919122] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:32.469 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (3690495) - No such process 00:06:32.469 ERROR: process (pid: 3690495) is no longer running 00:06:32.469 11:18:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:32.469 11:18:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:06:32.469 11:18:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:32.469 11:18:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:32.469 11:18:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:32.469 11:18:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:32.469 11:18:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:32.469 11:18:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:32.469 11:18:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:32.469 11:18:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:32.469 11:18:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3690483 00:06:32.469 11:18:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 3690483 ']' 00:06:32.469 11:18:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 3690483 00:06:32.469 11:18:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:06:32.469 11:18:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:32.469 11:18:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3690483 00:06:32.469 11:18:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:32.469 11:18:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:32.469 11:18:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3690483' 00:06:32.469 killing process with pid 3690483 00:06:32.469 11:18:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 3690483 00:06:32.469 11:18:32 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 3690483 00:06:32.728 00:06:32.728 real 0m1.663s 00:06:32.728 user 0m4.703s 00:06:32.728 sys 0m0.465s 00:06:32.728 11:18:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:32.728 11:18:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:32.728 ************************************ 00:06:32.728 END TEST locking_overlapped_coremask 00:06:32.728 ************************************ 00:06:32.728 11:18:32 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:32.728 11:18:32 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:32.728 11:18:32 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:32.728 11:18:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:32.728 ************************************ 00:06:32.728 START TEST locking_overlapped_coremask_via_rpc 00:06:32.728 ************************************ 00:06:32.728 11:18:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:06:32.728 11:18:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3690657 00:06:32.728 11:18:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:32.728 11:18:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3690657 /var/tmp/spdk.sock 00:06:32.728 11:18:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 3690657 ']' 00:06:32.728 11:18:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.728 11:18:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:32.728 11:18:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.728 11:18:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:32.728 11:18:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.728 [2024-11-02 11:18:33.069248] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:06:32.728 [2024-11-02 11:18:33.069372] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3690657 ] 00:06:32.987 [2024-11-02 11:18:33.141754] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:32.987 [2024-11-02 11:18:33.141794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:32.987 [2024-11-02 11:18:33.193189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:32.987 [2024-11-02 11:18:33.193265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.987 [2024-11-02 11:18:33.193269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:33.245 11:18:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:33.245 11:18:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:33.245 11:18:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3690788 00:06:33.246 11:18:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:33.246 11:18:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3690788 /var/tmp/spdk2.sock 00:06:33.246 11:18:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 3690788 ']' 00:06:33.246 11:18:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:33.246 11:18:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:33.246 11:18:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:33.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:33.246 11:18:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:33.246 11:18:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.246 [2024-11-02 11:18:33.517972] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:06:33.246 [2024-11-02 11:18:33.518070] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3690788 ] 00:06:33.246 [2024-11-02 11:18:33.623993] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:33.246 [2024-11-02 11:18:33.624041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:33.504 [2024-11-02 11:18:33.721107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:33.504 [2024-11-02 11:18:33.721172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:33.504 [2024-11-02 11:18:33.721174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:34.438 11:18:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:34.438 11:18:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:34.438 11:18:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:34.438 11:18:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.438 11:18:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.438 11:18:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.438 11:18:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:34.438 11:18:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:34.438 11:18:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:34.438 11:18:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:34.438 11:18:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:34.438 11:18:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:34.438 11:18:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:34.438 11:18:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:34.438 11:18:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.438 11:18:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.438 [2024-11-02 11:18:34.503364] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3690657 has claimed it. 
00:06:34.438 request: 00:06:34.438 { 00:06:34.438 "method": "framework_enable_cpumask_locks", 00:06:34.438 "req_id": 1 00:06:34.438 } 00:06:34.438 Got JSON-RPC error response 00:06:34.438 response: 00:06:34.438 { 00:06:34.438 "code": -32603, 00:06:34.438 "message": "Failed to claim CPU core: 2" 00:06:34.438 } 00:06:34.438 11:18:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:34.438 11:18:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:34.438 11:18:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:34.438 11:18:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:34.438 11:18:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:34.438 11:18:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3690657 /var/tmp/spdk.sock 00:06:34.438 11:18:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 3690657 ']' 00:06:34.438 11:18:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.438 11:18:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:34.438 11:18:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.438 11:18:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:34.438 11:18:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.438 11:18:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:34.438 11:18:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:34.438 11:18:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3690788 /var/tmp/spdk2.sock 00:06:34.438 11:18:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 3690788 ']' 00:06:34.438 11:18:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:34.438 11:18:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:34.438 11:18:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:34.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
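The JSON-RPC exchange above is the core of this test: the second target (mask 0x1c) asks to claim its cores while the first target (pid 3690657, mask 0x7) already holds core 2, so framework_enable_cpumask_locks comes back with -32603. The rpc_cmd wrapper seen in the trace drives SPDK's scripts/rpc.py; a rough manual reproduction, assuming a built tree at $SPDK_DIR and the same core masks and socket paths as this run (sketch only):

# Two targets on overlapping core masks, both started with cpumask locks disabled.
"$SPDK_DIR/build/bin/spdk_tgt" -m 0x7 --disable-cpumask-locks &
"$SPDK_DIR/build/bin/spdk_tgt" -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
sleep 2    # crude stand-in for the waitforlisten helper used by autotest_common.sh

# First target claims cores 0-2; the second then fails on the shared core 2.
"$SPDK_DIR/scripts/rpc.py" framework_enable_cpumask_locks
"$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk2.sock framework_enable_cpumask_locks || \
    echo "expected failure: core 2 already claimed"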
00:06:34.438 11:18:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:34.439 11:18:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.696 11:18:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:34.696 11:18:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:34.696 11:18:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:34.696 11:18:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:34.696 11:18:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:34.696 11:18:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:34.696 00:06:34.696 real 0m2.048s 00:06:34.696 user 0m1.131s 00:06:34.696 sys 0m0.179s 00:06:34.696 11:18:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:34.696 11:18:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.696 ************************************ 00:06:34.696 END TEST locking_overlapped_coremask_via_rpc 00:06:34.696 ************************************ 00:06:34.696 11:18:35 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:34.696 11:18:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3690657 ]] 00:06:34.696 11:18:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3690657 00:06:34.696 11:18:35 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 3690657 ']' 00:06:34.696 11:18:35 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 3690657 00:06:34.696 11:18:35 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:06:34.696 11:18:35 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:34.696 11:18:35 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3690657 00:06:34.955 11:18:35 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:34.955 11:18:35 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:34.955 11:18:35 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3690657' 00:06:34.955 killing process with pid 3690657 00:06:34.955 11:18:35 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 3690657 00:06:34.955 11:18:35 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 3690657 00:06:35.213 11:18:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3690788 ]] 00:06:35.213 11:18:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3690788 00:06:35.213 11:18:35 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 3690788 ']' 00:06:35.213 11:18:35 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 3690788 00:06:35.213 11:18:35 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:06:35.213 11:18:35 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' 
Linux = Linux ']' 00:06:35.213 11:18:35 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3690788 00:06:35.213 11:18:35 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:06:35.213 11:18:35 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:06:35.213 11:18:35 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3690788' 00:06:35.213 killing process with pid 3690788 00:06:35.213 11:18:35 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 3690788 00:06:35.213 11:18:35 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 3690788 00:06:35.780 11:18:35 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:35.780 11:18:35 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:35.780 11:18:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3690657 ]] 00:06:35.780 11:18:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3690657 00:06:35.780 11:18:35 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 3690657 ']' 00:06:35.780 11:18:35 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 3690657 00:06:35.780 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3690657) - No such process 00:06:35.780 11:18:35 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 3690657 is not found' 00:06:35.780 Process with pid 3690657 is not found 00:06:35.780 11:18:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3690788 ]] 00:06:35.780 11:18:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3690788 00:06:35.780 11:18:35 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 3690788 ']' 00:06:35.780 11:18:35 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 3690788 00:06:35.780 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3690788) - No such process 00:06:35.780 11:18:35 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 3690788 is not found' 00:06:35.780 Process with pid 3690788 is not found 00:06:35.780 11:18:35 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:35.780 00:06:35.780 real 0m15.448s 00:06:35.780 user 0m28.289s 00:06:35.780 sys 0m5.297s 00:06:35.780 11:18:35 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:35.780 11:18:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:35.780 ************************************ 00:06:35.780 END TEST cpu_locks 00:06:35.780 ************************************ 00:06:35.780 00:06:35.780 real 0m40.235s 00:06:35.780 user 1m19.484s 00:06:35.780 sys 0m9.358s 00:06:35.780 11:18:35 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:35.780 11:18:35 event -- common/autotest_common.sh@10 -- # set +x 00:06:35.780 ************************************ 00:06:35.780 END TEST event 00:06:35.780 ************************************ 00:06:35.780 11:18:36 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:35.780 11:18:36 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:35.780 11:18:36 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:35.780 11:18:36 -- common/autotest_common.sh@10 -- # set +x 00:06:35.780 ************************************ 00:06:35.780 START TEST thread 00:06:35.780 ************************************ 00:06:35.780 11:18:36 thread -- 
common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:35.780 * Looking for test storage... 00:06:35.780 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:35.780 11:18:36 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:35.780 11:18:36 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:06:35.780 11:18:36 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:35.780 11:18:36 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:35.780 11:18:36 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:35.780 11:18:36 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:35.780 11:18:36 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:35.780 11:18:36 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:35.780 11:18:36 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:35.780 11:18:36 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:35.780 11:18:36 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:35.780 11:18:36 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:35.780 11:18:36 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:35.780 11:18:36 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:35.780 11:18:36 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:35.780 11:18:36 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:35.780 11:18:36 thread -- scripts/common.sh@345 -- # : 1 00:06:35.780 11:18:36 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:35.780 11:18:36 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:35.780 11:18:36 thread -- scripts/common.sh@365 -- # decimal 1 00:06:35.780 11:18:36 thread -- scripts/common.sh@353 -- # local d=1 00:06:35.780 11:18:36 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:35.780 11:18:36 thread -- scripts/common.sh@355 -- # echo 1 00:06:35.780 11:18:36 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:35.780 11:18:36 thread -- scripts/common.sh@366 -- # decimal 2 00:06:35.780 11:18:36 thread -- scripts/common.sh@353 -- # local d=2 00:06:35.780 11:18:36 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:35.780 11:18:36 thread -- scripts/common.sh@355 -- # echo 2 00:06:35.780 11:18:36 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:35.780 11:18:36 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:35.780 11:18:36 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:35.780 11:18:36 thread -- scripts/common.sh@368 -- # return 0 00:06:35.780 11:18:36 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:35.780 11:18:36 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:35.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.780 --rc genhtml_branch_coverage=1 00:06:35.780 --rc genhtml_function_coverage=1 00:06:35.780 --rc genhtml_legend=1 00:06:35.780 --rc geninfo_all_blocks=1 00:06:35.780 --rc geninfo_unexecuted_blocks=1 00:06:35.780 00:06:35.780 ' 00:06:35.780 11:18:36 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:35.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.780 --rc genhtml_branch_coverage=1 00:06:35.780 --rc genhtml_function_coverage=1 00:06:35.780 --rc genhtml_legend=1 00:06:35.780 --rc geninfo_all_blocks=1 00:06:35.780 --rc geninfo_unexecuted_blocks=1 00:06:35.780 
00:06:35.780 ' 00:06:35.780 11:18:36 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:35.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.780 --rc genhtml_branch_coverage=1 00:06:35.780 --rc genhtml_function_coverage=1 00:06:35.780 --rc genhtml_legend=1 00:06:35.780 --rc geninfo_all_blocks=1 00:06:35.780 --rc geninfo_unexecuted_blocks=1 00:06:35.780 00:06:35.780 ' 00:06:35.780 11:18:36 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:35.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.780 --rc genhtml_branch_coverage=1 00:06:35.780 --rc genhtml_function_coverage=1 00:06:35.780 --rc genhtml_legend=1 00:06:35.780 --rc geninfo_all_blocks=1 00:06:35.780 --rc geninfo_unexecuted_blocks=1 00:06:35.780 00:06:35.780 ' 00:06:35.780 11:18:36 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:35.780 11:18:36 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:06:35.780 11:18:36 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:35.780 11:18:36 thread -- common/autotest_common.sh@10 -- # set +x 00:06:36.039 ************************************ 00:06:36.039 START TEST thread_poller_perf 00:06:36.039 ************************************ 00:06:36.039 11:18:36 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:36.039 [2024-11-02 11:18:36.215824] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:06:36.039 [2024-11-02 11:18:36.215891] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3691164 ] 00:06:36.039 [2024-11-02 11:18:36.280337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.039 [2024-11-02 11:18:36.327535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.039 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:37.468 [2024-11-02T10:18:37.870Z] ====================================== 00:06:37.468 [2024-11-02T10:18:37.870Z] busy:2712269460 (cyc) 00:06:37.468 [2024-11-02T10:18:37.870Z] total_run_count: 295000 00:06:37.468 [2024-11-02T10:18:37.870Z] tsc_hz: 2700000000 (cyc) 00:06:37.468 [2024-11-02T10:18:37.870Z] ====================================== 00:06:37.468 [2024-11-02T10:18:37.870Z] poller_cost: 9194 (cyc), 3405 (nsec) 00:06:37.468 00:06:37.468 real 0m1.184s 00:06:37.468 user 0m1.106s 00:06:37.468 sys 0m0.072s 00:06:37.468 11:18:37 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:37.468 11:18:37 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:37.468 ************************************ 00:06:37.468 END TEST thread_poller_perf 00:06:37.468 ************************************ 00:06:37.468 11:18:37 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:37.468 11:18:37 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:06:37.468 11:18:37 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:37.468 11:18:37 thread -- common/autotest_common.sh@10 -- # set +x 00:06:37.468 ************************************ 00:06:37.468 START TEST thread_poller_perf 00:06:37.468 ************************************ 00:06:37.468 11:18:37 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:37.468 [2024-11-02 11:18:37.453880] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:06:37.468 [2024-11-02 11:18:37.453949] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3691327 ] 00:06:37.468 [2024-11-02 11:18:37.528330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.468 [2024-11-02 11:18:37.575897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.468 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:06:38.402 [2024-11-02T10:18:38.804Z] ====================================== 00:06:38.402 [2024-11-02T10:18:38.804Z] busy:2702700899 (cyc) 00:06:38.402 [2024-11-02T10:18:38.804Z] total_run_count: 3852000 00:06:38.402 [2024-11-02T10:18:38.804Z] tsc_hz: 2700000000 (cyc) 00:06:38.402 [2024-11-02T10:18:38.804Z] ====================================== 00:06:38.402 [2024-11-02T10:18:38.804Z] poller_cost: 701 (cyc), 259 (nsec) 00:06:38.402 00:06:38.402 real 0m1.187s 00:06:38.402 user 0m1.118s 00:06:38.402 sys 0m0.062s 00:06:38.402 11:18:38 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:38.402 11:18:38 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:38.402 ************************************ 00:06:38.402 END TEST thread_poller_perf 00:06:38.402 ************************************ 00:06:38.402 11:18:38 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:38.402 00:06:38.402 real 0m2.615s 00:06:38.402 user 0m2.358s 00:06:38.402 sys 0m0.260s 00:06:38.402 11:18:38 thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:38.402 11:18:38 thread -- common/autotest_common.sh@10 -- # set +x 00:06:38.402 ************************************ 00:06:38.402 END TEST thread 00:06:38.402 ************************************ 00:06:38.402 11:18:38 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:38.402 11:18:38 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:38.402 11:18:38 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:38.402 11:18:38 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:38.402 11:18:38 -- common/autotest_common.sh@10 -- # set +x 00:06:38.402 ************************************ 00:06:38.402 START TEST app_cmdline 00:06:38.402 ************************************ 00:06:38.402 11:18:38 app_cmdline -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:38.402 * Looking for test storage... 
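The poller_cost figures printed by both runs above follow directly from the other counters: busy cycles divided by total_run_count gives the cost of one poll in cycles, and scaling by tsc_hz converts that to nanoseconds. A minimal sketch of the arithmetic, using the numbers from the 1-microsecond run (bash integer division is an assumption here, not taken from poller_perf itself, but it reproduces both printed results):

busy=2712269460 total_run_count=295000 tsc_hz=2700000000
cyc=$(( busy / total_run_count ))        # 9194, matching "poller_cost: 9194 (cyc)"
nsec=$(( cyc * 1000000000 / tsc_hz ))    # 3405, matching "3405 (nsec)"
echo "poller_cost: ${cyc} (cyc), ${nsec} (nsec)"

The same two lines with busy=2702700899 and total_run_count=3852000 give 701 (cyc) and 259 (nsec), the figures reported by the 0-microsecond run.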
00:06:38.402 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:38.402 11:18:38 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:38.402 11:18:38 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:06:38.402 11:18:38 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:38.661 11:18:38 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:38.661 11:18:38 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:38.661 11:18:38 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:38.661 11:18:38 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:38.661 11:18:38 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:38.661 11:18:38 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:38.661 11:18:38 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:38.661 11:18:38 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:38.661 11:18:38 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:38.661 11:18:38 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:38.661 11:18:38 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:38.661 11:18:38 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:38.661 11:18:38 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:38.661 11:18:38 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:38.661 11:18:38 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:38.661 11:18:38 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:38.661 11:18:38 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:38.661 11:18:38 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:38.661 11:18:38 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:38.661 11:18:38 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:38.661 11:18:38 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:38.661 11:18:38 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:38.661 11:18:38 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:38.661 11:18:38 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:38.661 11:18:38 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:38.661 11:18:38 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:38.661 11:18:38 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:38.661 11:18:38 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:38.661 11:18:38 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:38.661 11:18:38 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:38.661 11:18:38 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:38.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.661 --rc genhtml_branch_coverage=1 00:06:38.661 --rc genhtml_function_coverage=1 00:06:38.661 --rc genhtml_legend=1 00:06:38.661 --rc geninfo_all_blocks=1 00:06:38.662 --rc geninfo_unexecuted_blocks=1 00:06:38.662 00:06:38.662 ' 00:06:38.662 11:18:38 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:38.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.662 --rc genhtml_branch_coverage=1 00:06:38.662 --rc genhtml_function_coverage=1 00:06:38.662 --rc genhtml_legend=1 00:06:38.662 --rc geninfo_all_blocks=1 00:06:38.662 --rc geninfo_unexecuted_blocks=1 
00:06:38.662 00:06:38.662 ' 00:06:38.662 11:18:38 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:38.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.662 --rc genhtml_branch_coverage=1 00:06:38.662 --rc genhtml_function_coverage=1 00:06:38.662 --rc genhtml_legend=1 00:06:38.662 --rc geninfo_all_blocks=1 00:06:38.662 --rc geninfo_unexecuted_blocks=1 00:06:38.662 00:06:38.662 ' 00:06:38.662 11:18:38 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:38.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.662 --rc genhtml_branch_coverage=1 00:06:38.662 --rc genhtml_function_coverage=1 00:06:38.662 --rc genhtml_legend=1 00:06:38.662 --rc geninfo_all_blocks=1 00:06:38.662 --rc geninfo_unexecuted_blocks=1 00:06:38.662 00:06:38.662 ' 00:06:38.662 11:18:38 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:38.662 11:18:38 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3691646 00:06:38.662 11:18:38 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:38.662 11:18:38 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3691646 00:06:38.662 11:18:38 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 3691646 ']' 00:06:38.662 11:18:38 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.662 11:18:38 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:38.662 11:18:38 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.662 11:18:38 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:38.662 11:18:38 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:38.662 [2024-11-02 11:18:38.877333] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 
00:06:38.662 [2024-11-02 11:18:38.877419] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3691646 ] 00:06:38.662 [2024-11-02 11:18:38.954837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.662 [2024-11-02 11:18:39.004254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.920 11:18:39 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:38.920 11:18:39 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:06:38.920 11:18:39 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:39.178 { 00:06:39.178 "version": "SPDK v25.01-pre git sha1 fa3ab7384", 00:06:39.178 "fields": { 00:06:39.178 "major": 25, 00:06:39.178 "minor": 1, 00:06:39.178 "patch": 0, 00:06:39.178 "suffix": "-pre", 00:06:39.178 "commit": "fa3ab7384" 00:06:39.178 } 00:06:39.178 } 00:06:39.178 11:18:39 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:39.178 11:18:39 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:39.178 11:18:39 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:39.178 11:18:39 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:39.178 11:18:39 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:39.178 11:18:39 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:39.178 11:18:39 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:39.178 11:18:39 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:39.178 11:18:39 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:39.178 11:18:39 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.436 11:18:39 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:39.436 11:18:39 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:39.436 11:18:39 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:39.436 11:18:39 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:39.436 11:18:39 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:39.436 11:18:39 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:39.436 11:18:39 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:39.436 11:18:39 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:39.436 11:18:39 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:39.436 11:18:39 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:39.436 11:18:39 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:39.436 11:18:39 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:39.436 11:18:39 app_cmdline -- common/autotest_common.sh@644 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:39.436 11:18:39 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:39.693 request: 00:06:39.693 { 00:06:39.693 "method": "env_dpdk_get_mem_stats", 00:06:39.693 "req_id": 1 00:06:39.693 } 00:06:39.693 Got JSON-RPC error response 00:06:39.693 response: 00:06:39.693 { 00:06:39.693 "code": -32601, 00:06:39.693 "message": "Method not found" 00:06:39.693 } 00:06:39.693 11:18:39 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:39.693 11:18:39 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:39.693 11:18:39 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:39.693 11:18:39 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:39.693 11:18:39 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3691646 00:06:39.693 11:18:39 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 3691646 ']' 00:06:39.693 11:18:39 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 3691646 00:06:39.693 11:18:39 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:06:39.693 11:18:39 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:39.693 11:18:39 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3691646 00:06:39.693 11:18:39 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:39.693 11:18:39 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:39.693 11:18:39 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3691646' 00:06:39.693 killing process with pid 3691646 00:06:39.693 11:18:39 app_cmdline -- common/autotest_common.sh@971 -- # kill 3691646 00:06:39.693 11:18:39 app_cmdline -- common/autotest_common.sh@976 -- # wait 3691646 00:06:39.952 00:06:39.952 real 0m1.617s 00:06:39.952 user 0m2.021s 00:06:39.952 sys 0m0.474s 00:06:39.952 11:18:40 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:39.952 11:18:40 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:39.952 ************************************ 00:06:39.952 END TEST app_cmdline 00:06:39.952 ************************************ 00:06:39.952 11:18:40 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:39.952 11:18:40 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:39.952 11:18:40 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:39.952 11:18:40 -- common/autotest_common.sh@10 -- # set +x 00:06:40.211 ************************************ 00:06:40.211 START TEST version 00:06:40.211 ************************************ 00:06:40.211 11:18:40 version -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:40.211 * Looking for test storage... 
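The allow-list behaviour exercised above is easy to reproduce by hand: spdk_tgt was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods answer and everything else is rejected with JSON-RPC error -32601. A minimal sketch against the default /var/tmp/spdk.sock socket (run from the spdk checkout; output shapes as seen in the trace above):

./scripts/rpc.py rpc_get_methods           # lists exactly the two allowed methods
./scripts/rpc.py spdk_get_version          # returns the {"version": "SPDK v25.01-pre ...", "fields": ...} object
./scripts/rpc.py env_dpdk_get_mem_stats    # fails with "Method not found" (code -32601), as in the trace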
00:06:40.211 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:40.211 11:18:40 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:40.211 11:18:40 version -- common/autotest_common.sh@1691 -- # lcov --version 00:06:40.211 11:18:40 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:40.211 11:18:40 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:40.211 11:18:40 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:40.211 11:18:40 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:40.211 11:18:40 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:40.211 11:18:40 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:40.211 11:18:40 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:40.211 11:18:40 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:40.211 11:18:40 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:40.211 11:18:40 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:40.211 11:18:40 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:40.211 11:18:40 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:40.211 11:18:40 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:40.211 11:18:40 version -- scripts/common.sh@344 -- # case "$op" in 00:06:40.211 11:18:40 version -- scripts/common.sh@345 -- # : 1 00:06:40.211 11:18:40 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:40.211 11:18:40 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:40.211 11:18:40 version -- scripts/common.sh@365 -- # decimal 1 00:06:40.211 11:18:40 version -- scripts/common.sh@353 -- # local d=1 00:06:40.211 11:18:40 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:40.211 11:18:40 version -- scripts/common.sh@355 -- # echo 1 00:06:40.211 11:18:40 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:40.211 11:18:40 version -- scripts/common.sh@366 -- # decimal 2 00:06:40.211 11:18:40 version -- scripts/common.sh@353 -- # local d=2 00:06:40.211 11:18:40 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:40.211 11:18:40 version -- scripts/common.sh@355 -- # echo 2 00:06:40.211 11:18:40 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:40.211 11:18:40 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:40.211 11:18:40 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:40.211 11:18:40 version -- scripts/common.sh@368 -- # return 0 00:06:40.211 11:18:40 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:40.211 11:18:40 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:40.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.211 --rc genhtml_branch_coverage=1 00:06:40.211 --rc genhtml_function_coverage=1 00:06:40.211 --rc genhtml_legend=1 00:06:40.211 --rc geninfo_all_blocks=1 00:06:40.211 --rc geninfo_unexecuted_blocks=1 00:06:40.211 00:06:40.211 ' 00:06:40.211 11:18:40 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:40.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.211 --rc genhtml_branch_coverage=1 00:06:40.212 --rc genhtml_function_coverage=1 00:06:40.212 --rc genhtml_legend=1 00:06:40.212 --rc geninfo_all_blocks=1 00:06:40.212 --rc geninfo_unexecuted_blocks=1 00:06:40.212 00:06:40.212 ' 00:06:40.212 11:18:40 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:40.212 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.212 --rc genhtml_branch_coverage=1 00:06:40.212 --rc genhtml_function_coverage=1 00:06:40.212 --rc genhtml_legend=1 00:06:40.212 --rc geninfo_all_blocks=1 00:06:40.212 --rc geninfo_unexecuted_blocks=1 00:06:40.212 00:06:40.212 ' 00:06:40.212 11:18:40 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:40.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.212 --rc genhtml_branch_coverage=1 00:06:40.212 --rc genhtml_function_coverage=1 00:06:40.212 --rc genhtml_legend=1 00:06:40.212 --rc geninfo_all_blocks=1 00:06:40.212 --rc geninfo_unexecuted_blocks=1 00:06:40.212 00:06:40.212 ' 00:06:40.212 11:18:40 version -- app/version.sh@17 -- # get_header_version major 00:06:40.212 11:18:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:40.212 11:18:40 version -- app/version.sh@14 -- # cut -f2 00:06:40.212 11:18:40 version -- app/version.sh@14 -- # tr -d '"' 00:06:40.212 11:18:40 version -- app/version.sh@17 -- # major=25 00:06:40.212 11:18:40 version -- app/version.sh@18 -- # get_header_version minor 00:06:40.212 11:18:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:40.212 11:18:40 version -- app/version.sh@14 -- # cut -f2 00:06:40.212 11:18:40 version -- app/version.sh@14 -- # tr -d '"' 00:06:40.212 11:18:40 version -- app/version.sh@18 -- # minor=1 00:06:40.212 11:18:40 version -- app/version.sh@19 -- # get_header_version patch 00:06:40.212 11:18:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:40.212 11:18:40 version -- app/version.sh@14 -- # cut -f2 00:06:40.212 11:18:40 version -- app/version.sh@14 -- # tr -d '"' 00:06:40.212 11:18:40 version -- app/version.sh@19 -- # patch=0 00:06:40.212 11:18:40 version -- app/version.sh@20 -- # get_header_version suffix 00:06:40.212 11:18:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:40.212 11:18:40 version -- app/version.sh@14 -- # cut -f2 00:06:40.212 11:18:40 version -- app/version.sh@14 -- # tr -d '"' 00:06:40.212 11:18:40 version -- app/version.sh@20 -- # suffix=-pre 00:06:40.212 11:18:40 version -- app/version.sh@22 -- # version=25.1 00:06:40.212 11:18:40 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:40.212 11:18:40 version -- app/version.sh@28 -- # version=25.1rc0 00:06:40.212 11:18:40 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:40.212 11:18:40 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:40.212 11:18:40 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:40.212 11:18:40 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:40.212 00:06:40.212 real 0m0.206s 00:06:40.212 user 0m0.146s 00:06:40.212 sys 0m0.085s 00:06:40.212 11:18:40 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:40.212 
11:18:40 version -- common/autotest_common.sh@10 -- # set +x 00:06:40.212 ************************************ 00:06:40.212 END TEST version 00:06:40.212 ************************************ 00:06:40.212 11:18:40 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:40.212 11:18:40 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:40.212 11:18:40 -- spdk/autotest.sh@194 -- # uname -s 00:06:40.212 11:18:40 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:40.212 11:18:40 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:40.212 11:18:40 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:40.212 11:18:40 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:40.212 11:18:40 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:06:40.212 11:18:40 -- spdk/autotest.sh@256 -- # timing_exit lib 00:06:40.212 11:18:40 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:40.212 11:18:40 -- common/autotest_common.sh@10 -- # set +x 00:06:40.471 11:18:40 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:06:40.471 11:18:40 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:06:40.471 11:18:40 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:06:40.471 11:18:40 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:06:40.471 11:18:40 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:06:40.471 11:18:40 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:06:40.471 11:18:40 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:40.471 11:18:40 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:40.471 11:18:40 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:40.471 11:18:40 -- common/autotest_common.sh@10 -- # set +x 00:06:40.471 ************************************ 00:06:40.471 START TEST nvmf_tcp 00:06:40.471 ************************************ 00:06:40.471 11:18:40 nvmf_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:40.471 * Looking for test storage... 
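The 25.1rc0 string assembled by the version test above comes straight out of include/spdk/version.h: each component is grepped from its #define, and because the patch level is 0 and the suffix is -pre the result is reported as 25.1rc0. A simplified sketch of that extraction (run from the spdk checkout; the rc0 mapping at the end is inferred from the output above rather than copied from version.sh):

get_header_version() {    # simplified: the real script takes lowercase names and uppercases them
    grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" include/spdk/version.h | cut -f2 | tr -d '"'
}
major=$(get_header_version MAJOR)     # 25
minor=$(get_header_version MINOR)     # 1
patch=$(get_header_version PATCH)     # 0
suffix=$(get_header_version SUFFIX)   # -pre
version="${major}.${minor}"
(( patch != 0 )) && version+=".${patch}"
[[ $suffix == -pre ]] && version+=rc0
echo "$version"                       # 25.1rc0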
00:06:40.471 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:40.471 11:18:40 nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:40.471 11:18:40 nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:06:40.471 11:18:40 nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:40.471 11:18:40 nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:40.471 11:18:40 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:40.471 11:18:40 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:40.471 11:18:40 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:40.471 11:18:40 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:40.471 11:18:40 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:40.471 11:18:40 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:40.471 11:18:40 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:40.471 11:18:40 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:40.471 11:18:40 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:40.471 11:18:40 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:40.471 11:18:40 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:40.471 11:18:40 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:40.471 11:18:40 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:06:40.471 11:18:40 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:40.471 11:18:40 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:40.471 11:18:40 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:40.471 11:18:40 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:06:40.471 11:18:40 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:40.471 11:18:40 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:06:40.471 11:18:40 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:40.471 11:18:40 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:40.471 11:18:40 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:06:40.471 11:18:40 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:40.471 11:18:40 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:06:40.471 11:18:40 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:40.471 11:18:40 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:40.471 11:18:40 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:40.471 11:18:40 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:06:40.471 11:18:40 nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:40.471 11:18:40 nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:40.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.471 --rc genhtml_branch_coverage=1 00:06:40.471 --rc genhtml_function_coverage=1 00:06:40.471 --rc genhtml_legend=1 00:06:40.471 --rc geninfo_all_blocks=1 00:06:40.471 --rc geninfo_unexecuted_blocks=1 00:06:40.471 00:06:40.471 ' 00:06:40.471 11:18:40 nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:40.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.471 --rc genhtml_branch_coverage=1 00:06:40.471 --rc genhtml_function_coverage=1 00:06:40.471 --rc genhtml_legend=1 00:06:40.471 --rc geninfo_all_blocks=1 00:06:40.471 --rc geninfo_unexecuted_blocks=1 00:06:40.471 00:06:40.471 ' 00:06:40.471 11:18:40 nvmf_tcp -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:06:40.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.471 --rc genhtml_branch_coverage=1 00:06:40.471 --rc genhtml_function_coverage=1 00:06:40.471 --rc genhtml_legend=1 00:06:40.471 --rc geninfo_all_blocks=1 00:06:40.471 --rc geninfo_unexecuted_blocks=1 00:06:40.471 00:06:40.471 ' 00:06:40.471 11:18:40 nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:40.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.471 --rc genhtml_branch_coverage=1 00:06:40.471 --rc genhtml_function_coverage=1 00:06:40.471 --rc genhtml_legend=1 00:06:40.471 --rc geninfo_all_blocks=1 00:06:40.471 --rc geninfo_unexecuted_blocks=1 00:06:40.471 00:06:40.471 ' 00:06:40.471 11:18:40 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:40.471 11:18:40 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:40.471 11:18:40 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:40.471 11:18:40 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:40.471 11:18:40 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:40.471 11:18:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:40.471 ************************************ 00:06:40.471 START TEST nvmf_target_core 00:06:40.471 ************************************ 00:06:40.471 11:18:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:40.471 * Looking for test storage... 00:06:40.471 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:40.471 11:18:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:40.471 11:18:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lcov --version 00:06:40.471 11:18:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:40.730 11:18:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:40.731 11:18:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:40.731 11:18:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:40.731 11:18:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:40.731 11:18:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:06:40.731 11:18:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:06:40.731 11:18:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:06:40.731 11:18:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:06:40.731 11:18:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:06:40.731 11:18:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:06:40.731 11:18:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:06:40.731 11:18:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:40.731 11:18:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:06:40.731 11:18:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:06:40.731 11:18:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:40.731 11:18:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:40.731 11:18:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:06:40.731 11:18:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:06:40.731 11:18:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:40.731 11:18:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:06:40.731 11:18:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:06:40.731 11:18:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:06:40.731 11:18:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:06:40.731 11:18:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:40.731 11:18:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:06:40.731 11:18:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:06:40.731 11:18:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:40.731 11:18:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:40.731 11:18:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:06:40.731 11:18:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:40.731 11:18:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:40.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.731 --rc genhtml_branch_coverage=1 00:06:40.731 --rc genhtml_function_coverage=1 00:06:40.731 --rc genhtml_legend=1 00:06:40.731 --rc geninfo_all_blocks=1 00:06:40.731 --rc geninfo_unexecuted_blocks=1 00:06:40.731 00:06:40.731 ' 00:06:40.731 11:18:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:40.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.731 --rc genhtml_branch_coverage=1 00:06:40.731 --rc genhtml_function_coverage=1 00:06:40.731 --rc genhtml_legend=1 00:06:40.731 --rc geninfo_all_blocks=1 00:06:40.731 --rc geninfo_unexecuted_blocks=1 00:06:40.731 00:06:40.731 ' 00:06:40.731 11:18:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:40.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.731 --rc genhtml_branch_coverage=1 00:06:40.731 --rc genhtml_function_coverage=1 00:06:40.731 --rc genhtml_legend=1 00:06:40.731 --rc geninfo_all_blocks=1 00:06:40.731 --rc geninfo_unexecuted_blocks=1 00:06:40.731 00:06:40.731 ' 00:06:40.731 11:18:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:40.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.731 --rc genhtml_branch_coverage=1 00:06:40.731 --rc genhtml_function_coverage=1 00:06:40.731 --rc genhtml_legend=1 00:06:40.731 --rc geninfo_all_blocks=1 00:06:40.731 --rc geninfo_unexecuted_blocks=1 00:06:40.731 00:06:40.731 ' 00:06:40.731 11:18:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:40.731 11:18:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:40.731 11:18:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:40.731 11:18:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:40.731 11:18:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:40.731 11:18:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:40.731 11:18:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:40.731 11:18:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:40.731 11:18:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:40.731 11:18:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:40.731 11:18:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:40.731 11:18:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:40.731 11:18:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:40.731 11:18:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:40.731 11:18:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:40.731 11:18:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:40.731 11:18:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:40.731 11:18:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:40.731 11:18:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:40.731 11:18:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:40.731 11:18:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:40.731 11:18:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:06:40.731 11:18:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:40.731 11:18:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:40.731 11:18:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:40.731 11:18:40 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.731 11:18:40 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.731 11:18:40 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.731 11:18:40 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:40.731 11:18:40 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.731 11:18:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:06:40.731 11:18:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:40.731 11:18:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:40.731 11:18:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:40.731 11:18:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:40.731 11:18:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:40.731 11:18:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:40.731 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:40.731 11:18:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:40.731 11:18:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:40.731 11:18:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:40.731 11:18:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:40.731 11:18:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:40.731 11:18:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:40.731 11:18:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:40.731 11:18:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:40.731 11:18:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:40.731 11:18:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:40.731 
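One benign wart is visible each time nvmf/common.sh is sourced above: line 33 evaluates '[' '' -eq 1 ']' because the flag variable it tests is empty, so bash prints "integer expression expected" and the branch is simply skipped. The usual defensive pattern for that kind of test is to default the variable before comparing (sketch only; FLAG is a placeholder, not the actual variable name in common.sh):

if [ "${FLAG:-0}" -eq 1 ]; then    # empty/unset FLAG is treated as 0 instead of tripping the integer test
    echo "feature enabled"
fi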
************************************ 00:06:40.731 START TEST nvmf_abort 00:06:40.731 ************************************ 00:06:40.731 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:40.731 * Looking for test storage... 00:06:40.731 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:40.731 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:40.731 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:06:40.731 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:40.731 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:40.731 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:40.731 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:40.731 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:40.991 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:06:40.991 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:06:40.991 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:06:40.991 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:06:40.991 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:06:40.991 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:06:40.991 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:06:40.991 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:40.991 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:06:40.991 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:06:40.991 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:40.991 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:40.991 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:06:40.991 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:06:40.991 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:40.991 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:06:40.991 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:06:40.991 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:06:40.991 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:06:40.991 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:40.991 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:06:40.991 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:06:40.991 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:40.991 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:40.991 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:06:40.991 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:40.991 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:40.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.991 --rc genhtml_branch_coverage=1 00:06:40.991 --rc genhtml_function_coverage=1 00:06:40.991 --rc genhtml_legend=1 00:06:40.991 --rc geninfo_all_blocks=1 00:06:40.991 --rc geninfo_unexecuted_blocks=1 00:06:40.991 00:06:40.991 ' 00:06:40.991 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:40.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.991 --rc genhtml_branch_coverage=1 00:06:40.991 --rc genhtml_function_coverage=1 00:06:40.991 --rc genhtml_legend=1 00:06:40.991 --rc geninfo_all_blocks=1 00:06:40.991 --rc geninfo_unexecuted_blocks=1 00:06:40.991 00:06:40.991 ' 00:06:40.991 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:40.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.991 --rc genhtml_branch_coverage=1 00:06:40.991 --rc genhtml_function_coverage=1 00:06:40.991 --rc genhtml_legend=1 00:06:40.991 --rc geninfo_all_blocks=1 00:06:40.991 --rc geninfo_unexecuted_blocks=1 00:06:40.991 00:06:40.991 ' 00:06:40.991 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:40.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.991 --rc genhtml_branch_coverage=1 00:06:40.991 --rc genhtml_function_coverage=1 00:06:40.991 --rc genhtml_legend=1 00:06:40.991 --rc geninfo_all_blocks=1 00:06:40.991 --rc geninfo_unexecuted_blocks=1 00:06:40.991 00:06:40.991 ' 00:06:40.991 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:40.991 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:40.991 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:06:40.991 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:40.991 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:40.991 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:40.991 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:40.991 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:40.991 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:40.991 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:40.991 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:40.991 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:40.991 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:40.991 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:40.991 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:40.991 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:40.991 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:40.991 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:40.991 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:40.991 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:06:40.991 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:40.991 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:40.991 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:40.991 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.991 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.991 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.991 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:40.992 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.992 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:06:40.992 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:40.992 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:40.992 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:40.992 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:40.992 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:40.992 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:40.992 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:40.992 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:40.992 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:40.992 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:40.992 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:40.992 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:40.992 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
00:06:40.992 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:40.992 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:40.992 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:40.992 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:40.992 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:40.992 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:40.992 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:40.992 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:40.992 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:40.992 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:40.992 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:06:40.992 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:42.896 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:42.896 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:06:42.896 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:42.896 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:42.896 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:42.896 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:42.896 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:42.896 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:06:42.896 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:42.896 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:06:42.896 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:06:42.896 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:06:42.896 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:06:42.896 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:06:42.896 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:06:42.896 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:42.896 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:42.896 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:42.896 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:42.896 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:42.896 11:18:43 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:42.896 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:42.896 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:42.896 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:42.896 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:42.896 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:42.896 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:42.896 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:42.896 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:42.896 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:42.896 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:42.896 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:42.896 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:42.896 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:42.896 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:42.896 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:42.896 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:42.896 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:42.896 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:42.896 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:42.897 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:42.897 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:42.897 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:42.897 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:42.897 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:42.897 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:42.897 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:42.897 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:42.897 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:42.897 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:42.897 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:42.897 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:42.897 11:18:43 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:42.897 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:42.897 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:42.897 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:42.897 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:42.897 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:42.897 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:42.897 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:42.897 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:42.897 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:42.897 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:42.897 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:42.897 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:42.897 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:42.897 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:42.897 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:42.897 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:42.897 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:42.897 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:42.897 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:42.897 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:42.897 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:06:42.897 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:42.897 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:42.897 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:42.897 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:42.897 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:42.897 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:42.897 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:42.897 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:42.897 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:42.897 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:42.897 11:18:43 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:42.897 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:42.897 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:42.897 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:42.897 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:42.897 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:42.897 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:42.897 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:43.156 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:43.156 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:43.156 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:43.156 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:43.156 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:43.156 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:43.156 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:43.156 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:43.156 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:43.156 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.313 ms 00:06:43.156 00:06:43.156 --- 10.0.0.2 ping statistics --- 00:06:43.156 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:43.156 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:06:43.157 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:43.157 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:43.157 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:06:43.157 00:06:43.157 --- 10.0.0.1 ping statistics --- 00:06:43.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:43.157 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:06:43.157 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:43.157 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:06:43.157 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:43.157 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:43.157 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:43.157 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:43.157 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:43.157 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:43.157 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:43.157 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:43.157 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:43.157 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:43.157 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:43.157 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=3693730 00:06:43.157 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:43.157 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3693730 00:06:43.157 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@833 -- # '[' -z 3693730 ']' 00:06:43.157 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.157 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:43.157 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.157 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:43.157 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:43.157 [2024-11-02 11:18:43.458735] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 
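Condensed, the namespace plumbing traced above (nvmf_tcp_init) is the sequence below; the interface and namespace names are the ones detected earlier in the trace, and the addresses are the ones it assigns:

    # Target side lives in its own network namespace; the initiator side stays in the default one.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open TCP port 4420 on the initiator-facing interface; the real call also tags the
    # rule with an SPDK_NVMF comment so teardown can strip it later.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # Sanity checks, exactly as pinged in the trace.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
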
00:06:43.157 [2024-11-02 11:18:43.458818] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:43.157 [2024-11-02 11:18:43.531963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:43.415 [2024-11-02 11:18:43.583459] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:43.415 [2024-11-02 11:18:43.583521] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:43.415 [2024-11-02 11:18:43.583536] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:43.415 [2024-11-02 11:18:43.583547] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:43.415 [2024-11-02 11:18:43.583573] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:43.415 [2024-11-02 11:18:43.585157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:43.415 [2024-11-02 11:18:43.585220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:43.415 [2024-11-02 11:18:43.585222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.415 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:43.415 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@866 -- # return 0 00:06:43.415 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:43.415 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:43.415 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:43.416 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:43.416 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:43.416 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.416 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:43.416 [2024-11-02 11:18:43.738908] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:43.416 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.416 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:43.416 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.416 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:43.416 Malloc0 00:06:43.416 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.416 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:43.416 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.416 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:43.416 Delay0 
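Here rpc_cmd is the test suite's wrapper for issuing JSON-RPCs to the running target. Spelled out as direct scripts/rpc.py calls, the provisioning traced above, together with the subsystem and listener calls that follow just below, amounts to roughly:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # TCP transport with the same options as the trace.
    $RPC nvmf_create_transport -t tcp -o -u 8192 -a 256
    # 64 MB malloc bdev with 4096-byte blocks, wrapped in a delay bdev with the
    # artificial latency parameters used above.
    $RPC bdev_malloc_create 64 4096 -b Malloc0
    $RPC bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    # Subsystem cnode0 exposes the delay bdev and listens on the target IP
    # (these calls appear immediately below in the trace).
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
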
00:06:43.416 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.416 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:43.416 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.416 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:43.416 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.416 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:43.416 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.416 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:43.416 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.416 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:43.416 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.416 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:43.416 [2024-11-02 11:18:43.812505] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:43.416 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.416 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:43.416 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.416 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:43.674 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.674 11:18:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:43.674 [2024-11-02 11:18:43.969409] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:46.205 Initializing NVMe Controllers 00:06:46.205 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:46.205 controller IO queue size 128 less than required 00:06:46.205 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:46.205 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:46.205 Initialization complete. Launching workers. 
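The workload itself is SPDK's bundled abort example, pointed at the subsystem just created. The invocation from the trace, reformatted for readability (the flag glosses in the comments are approximations read from the surrounding output, not the example's help text):

    # -r: transport ID of the listener created above; -q 128: queue depth (the
    # "IO queue size 128" warning above refers to it); -c 0x1: run on core 0;
    # -t 1: roughly one second of I/O; -l warning: log level.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -c 0x1 -t 1 -l warning -q 128

The NS/CTRLR counters printed next summarize how many I/Os were issued and how many aborts were submitted and completed successfully.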
00:06:46.205 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28607 00:06:46.205 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28668, failed to submit 62 00:06:46.205 success 28611, unsuccessful 57, failed 0 00:06:46.205 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:46.205 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.205 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:46.205 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.205 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:46.205 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:46.205 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:46.205 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:06:46.205 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:46.205 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:06:46.205 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:46.205 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:46.205 rmmod nvme_tcp 00:06:46.205 rmmod nvme_fabrics 00:06:46.205 rmmod nvme_keyring 00:06:46.205 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:46.205 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:06:46.205 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:06:46.205 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3693730 ']' 00:06:46.205 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3693730 00:06:46.205 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@952 -- # '[' -z 3693730 ']' 00:06:46.205 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # kill -0 3693730 00:06:46.205 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # uname 00:06:46.205 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:46.205 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3693730 00:06:46.205 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:06:46.205 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:06:46.205 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3693730' 00:06:46.205 killing process with pid 3693730 00:06:46.205 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@971 -- # kill 3693730 00:06:46.205 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@976 -- # wait 3693730 00:06:46.205 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:46.205 11:18:46 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:46.205 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:46.205 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:06:46.205 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:06:46.205 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:46.205 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:06:46.205 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:46.205 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:46.205 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:46.205 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:46.205 11:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:48.109 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:48.109 00:06:48.109 real 0m7.432s 00:06:48.109 user 0m10.904s 00:06:48.109 sys 0m2.538s 00:06:48.109 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:48.109 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:48.109 ************************************ 00:06:48.109 END TEST nvmf_abort 00:06:48.109 ************************************ 00:06:48.109 11:18:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:48.109 11:18:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:48.109 11:18:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:48.109 11:18:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:48.109 ************************************ 00:06:48.109 START TEST nvmf_ns_hotplug_stress 00:06:48.109 ************************************ 00:06:48.110 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:48.368 * Looking for test storage... 
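Before the hotplug test gets going, it is worth noting that the nvmf_abort teardown traced above (nvmftestfini) reduces to roughly the following, using the same names as the trace; the ip netns delete line is an assumption about what _remove_spdk_ns does here, since its body is not traced:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Remove the subsystem, unload the host-side NVMe/TCP modules, stop the app.
    $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    sync
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"          # killprocess 3693730 in the trace
    # Drop the SPDK_NVMF-tagged iptables rule, the namespace, and leftover addresses.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk             # assumed equivalent of _remove_spdk_ns
    ip -4 addr flush cvl_0_1
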
00:06:48.368 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:48.368 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:48.368 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:06:48.368 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:48.368 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:48.368 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:48.369 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:48.369 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:48.369 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:06:48.369 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:06:48.369 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:06:48.369 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:06:48.369 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:06:48.369 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:06:48.369 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:06:48.369 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:48.369 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:06:48.369 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:06:48.369 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:48.369 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:48.369 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:06:48.369 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:06:48.369 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:48.369 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:06:48.369 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:06:48.369 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:06:48.369 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:06:48.369 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:48.369 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:06:48.369 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:06:48.369 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:48.369 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:48.369 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:06:48.369 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:48.369 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:48.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.369 --rc genhtml_branch_coverage=1 00:06:48.369 --rc genhtml_function_coverage=1 00:06:48.369 --rc genhtml_legend=1 00:06:48.369 --rc geninfo_all_blocks=1 00:06:48.369 --rc geninfo_unexecuted_blocks=1 00:06:48.369 00:06:48.369 ' 00:06:48.369 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:48.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.369 --rc genhtml_branch_coverage=1 00:06:48.369 --rc genhtml_function_coverage=1 00:06:48.369 --rc genhtml_legend=1 00:06:48.369 --rc geninfo_all_blocks=1 00:06:48.369 --rc geninfo_unexecuted_blocks=1 00:06:48.369 00:06:48.369 ' 00:06:48.369 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:48.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.369 --rc genhtml_branch_coverage=1 00:06:48.369 --rc genhtml_function_coverage=1 00:06:48.369 --rc genhtml_legend=1 00:06:48.369 --rc geninfo_all_blocks=1 00:06:48.369 --rc geninfo_unexecuted_blocks=1 00:06:48.369 00:06:48.369 ' 00:06:48.369 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:48.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.369 --rc genhtml_branch_coverage=1 00:06:48.369 --rc genhtml_function_coverage=1 00:06:48.369 --rc genhtml_legend=1 00:06:48.369 --rc geninfo_all_blocks=1 00:06:48.369 --rc geninfo_unexecuted_blocks=1 00:06:48.369 00:06:48.369 ' 00:06:48.369 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:48.369 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:48.369 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:48.369 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:48.369 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:48.369 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:48.369 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:48.369 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:48.369 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:48.369 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:48.369 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:48.369 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:48.369 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:48.369 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:48.369 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:48.369 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:48.369 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:48.369 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:48.369 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:48.369 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:06:48.369 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:48.369 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:48.369 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:48.369 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.369 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.369 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.369 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:48.369 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.369 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:06:48.369 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:48.369 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:48.369 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:48.369 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:48.369 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:48.369 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:48.369 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:48.369 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:48.369 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:48.369 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:48.369 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:48.369 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:48.369 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:48.369 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:48.369 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:48.370 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:48.370 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:48.370 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:48.370 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:48.370 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:48.370 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:48.370 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:48.370 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:06:48.370 11:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:50.902 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:50.902 
11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:50.902 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:50.902 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:50.902 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:50.902 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:50.902 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.302 ms 00:06:50.902 00:06:50.902 --- 10.0.0.2 ping statistics --- 00:06:50.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:50.902 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:50.902 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:50.902 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms 00:06:50.902 00:06:50.902 --- 10.0.0.1 ping statistics --- 00:06:50.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:50.902 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3695968 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3695968 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # '[' -z 
3695968 ']' 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:50.902 11:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:50.902 [2024-11-02 11:18:50.914854] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:06:50.902 [2024-11-02 11:18:50.914942] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:50.902 [2024-11-02 11:18:50.989390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:50.902 [2024-11-02 11:18:51.037650] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:50.902 [2024-11-02 11:18:51.037705] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:50.902 [2024-11-02 11:18:51.037729] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:50.902 [2024-11-02 11:18:51.037740] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:50.902 [2024-11-02 11:18:51.037749] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
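
For reference, the network bring-up and target launch traced above boil down to roughly the following sequence. This is a condensed sketch reconstructed from the xtrace, not the verbatim script; the cvl_0_0/cvl_0_1 interface names and the 10.0.0.1/10.0.0.2 addresses are simply the values this run used, and the long Jenkins workspace paths are abbreviated.

    # Put the target-side port in its own network namespace; the initiator-side
    # port stays in the default namespace, so NVMe/TCP traffic crosses a real link.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # open the NVMe/TCP port (comment tag omitted)
    ping -c 1 10.0.0.2                                                 # reachability check in both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # Launch the NVMe-oF target inside the namespace; -m 0xE gives it the three reactor cores seen above.
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
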
00:06:50.902 [2024-11-02 11:18:51.039340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:50.902 [2024-11-02 11:18:51.039393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:50.902 [2024-11-02 11:18:51.039397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:50.902 11:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:50.902 11:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@866 -- # return 0 00:06:50.902 11:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:50.903 11:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:50.903 11:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:50.903 11:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:50.903 11:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:50.903 11:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:51.160 [2024-11-02 11:18:51.423827] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:51.160 11:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:51.418 11:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:51.675 [2024-11-02 11:18:51.954644] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:51.675 11:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:51.933 11:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:52.192 Malloc0 00:06:52.192 11:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:52.450 Delay0 00:06:52.450 11:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:52.707 11:18:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:52.965 NULL1 00:06:52.965 11:18:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:53.223 11:18:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3696393 00:06:53.223 11:18:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:53.223 11:18:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3696393 00:06:53.223 11:18:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:54.595 Read completed with error (sct=0, sc=11) 00:06:54.595 11:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:54.595 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:54.595 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:54.595 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:54.595 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:54.595 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:54.595 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:54.853 11:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:54.853 11:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:54.853 true 00:06:55.111 11:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3696393 00:06:55.111 11:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:55.675 11:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:55.933 11:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:55.933 11:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:56.190 true 00:06:56.190 11:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3696393 00:06:56.190 11:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:56.448 11:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
00:06:56.706 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:56.706 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:56.963 true 00:06:56.963 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3696393 00:06:56.963 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.221 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:57.786 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:57.786 11:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:57.786 true 00:06:57.786 11:18:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3696393 00:06:57.786 11:18:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.719 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:58.719 11:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:58.977 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:58.977 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:58.977 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:58.977 11:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:58.977 11:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:59.235 true 00:06:59.235 11:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3696393 00:06:59.235 11:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:59.492 11:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:00.057 11:19:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:00.057 11:19:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:00.057 true 00:07:00.315 11:19:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 3696393 00:07:00.315 11:19:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:01.248 11:19:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:01.248 11:19:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:01.248 11:19:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:01.506 true 00:07:01.506 11:19:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3696393 00:07:01.506 11:19:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:01.764 11:19:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:02.021 11:19:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:02.021 11:19:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:02.279 true 00:07:02.279 11:19:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3696393 00:07:02.279 11:19:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:02.537 11:19:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:02.795 11:19:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:02.795 11:19:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:03.052 true 00:07:03.052 11:19:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3696393 00:07:03.052 11:19:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:04.450 11:19:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:04.451 11:19:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:04.451 11:19:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:04.732 true 00:07:04.732 11:19:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3696393 00:07:04.732 11:19:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:04.990 11:19:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:05.247 11:19:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:05.247 11:19:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:05.505 true 00:07:05.505 11:19:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3696393 00:07:05.505 11:19:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:05.762 11:19:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:06.019 11:19:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:06.019 11:19:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:06.277 true 00:07:06.277 11:19:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3696393 00:07:06.277 11:19:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.210 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:07.468 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:07.468 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:07.726 true 00:07:07.726 11:19:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3696393 00:07:07.726 11:19:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.983 11:19:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:08.240 11:19:08 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:08.240 11:19:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:08.498 true 00:07:08.498 11:19:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3696393 00:07:08.498 11:19:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:08.755 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:09.013 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:09.013 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:09.271 true 00:07:09.271 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3696393 00:07:09.271 11:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:10.203 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:10.203 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:10.460 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:10.461 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:10.718 true 00:07:10.718 11:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3696393 00:07:10.718 11:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:10.976 11:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:11.234 11:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:11.234 11:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:11.492 true 00:07:11.492 11:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3696393 00:07:11.492 11:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:07:11.749 11:19:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:12.007 11:19:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:12.007 11:19:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:12.265 true 00:07:12.265 11:19:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3696393 00:07:12.265 11:19:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.197 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:13.455 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:13.455 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:13.455 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:13.713 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:13.713 11:19:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:13.970 true 00:07:13.971 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3696393 00:07:13.971 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:14.228 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:14.486 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:14.486 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:14.743 true 00:07:14.743 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3696393 00:07:14.743 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:15.676 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:15.676 11:19:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:15.933 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:15.933 11:19:16 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:15.933 true 00:07:16.191 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3696393 00:07:16.191 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:16.448 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:16.706 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:07:16.706 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:16.964 true 00:07:16.964 11:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3696393 00:07:16.964 11:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:17.895 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:17.895 11:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:17.895 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:17.895 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:17.895 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:17.895 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:18.151 true 00:07:18.151 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3696393 00:07:18.151 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:18.409 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:18.973 11:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:18.973 11:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:18.973 true 00:07:18.973 11:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3696393 00:07:18.973 11:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:19.905 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:19.905 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:20.163 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:20.163 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:20.421 true 00:07:20.421 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3696393 00:07:20.421 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:20.678 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:20.936 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:20.936 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:21.194 true 00:07:21.194 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3696393 00:07:21.194 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:22.127 11:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:22.127 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:22.127 11:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:07:22.127 11:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:07:22.692 true 00:07:22.692 11:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3696393 00:07:22.692 11:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:22.692 11:19:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:22.950 11:19:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:07:22.950 11:19:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1028 00:07:23.208 true 00:07:23.208 11:19:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3696393 00:07:23.208 11:19:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:23.465 Initializing NVMe Controllers 00:07:23.466 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:23.466 Controller IO queue size 128, less than required. 00:07:23.466 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:23.466 Controller IO queue size 128, less than required. 00:07:23.466 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:23.466 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:23.466 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:07:23.466 Initialization complete. Launching workers. 00:07:23.466 ======================================================== 00:07:23.466 Latency(us) 00:07:23.466 Device Information : IOPS MiB/s Average min max 00:07:23.466 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 721.68 0.35 80414.42 2684.15 1024177.65 00:07:23.466 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 9524.91 4.65 13439.97 1837.23 366998.66 00:07:23.466 ======================================================== 00:07:23.466 Total : 10246.59 5.00 18157.04 1837.23 1024177.65 00:07:23.466 00:07:23.723 11:19:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:23.981 11:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:07:23.981 11:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:07:24.239 true 00:07:24.239 11:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3696393 00:07:24.239 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3696393) - No such process 00:07:24.239 11:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3696393 00:07:24.239 11:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:24.497 11:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:24.754 11:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:07:24.754 11:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:07:24.754 11:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:07:24.754 11:19:24 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:24.754 11:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:07:25.012 null0 00:07:25.012 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:25.012 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:25.012 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:07:25.270 null1 00:07:25.270 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:25.270 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:25.270 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:07:25.528 null2 00:07:25.528 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:25.528 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:25.528 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:07:25.786 null3 00:07:25.786 11:19:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:25.786 11:19:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:25.786 11:19:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:26.043 null4 00:07:26.043 11:19:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:26.043 11:19:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:26.043 11:19:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:07:26.301 null5 00:07:26.301 11:19:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:26.301 11:19:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:26.301 11:19:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:07:26.559 null6 00:07:26.559 11:19:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:26.559 11:19:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:26.559 11:19:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:07:26.818 null7 00:07:26.818 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:26.818 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:26.818 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:26.818 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:26.818 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:26.818 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:26.818 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:26.818 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:26.818 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:26.818 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:26.818 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.818 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:26.818 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:26.818 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:26.818 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:26.818 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:26.818 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:26.818 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:26.818 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.818 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:26.818 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
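
The launcher being traced here follows the pattern sketched below (reconstructed from script lines 58-66 as they appear in the xtrace; nthreads, pids and the add_remove helper are named in the trace itself, while the exact loop layout is inferred): eight null bdevs, one background add/remove worker per namespace ID, then a wait on all workers.

    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        scripts/rpc.py bdev_null_create "null$i" 100 4096   # one small null bdev per worker
    done
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &                    # worker i churns namespace ID i+1
        pids+=($!)
    done
    wait "${pids[@]}"
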
00:07:26.818 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:26.818 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:26.818 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:26.818 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:26.818 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:26.818 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.818 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:26.818 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:26.818 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:07:26.818 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:26.818 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:26.819 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:07:26.819 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:26.819 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.819 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:26.819 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:26.819 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:07:26.819 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:26.819 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:07:26.819 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:26.819 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:26.819 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.819 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:26.819 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
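
Each worker runs the add_remove helper whose expansion is visible in the trace (script lines 14-18). Reconstructed as a sketch, it is a short attach/detach loop against the single subsystem; the body below is inferred from the xtrace and may differ cosmetically from the real script.

    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            # Attach the bdev as namespace $nsid, then immediately detach it again.
            scripts/rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }
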
00:07:26.819 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:07:26.819 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:26.819 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:07:26.819 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:26.819 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:26.819 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.819 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:26.819 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:26.819 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:07:26.819 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:26.819 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:07:26.819 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:26.819 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:26.819 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.819 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:26.819 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
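
As a quick consistency check on the spdk_nvme_perf summary printed just before this launcher started: the Total row is the plain IOPS sum of the two namespaces and, apparently, their IOPS-weighted mean latency, which matches the reported 18157.04 us.

    721.68 + 9524.91 = 10246.59 IOPS
    (721.68 * 80414.42 + 9524.91 * 13439.97) / 10246.59 ≈ 18157 us

NSID 1 here is the Delay0-backed namespace that was attached first, which lines up with it completing far fewer reads than the null-backed NSID 2.
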
00:07:26.819 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:07:26.819 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:26.819 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:26.819 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:07:26.819 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:26.819 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3700456 3700457 3700459 3700461 3700463 3700465 3700467 3700469 00:07:26.819 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.819 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:27.077 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:27.077 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:27.077 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:27.077 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:27.077 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:27.077 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:27.077 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:27.077 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:27.336 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.336 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.336 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:27.336 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.336 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.336 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:27.336 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.336 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.336 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:27.336 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.336 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.336 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:27.336 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.336 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.336 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:27.336 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.336 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.336 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:27.336 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.336 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.336 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:27.594 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.594 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.594 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:27.852 11:19:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:27.853 11:19:28 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:27.853 11:19:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:27.853 11:19:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:27.853 11:19:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:27.853 11:19:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:27.853 11:19:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:27.853 11:19:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:28.156 11:19:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.156 11:19:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.156 11:19:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:28.156 11:19:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.156 11:19:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.156 11:19:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:28.156 11:19:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.156 11:19:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.156 11:19:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:28.156 11:19:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.157 11:19:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.157 11:19:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:28.157 11:19:28 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.157 11:19:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.157 11:19:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:28.157 11:19:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.157 11:19:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.157 11:19:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:28.157 11:19:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.157 11:19:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.157 11:19:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:28.157 11:19:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.157 11:19:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.157 11:19:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:28.438 11:19:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:28.438 11:19:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:28.438 11:19:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:28.438 11:19:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:28.438 11:19:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:28.438 11:19:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:28.439 11:19:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:28.439 11:19:28 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:28.697 11:19:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.697 11:19:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.697 11:19:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:28.697 11:19:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.697 11:19:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.697 11:19:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:28.697 11:19:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.697 11:19:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.697 11:19:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:28.697 11:19:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.697 11:19:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.697 11:19:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:28.697 11:19:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.697 11:19:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.697 11:19:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.697 11:19:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:28.697 11:19:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.697 11:19:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:28.697 11:19:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.697 11:19:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.697 11:19:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 
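The @16/@17/@18 entries traced above are bash xtrace output from the namespace hotplug loop in target/ns_hotplug_stress.sh: several workers each cycle through nvmf_subsystem_add_ns followed by nvmf_subsystem_remove_ns against nqn.2016-06.io.spdk:cnode1, and because they run concurrently their add and remove calls interleave in the log. A rough sketch consistent with that trace is below; the function name, the parallel launch of eight workers, and the loop bound of ten are inferred from the trace, not copied from the actual script.

```bash
#!/usr/bin/env bash
# Sketch of the hotplug cycle traced above (ns_hotplug_stress.sh@16-18).
# Structure is inferred from the xtrace output; treat it as illustrative.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

add_remove() {
	local nsid=$1 bdev=$2 i
	for ((i = 0; i < 10; ++i)); do                              # traced as (( ++i )) / (( i < 10 ))
		"$rpc_py" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"   # line 17 in the trace
		"$rpc_py" nvmf_subsystem_remove_ns "$nqn" "$nsid"           # line 18 in the trace
	done
}

# One worker per namespace; running them in the background is what produces
# the interleaved, out-of-order add/remove batches seen in the log.
for n in {1..8}; do
	add_remove "$n" "null$((n - 1))" &
done
wait
```

Running the workers in parallel is the point of the stress test: it hits the target's attach/detach paths concurrently rather than adding and removing each namespace in isolation.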
00:07:28.697 11:19:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.697 11:19:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.697 11:19:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:28.955 11:19:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:28.955 11:19:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:28.955 11:19:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:28.955 11:19:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:28.955 11:19:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:28.955 11:19:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:28.955 11:19:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:28.955 11:19:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:29.214 11:19:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.214 11:19:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.214 11:19:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:29.214 11:19:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.214 11:19:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.214 11:19:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:29.214 11:19:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.214 11:19:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.214 11:19:29 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:29.214 11:19:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.214 11:19:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.214 11:19:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.214 11:19:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.214 11:19:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:29.214 11:19:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:29.214 11:19:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.214 11:19:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.214 11:19:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:29.214 11:19:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.214 11:19:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.214 11:19:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:29.214 11:19:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.214 11:19:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.214 11:19:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:29.471 11:19:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:29.471 11:19:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:29.471 11:19:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:29.471 11:19:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:29.471 11:19:29 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:29.471 11:19:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:29.471 11:19:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:29.472 11:19:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:29.730 11:19:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.730 11:19:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.730 11:19:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:29.730 11:19:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.730 11:19:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.730 11:19:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:29.730 11:19:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.730 11:19:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.730 11:19:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:29.730 11:19:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.730 11:19:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.730 11:19:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:29.988 11:19:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.988 11:19:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.988 11:19:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.988 11:19:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.988 11:19:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:29.988 11:19:30 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:29.988 11:19:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.988 11:19:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.988 11:19:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:29.988 11:19:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.988 11:19:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.988 11:19:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:30.247 11:19:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:30.247 11:19:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:30.247 11:19:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:30.247 11:19:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:30.247 11:19:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:30.247 11:19:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:30.247 11:19:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:30.247 11:19:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:30.506 11:19:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:30.506 11:19:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:30.506 11:19:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:30.506 11:19:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:30.506 11:19:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:30.506 11:19:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:30.506 11:19:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:30.506 11:19:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:30.506 11:19:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:30.506 11:19:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:30.506 11:19:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:30.506 11:19:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:30.506 11:19:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:30.506 11:19:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:30.506 11:19:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:30.506 11:19:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:30.506 11:19:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:30.506 11:19:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:30.506 11:19:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:30.506 11:19:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:30.506 11:19:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:30.506 11:19:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:30.506 11:19:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:30.506 11:19:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:30.764 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:30.764 11:19:31 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:30.764 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:30.764 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:30.764 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:30.764 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:30.764 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:30.764 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:31.022 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.022 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.022 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:31.022 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.022 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.022 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:31.022 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.022 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.022 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:31.022 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.022 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.022 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:31.022 11:19:31 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.022 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.022 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:31.022 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.022 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.022 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:31.022 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.022 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.023 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:31.023 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.023 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.023 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:31.280 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:31.281 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:31.281 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:31.281 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:31.281 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:31.281 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:31.281 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:31.281 11:19:31 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:31.539 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.539 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.539 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:31.539 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.539 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.539 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:31.539 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.539 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.539 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:31.539 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.539 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.539 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:31.539 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.539 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.539 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:31.539 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.539 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.539 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:31.539 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.539 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.539 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 
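Each rpc.py call in these batches is a client for the like-named JSON-RPC method on the SPDK target's RPC socket (by default /var/tmp/spdk.sock), so the load here lands entirely on the target's namespace attach/detach paths. If you want to spot-check the subsystem state between cycles, something like the following works; it is not part of the traced script, and the jq field names (namespaces, nsid, bdev_name) assume a recent SPDK rpc.py output format.

```bash
# Hypothetical spot-check, not part of the traced script: list the namespaces
# currently attached to cnode1. Field names assume a recent SPDK release.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

"$rpc_py" nvmf_get_subsystems \
	| jq -r '.[] | select(.nqn == "nqn.2016-06.io.spdk:cnode1")
	         | .namespaces[] | "\(.nsid)\t\(.bdev_name)"'
```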
00:07:31.539 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.539 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.539 11:19:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:32.106 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:32.106 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:32.106 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:32.106 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:32.106 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:32.106 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:32.106 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:32.106 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:32.364 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.364 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.364 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:32.364 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.364 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.364 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:32.364 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.364 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.364 11:19:32 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:32.364 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.364 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.364 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:32.364 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.364 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.364 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:32.364 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.364 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.364 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:32.364 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.364 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.364 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:32.364 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.364 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.364 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:32.623 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:32.623 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:32.623 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:32.623 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:32.623 11:19:32 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:32.623 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:32.623 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:32.623 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:32.881 11:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.881 11:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.881 11:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.881 11:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.881 11:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.881 11:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.881 11:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.881 11:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.881 11:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.881 11:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.881 11:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.881 11:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.881 11:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.881 11:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.881 11:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.881 11:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.881 11:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:32.881 11:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:32.881 11:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:32.881 11:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:07:32.881 11:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:32.881 11:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:07:32.881 11:19:33 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:32.881 11:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:32.881 rmmod nvme_tcp 00:07:32.881 rmmod nvme_fabrics 00:07:32.881 rmmod nvme_keyring 00:07:32.881 11:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:32.881 11:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:07:32.881 11:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:07:32.881 11:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3695968 ']' 00:07:32.881 11:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3695968 00:07:32.881 11:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' -z 3695968 ']' 00:07:32.881 11:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # kill -0 3695968 00:07:32.881 11:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # uname 00:07:32.881 11:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:32.881 11:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3695968 00:07:32.881 11:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:07:32.881 11:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:07:32.881 11:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3695968' 00:07:32.882 killing process with pid 3695968 00:07:32.882 11:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # kill 3695968 00:07:32.882 11:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@976 -- # wait 3695968 00:07:33.140 11:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:33.140 11:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:33.140 11:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:33.140 11:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:07:33.140 11:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:07:33.140 11:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:33.140 11:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:07:33.140 11:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:33.140 11:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:33.140 11:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:33.140 11:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 
15> /dev/null' 00:07:33.140 11:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:35.676 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:35.676 00:07:35.676 real 0m47.033s 00:07:35.676 user 3m39.274s 00:07:35.676 sys 0m15.913s 00:07:35.676 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:35.676 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:35.676 ************************************ 00:07:35.676 END TEST nvmf_ns_hotplug_stress 00:07:35.676 ************************************ 00:07:35.676 11:19:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:35.676 11:19:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:35.676 11:19:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:35.676 11:19:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:35.676 ************************************ 00:07:35.676 START TEST nvmf_delete_subsystem 00:07:35.676 ************************************ 00:07:35.676 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:35.676 * Looking for test storage... 00:07:35.676 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:35.676 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:35.676 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:07:35.676 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:35.676 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:35.676 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:35.676 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:35.676 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:35.676 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:07:35.676 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:07:35.676 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:07:35.676 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:07:35.676 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:07:35.676 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:07:35.676 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:07:35.676 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:35.676 11:19:35 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:07:35.676 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:07:35.676 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:35.676 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:35.676 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:07:35.676 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:07:35.676 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:35.676 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:07:35.676 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:07:35.676 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:07:35.676 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:07:35.676 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:35.676 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:07:35.676 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:07:35.676 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:35.676 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:35.676 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:07:35.676 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:35.676 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:35.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.676 --rc genhtml_branch_coverage=1 00:07:35.676 --rc genhtml_function_coverage=1 00:07:35.676 --rc genhtml_legend=1 00:07:35.676 --rc geninfo_all_blocks=1 00:07:35.676 --rc geninfo_unexecuted_blocks=1 00:07:35.676 00:07:35.676 ' 00:07:35.676 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:35.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.677 --rc genhtml_branch_coverage=1 00:07:35.677 --rc genhtml_function_coverage=1 00:07:35.677 --rc genhtml_legend=1 00:07:35.677 --rc geninfo_all_blocks=1 00:07:35.677 --rc geninfo_unexecuted_blocks=1 00:07:35.677 00:07:35.677 ' 00:07:35.677 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:35.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.677 --rc genhtml_branch_coverage=1 00:07:35.677 --rc genhtml_function_coverage=1 00:07:35.677 --rc genhtml_legend=1 00:07:35.677 --rc geninfo_all_blocks=1 00:07:35.677 --rc geninfo_unexecuted_blocks=1 00:07:35.677 00:07:35.677 ' 00:07:35.677 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:35.677 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.677 --rc genhtml_branch_coverage=1 00:07:35.677 --rc genhtml_function_coverage=1 00:07:35.677 --rc genhtml_legend=1 00:07:35.677 --rc geninfo_all_blocks=1 00:07:35.677 --rc geninfo_unexecuted_blocks=1 00:07:35.677 00:07:35.677 ' 00:07:35.677 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:35.677 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:35.677 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:35.677 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:35.677 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:35.677 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:35.677 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:35.677 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:35.677 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:35.677 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:35.677 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:35.677 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:35.677 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:35.677 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:35.677 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:35.677 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:35.677 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:35.677 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:35.677 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:35.677 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:07:35.677 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:35.677 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:35.677 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:35.677 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.677 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.677 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.677 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:35.677 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.677 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:07:35.677 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:35.677 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:35.677 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:35.677 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:35.677 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:35.677 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:35.677 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:35.677 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:35.677 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:35.677 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:35.677 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:07:35.677 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:35.677 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:35.677 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:35.677 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:35.677 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:35.677 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:35.677 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:35.677 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:35.677 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:35.677 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:35.677 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:07:35.677 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:37.582 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:37.582 
11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:37.582 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:37.582 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:37.582 Found net devices under 0000:0a:00.1: cvl_0_1 
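The trace above maps each supported NIC function (the two Intel 0x159b devices at 0000:0a:00.0 and 0000:0a:00.1) to its kernel net device (cvl_0_0 and cvl_0_1) by globbing /sys/bus/pci/devices/<pci>/net/*. A minimal standalone sketch of that sysfs lookup, assuming the same two PCI addresses shown in this log (this is not the exact common.sh code, just the relationship it walks):
  # Hedged sketch: list the net devices sitting behind each PCI function.
  for pci in 0000:0a:00.0 0000:0a:00.1; do          # PCI addresses taken from the log above
      for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
          [[ -e "$netdir" ]] || continue            # skip functions with no bound network driver
          echo "Found net devices under $pci: ${netdir##*/}"
      done
  done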
00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:37.582 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:37.582 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:37.582 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:07:37.582 00:07:37.582 --- 10.0.0.2 ping statistics --- 00:07:37.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:37.583 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:07:37.583 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:37.583 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:37.583 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:07:37.583 00:07:37.583 --- 10.0.0.1 ping statistics --- 00:07:37.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:37.583 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:07:37.583 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:37.583 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:07:37.583 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:37.583 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:37.583 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:37.583 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:37.583 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:37.583 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:37.583 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:37.583 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:37.583 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:37.583 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:37.583 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:37.583 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3703261 00:07:37.583 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:37.583 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3703261 00:07:37.583 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # '[' -z 3703261 ']' 00:07:37.583 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.583 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:37.583 11:19:37 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:37.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:37.583 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:37.583 11:19:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:37.583 [2024-11-02 11:19:37.954986] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:07:37.583 [2024-11-02 11:19:37.955060] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:37.845 [2024-11-02 11:19:38.034753] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:37.845 [2024-11-02 11:19:38.082180] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:37.845 [2024-11-02 11:19:38.082247] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:37.845 [2024-11-02 11:19:38.082268] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:37.845 [2024-11-02 11:19:38.082280] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:37.845 [2024-11-02 11:19:38.082289] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:37.845 [2024-11-02 11:19:38.083737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:37.845 [2024-11-02 11:19:38.083742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.845 11:19:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:37.845 11:19:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@866 -- # return 0 00:07:37.845 11:19:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:37.845 11:19:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:37.845 11:19:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:37.845 11:19:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:37.845 11:19:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:37.845 11:19:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.845 11:19:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:37.845 [2024-11-02 11:19:38.233429] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:37.845 11:19:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.845 11:19:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:37.845 11:19:38 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.845 11:19:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:37.845 11:19:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.845 11:19:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:37.845 11:19:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.845 11:19:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:38.102 [2024-11-02 11:19:38.249789] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:38.102 11:19:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.102 11:19:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:38.102 11:19:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.102 11:19:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:38.102 NULL1 00:07:38.102 11:19:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.102 11:19:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:38.102 11:19:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.102 11:19:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:38.102 Delay0 00:07:38.102 11:19:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.102 11:19:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:38.102 11:19:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.102 11:19:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:38.102 11:19:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.102 11:19:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3703390 00:07:38.102 11:19:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:38.102 11:19:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:38.102 [2024-11-02 11:19:38.334465] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
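The rpc_cmd calls traced above (delete_subsystem.sh lines 15-28) build everything this test exercises: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420, a null bdev wrapped in a delay bdev as its namespace, and a perf initiator pinned to cores 2-3 that is then pulled out from under by nvmf_delete_subsystem. A hedged, standalone sketch of the same sequence, assuming it is run from the SPDK repository root with an nvmf_tgt already started inside cvl_0_0_ns_spdk and serving RPCs on /var/tmp/spdk.sock (rpc_cmd in these tests wraps scripts/rpc.py); all flags are copied from the trace above:
  RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"
  $RPC nvmf_create_transport -t tcp -o -u 8192                     # transport options as traced
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC bdev_null_create NULL1 1000 512                             # backing null bdev
  $RPC bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  # Drive I/O from the initiator side, then delete the subsystem underneath it:
  ./build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  sleep 2
  $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
The delay bdev keeps I/O queued long enough that the deletion races with outstanding requests, which is what produces the aborted-command storm in the perf output that follows.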
00:07:40.006 11:19:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:40.006 11:19:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.006 11:19:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:40.264 Read completed with error (sct=0, sc=8) 00:07:40.264 Write completed with error (sct=0, sc=8) 00:07:40.264 Write completed with error (sct=0, sc=8) 00:07:40.264 Read completed with error (sct=0, sc=8) 00:07:40.264 starting I/O failed: -6 00:07:40.264 Read completed with error (sct=0, sc=8) 00:07:40.264 Write completed with error (sct=0, sc=8) 00:07:40.264 Read completed with error (sct=0, sc=8) 00:07:40.264 Write completed with error (sct=0, sc=8) 00:07:40.264 starting I/O failed: -6 00:07:40.264 Write completed with error (sct=0, sc=8) 00:07:40.264 Read completed with error (sct=0, sc=8) 00:07:40.264 Read completed with error (sct=0, sc=8) 00:07:40.264 Read completed with error (sct=0, sc=8) 00:07:40.264 starting I/O failed: -6 00:07:40.264 Read completed with error (sct=0, sc=8) 00:07:40.264 Write completed with error (sct=0, sc=8) 00:07:40.264 Read completed with error (sct=0, sc=8) 00:07:40.264 Read completed with error (sct=0, sc=8) 00:07:40.264 starting I/O failed: -6 00:07:40.264 Read completed with error (sct=0, sc=8) 00:07:40.264 Read completed with error (sct=0, sc=8) 00:07:40.264 Read completed with error (sct=0, sc=8) 00:07:40.264 Write completed with error (sct=0, sc=8) 00:07:40.264 starting I/O failed: -6 00:07:40.264 Read completed with error (sct=0, sc=8) 00:07:40.264 Read completed with error (sct=0, sc=8) 00:07:40.264 Write completed with error (sct=0, sc=8) 00:07:40.264 Read completed with error (sct=0, sc=8) 00:07:40.264 starting I/O failed: -6 00:07:40.264 Write completed with error (sct=0, sc=8) 00:07:40.264 Read completed with error (sct=0, sc=8) 00:07:40.264 Read completed with error (sct=0, sc=8) 00:07:40.264 Write completed with error (sct=0, sc=8) 00:07:40.264 starting I/O failed: -6 00:07:40.264 Write completed with error (sct=0, sc=8) 00:07:40.264 Read completed with error (sct=0, sc=8) 00:07:40.264 Read completed with error (sct=0, sc=8) 00:07:40.264 Read completed with error (sct=0, sc=8) 00:07:40.264 starting I/O failed: -6 00:07:40.264 Read completed with error (sct=0, sc=8) 00:07:40.264 Read completed with error (sct=0, sc=8) 00:07:40.264 Read completed with error (sct=0, sc=8) 00:07:40.264 Read completed with error (sct=0, sc=8) 00:07:40.264 starting I/O failed: -6 00:07:40.264 Read completed with error (sct=0, sc=8) 00:07:40.264 Read completed with error (sct=0, sc=8) 00:07:40.264 Write completed with error (sct=0, sc=8) 00:07:40.264 Write completed with error (sct=0, sc=8) 00:07:40.264 starting I/O failed: -6 00:07:40.264 Read completed with error (sct=0, sc=8) 00:07:40.264 Read completed with error (sct=0, sc=8) 00:07:40.264 Read completed with error (sct=0, sc=8) 00:07:40.264 Read completed with error (sct=0, sc=8) 00:07:40.264 starting I/O failed: -6 00:07:40.264 Read completed with error (sct=0, sc=8) 00:07:40.264 Read completed with error (sct=0, sc=8) 00:07:40.264 Write completed with error (sct=0, sc=8) 00:07:40.264 Write completed with error (sct=0, sc=8) 00:07:40.264 starting I/O failed: -6 00:07:40.264 [2024-11-02 11:19:40.546185] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f92bc00d470 is same 
with the state(6) to be set 00:07:40.264 Read completed with error (sct=0, sc=8) 00:07:40.264 Write completed with error (sct=0, sc=8) 00:07:40.264 Write completed with error (sct=0, sc=8) 00:07:40.264 starting I/O failed: -6 00:07:40.264 Read completed with error (sct=0, sc=8) 00:07:40.264 Read completed with error (sct=0, sc=8) 00:07:40.264 Read completed with error (sct=0, sc=8) 00:07:40.264 Read completed with error (sct=0, sc=8) 00:07:40.264 Write completed with error (sct=0, sc=8) 00:07:40.264 Read completed with error (sct=0, sc=8) 00:07:40.264 Write completed with error (sct=0, sc=8) 00:07:40.264 Write completed with error (sct=0, sc=8) 00:07:40.264 Read completed with error (sct=0, sc=8) 00:07:40.264 Read completed with error (sct=0, sc=8) 00:07:40.264 starting I/O failed: -6 00:07:40.264 Write completed with error (sct=0, sc=8) 00:07:40.264 Read completed with error (sct=0, sc=8) 00:07:40.264 Write completed with error (sct=0, sc=8) 00:07:40.264 Read completed with error (sct=0, sc=8) 00:07:40.264 Read completed with error (sct=0, sc=8) 00:07:40.264 Read completed with error (sct=0, sc=8) 00:07:40.264 Write completed with error (sct=0, sc=8) 00:07:40.264 Write completed with error (sct=0, sc=8) 00:07:40.264 Read completed with error (sct=0, sc=8) 00:07:40.264 Read completed with error (sct=0, sc=8) 00:07:40.264 Read completed with error (sct=0, sc=8) 00:07:40.264 Read completed with error (sct=0, sc=8) 00:07:40.264 starting I/O failed: -6 00:07:40.264 Read completed with error (sct=0, sc=8) 00:07:40.264 Read completed with error (sct=0, sc=8) 00:07:40.264 Read completed with error (sct=0, sc=8) 00:07:40.264 Read completed with error (sct=0, sc=8) 00:07:40.264 Read completed with error (sct=0, sc=8) 00:07:40.265 Write completed with error (sct=0, sc=8) 00:07:40.265 Write completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Write completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 starting I/O failed: -6 00:07:40.265 Write completed with error (sct=0, sc=8) 00:07:40.265 Write completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Write completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Write completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Write completed with error (sct=0, sc=8) 00:07:40.265 starting I/O failed: -6 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Write completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Write completed with error (sct=0, sc=8) 00:07:40.265 Write completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Write completed with error (sct=0, sc=8) 00:07:40.265 Write completed with error (sct=0, sc=8) 00:07:40.265 starting I/O failed: -6 00:07:40.265 Write completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, 
sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Write completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 starting I/O failed: -6 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Write completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 starting I/O failed: -6 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Write completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Write completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 starting I/O failed: -6 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 starting I/O failed: -6 00:07:40.265 Write completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 starting I/O failed: -6 00:07:40.265 [2024-11-02 11:19:40.546970] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a69150 is same with the state(6) to be set 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Write completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Write completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Write completed with error (sct=0, sc=8) 00:07:40.265 Write completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 
00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Write completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Write completed with error (sct=0, sc=8) 00:07:40.265 Write completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Write completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Write completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Write completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Write completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Write completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:40.265 Read completed with error (sct=0, sc=8) 00:07:41.199 [2024-11-02 11:19:41.512579] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67190 is same with the state(6) to be set 00:07:41.199 Read completed with error (sct=0, sc=8) 00:07:41.199 Read completed with error (sct=0, sc=8) 00:07:41.199 Write completed with error (sct=0, sc=8) 00:07:41.199 Read completed with error (sct=0, sc=8) 00:07:41.199 Read completed with error (sct=0, sc=8) 00:07:41.199 Read completed with error (sct=0, sc=8) 00:07:41.199 Read completed with error (sct=0, sc=8) 00:07:41.199 Write completed with error (sct=0, sc=8) 00:07:41.199 Read completed with error (sct=0, sc=8) 00:07:41.199 Read completed with error (sct=0, sc=8) 00:07:41.199 Read completed with error (sct=0, sc=8) 00:07:41.199 Read completed with error (sct=0, sc=8) 00:07:41.199 Read completed with error (sct=0, sc=8) 00:07:41.199 Read completed with error (sct=0, sc=8) 00:07:41.199 Read completed with error (sct=0, sc=8) 00:07:41.199 Read completed with error (sct=0, sc=8) 00:07:41.199 Read completed with error (sct=0, sc=8) 00:07:41.199 Write completed with error (sct=0, sc=8) 00:07:41.199 Write completed with error (sct=0, sc=8) 00:07:41.199 Read completed with error (sct=0, sc=8) 00:07:41.199 Read completed with error (sct=0, sc=8) 00:07:41.199 [2024-11-02 11:19:41.548283] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a68f70 is same with the state(6) to be set 00:07:41.199 Read completed with error (sct=0, sc=8) 00:07:41.199 Write completed with error (sct=0, sc=8) 00:07:41.199 Read completed with error (sct=0, sc=8) 00:07:41.199 Read completed with error (sct=0, sc=8) 00:07:41.199 Write completed with error (sct=0, sc=8) 00:07:41.199 Read completed with error (sct=0, sc=8) 00:07:41.199 Read completed with error (sct=0, sc=8) 00:07:41.199 Write 
completed with error (sct=0, sc=8) 00:07:41.199 Read completed with error (sct=0, sc=8) 00:07:41.199 Read completed with error (sct=0, sc=8) 00:07:41.199 Write completed with error (sct=0, sc=8) 00:07:41.199 Write completed with error (sct=0, sc=8) 00:07:41.199 Read completed with error (sct=0, sc=8) 00:07:41.199 Read completed with error (sct=0, sc=8) 00:07:41.199 Read completed with error (sct=0, sc=8) 00:07:41.199 Read completed with error (sct=0, sc=8) 00:07:41.199 Read completed with error (sct=0, sc=8) 00:07:41.199 Read completed with error (sct=0, sc=8) 00:07:41.199 Read completed with error (sct=0, sc=8) 00:07:41.199 Write completed with error (sct=0, sc=8) 00:07:41.199 [2024-11-02 11:19:41.548448] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a69330 is same with the state(6) to be set 00:07:41.199 Write completed with error (sct=0, sc=8) 00:07:41.199 Read completed with error (sct=0, sc=8) 00:07:41.199 Write completed with error (sct=0, sc=8) 00:07:41.199 Read completed with error (sct=0, sc=8) 00:07:41.199 Write completed with error (sct=0, sc=8) 00:07:41.199 Read completed with error (sct=0, sc=8) 00:07:41.199 Write completed with error (sct=0, sc=8) 00:07:41.199 Read completed with error (sct=0, sc=8) 00:07:41.199 Read completed with error (sct=0, sc=8) 00:07:41.199 Read completed with error (sct=0, sc=8) 00:07:41.199 Read completed with error (sct=0, sc=8) 00:07:41.199 Write completed with error (sct=0, sc=8) 00:07:41.199 Read completed with error (sct=0, sc=8) 00:07:41.199 Read completed with error (sct=0, sc=8) 00:07:41.199 Write completed with error (sct=0, sc=8) 00:07:41.199 Read completed with error (sct=0, sc=8) 00:07:41.199 Write completed with error (sct=0, sc=8) 00:07:41.199 Read completed with error (sct=0, sc=8) 00:07:41.199 Read completed with error (sct=0, sc=8) 00:07:41.199 Read completed with error (sct=0, sc=8) 00:07:41.199 Write completed with error (sct=0, sc=8) 00:07:41.199 Read completed with error (sct=0, sc=8) 00:07:41.199 Read completed with error (sct=0, sc=8) 00:07:41.199 Write completed with error (sct=0, sc=8) 00:07:41.199 Read completed with error (sct=0, sc=8) 00:07:41.199 Read completed with error (sct=0, sc=8) 00:07:41.199 Read completed with error (sct=0, sc=8) 00:07:41.199 Read completed with error (sct=0, sc=8) 00:07:41.199 [2024-11-02 11:19:41.548897] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f92bc00cfe0 is same with the state(6) to be set 00:07:41.199 Read completed with error (sct=0, sc=8) 00:07:41.199 Read completed with error (sct=0, sc=8) 00:07:41.199 Read completed with error (sct=0, sc=8) 00:07:41.199 Write completed with error (sct=0, sc=8) 00:07:41.199 Read completed with error (sct=0, sc=8) 00:07:41.199 Write completed with error (sct=0, sc=8) 00:07:41.199 Read completed with error (sct=0, sc=8) 00:07:41.199 Read completed with error (sct=0, sc=8) 00:07:41.199 Read completed with error (sct=0, sc=8) 00:07:41.199 Read completed with error (sct=0, sc=8) 00:07:41.199 Write completed with error (sct=0, sc=8) 00:07:41.199 Read completed with error (sct=0, sc=8) 00:07:41.199 Read completed with error (sct=0, sc=8) 00:07:41.199 Read completed with error (sct=0, sc=8) 00:07:41.199 Write completed with error (sct=0, sc=8) 00:07:41.199 Read completed with error (sct=0, sc=8) 00:07:41.199 Read completed with error (sct=0, sc=8) 00:07:41.199 Read completed with error (sct=0, sc=8) 00:07:41.199 Read completed with error (sct=0, sc=8) 00:07:41.199 Write completed with 
error (sct=0, sc=8) 00:07:41.199 Read completed with error (sct=0, sc=8) 00:07:41.199 Write completed with error (sct=0, sc=8) 00:07:41.199 Read completed with error (sct=0, sc=8) 00:07:41.199 Read completed with error (sct=0, sc=8) 00:07:41.199 Read completed with error (sct=0, sc=8) 00:07:41.199 Read completed with error (sct=0, sc=8) 00:07:41.199 Read completed with error (sct=0, sc=8) 00:07:41.199 [2024-11-02 11:19:41.549101] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f92bc00d7a0 is same with the state(6) to be set 00:07:41.199 Initializing NVMe Controllers 00:07:41.199 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:41.199 Controller IO queue size 128, less than required. 00:07:41.199 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:41.199 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:41.199 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:41.199 Initialization complete. Launching workers. 00:07:41.199 ======================================================== 00:07:41.199 Latency(us) 00:07:41.199 Device Information : IOPS MiB/s Average min max 00:07:41.199 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 162.26 0.08 914362.82 464.12 1012997.73 00:07:41.199 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 176.15 0.09 881568.84 705.82 1012133.07 00:07:41.199 ======================================================== 00:07:41.199 Total : 338.40 0.17 897292.64 464.12 1012997.73 00:07:41.199 00:07:41.199 [2024-11-02 11:19:41.549993] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a67190 (9): Bad file descriptor 00:07:41.199 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:07:41.199 11:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.199 11:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:07:41.199 11:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3703390 00:07:41.199 11:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:07:41.765 11:19:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:07:41.765 11:19:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3703390 00:07:41.765 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3703390) - No such process 00:07:41.765 11:19:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3703390 00:07:41.765 11:19:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:07:41.765 11:19:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3703390 00:07:41.765 11:19:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:07:41.765 11:19:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:41.765 11:19:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@642 -- # type -t wait 00:07:41.765 11:19:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:41.765 11:19:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 3703390 00:07:41.765 11:19:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:07:41.765 11:19:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:41.765 11:19:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:41.765 11:19:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:41.765 11:19:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:41.765 11:19:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.765 11:19:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:41.765 11:19:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.765 11:19:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:41.765 11:19:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.765 11:19:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:41.765 [2024-11-02 11:19:42.074837] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:41.765 11:19:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.765 11:19:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:41.765 11:19:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.765 11:19:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:41.765 11:19:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.765 11:19:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3703794 00:07:41.765 11:19:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:41.765 11:19:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:41.765 11:19:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3703794 00:07:41.765 11:19:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:41.765 [2024-11-02 11:19:42.137959] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:07:42.331 11:19:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:42.331 11:19:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3703794 00:07:42.331 11:19:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:42.896 11:19:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:42.896 11:19:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3703794 00:07:42.896 11:19:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:43.462 11:19:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:43.462 11:19:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3703794 00:07:43.462 11:19:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:43.719 11:19:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:43.719 11:19:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3703794 00:07:43.719 11:19:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:44.284 11:19:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:44.284 11:19:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3703794 00:07:44.284 11:19:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:44.850 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:44.850 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3703794 00:07:44.850 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:45.107 Initializing NVMe Controllers 00:07:45.107 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:45.107 Controller IO queue size 128, less than required. 00:07:45.107 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:45.107 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:45.107 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:45.107 Initialization complete. Launching workers. 
00:07:45.107 ======================================================== 00:07:45.107 Latency(us) 00:07:45.107 Device Information : IOPS MiB/s Average min max 00:07:45.107 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003329.98 1000193.58 1041445.94 00:07:45.107 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005522.42 1000189.54 1011580.97 00:07:45.107 ======================================================== 00:07:45.107 Total : 256.00 0.12 1004426.20 1000189.54 1041445.94 00:07:45.107 00:07:45.366 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:45.366 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3703794 00:07:45.366 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3703794) - No such process 00:07:45.366 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3703794 00:07:45.366 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:45.366 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:07:45.366 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:45.366 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:07:45.366 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:45.366 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:07:45.366 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:45.366 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:45.366 rmmod nvme_tcp 00:07:45.366 rmmod nvme_fabrics 00:07:45.366 rmmod nvme_keyring 00:07:45.366 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:45.366 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:07:45.366 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:07:45.366 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3703261 ']' 00:07:45.366 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3703261 00:07:45.366 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' -z 3703261 ']' 00:07:45.366 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # kill -0 3703261 00:07:45.366 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # uname 00:07:45.366 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:45.366 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3703261 00:07:45.366 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:45.366 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@962 -- # '[' 
reactor_0 = sudo ']' 00:07:45.366 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3703261' 00:07:45.366 killing process with pid 3703261 00:07:45.366 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # kill 3703261 00:07:45.366 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@976 -- # wait 3703261 00:07:45.625 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:45.625 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:45.625 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:45.625 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:07:45.625 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:07:45.625 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:45.625 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:07:45.625 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:45.625 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:45.625 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:45.625 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:45.625 11:19:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:48.162 11:19:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:48.162 00:07:48.162 real 0m12.396s 00:07:48.162 user 0m28.104s 00:07:48.162 sys 0m2.899s 00:07:48.162 11:19:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:48.162 11:19:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:48.162 ************************************ 00:07:48.162 END TEST nvmf_delete_subsystem 00:07:48.162 ************************************ 00:07:48.162 11:19:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:48.162 11:19:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:48.162 11:19:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:48.162 11:19:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:48.162 ************************************ 00:07:48.162 START TEST nvmf_host_management 00:07:48.162 ************************************ 00:07:48.162 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:48.162 * Looking for test storage... 
00:07:48.162 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:48.162 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:48.162 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:07:48.162 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:48.162 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:48.162 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:48.162 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:48.162 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:48.162 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:48.162 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:48.162 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:48.162 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:48.162 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:48.162 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:48.162 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:48.162 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:48.162 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:48.162 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:48.162 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:48.162 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:48.162 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:48.162 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:48.162 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:48.162 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:48.162 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:48.162 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:48.162 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:48.162 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:48.162 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:48.162 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:48.162 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:48.162 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:48.162 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:48.162 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:48.162 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:48.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.162 --rc genhtml_branch_coverage=1 00:07:48.162 --rc genhtml_function_coverage=1 00:07:48.162 --rc genhtml_legend=1 00:07:48.162 --rc geninfo_all_blocks=1 00:07:48.162 --rc geninfo_unexecuted_blocks=1 00:07:48.162 00:07:48.162 ' 00:07:48.162 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:48.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.162 --rc genhtml_branch_coverage=1 00:07:48.162 --rc genhtml_function_coverage=1 00:07:48.162 --rc genhtml_legend=1 00:07:48.162 --rc geninfo_all_blocks=1 00:07:48.162 --rc geninfo_unexecuted_blocks=1 00:07:48.162 00:07:48.162 ' 00:07:48.162 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:48.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.162 --rc genhtml_branch_coverage=1 00:07:48.162 --rc genhtml_function_coverage=1 00:07:48.162 --rc genhtml_legend=1 00:07:48.162 --rc geninfo_all_blocks=1 00:07:48.162 --rc geninfo_unexecuted_blocks=1 00:07:48.162 00:07:48.162 ' 00:07:48.162 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:48.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.162 --rc genhtml_branch_coverage=1 00:07:48.162 --rc genhtml_function_coverage=1 00:07:48.162 --rc genhtml_legend=1 00:07:48.162 --rc geninfo_all_blocks=1 00:07:48.162 --rc geninfo_unexecuted_blocks=1 00:07:48.162 00:07:48.162 ' 00:07:48.162 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:48.162 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:48.162 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:48.162 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:48.162 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:48.163 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:48.163 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:48.163 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:48.163 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:48.163 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:48.163 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:48.163 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:48.163 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:48.163 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:48.163 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:48.163 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:48.163 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:48.163 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:48.163 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:48.163 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:48.163 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:48.163 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:48.163 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:48.163 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.163 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.163 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.163 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:48.163 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.163 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:48.163 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:48.163 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:48.163 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:48.163 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:48.163 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:48.163 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:07:48.163 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:48.163 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:48.163 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:48.163 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:48.163 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:48.163 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:48.163 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:48.163 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:48.163 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:48.163 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:48.163 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:48.163 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:48.163 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:48.163 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:48.163 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:48.163 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:48.163 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:48.163 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:07:48.163 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:50.072 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:50.072 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:07:50.072 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:50.072 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:50.073 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:50.073 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:50.073 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:50.073 11:19:50 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:50.073 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:50.073 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:50.073 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.338 ms 00:07:50.073 00:07:50.073 --- 10.0.0.2 ping statistics --- 00:07:50.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:50.073 rtt min/avg/max/mdev = 0.338/0.338/0.338/0.000 ms 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:50.073 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:50.073 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:07:50.073 00:07:50.073 --- 10.0.0.1 ping statistics --- 00:07:50.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:50.073 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:07:50.073 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:50.074 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:07:50.074 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:50.074 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:50.074 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:50.074 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:50.074 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:50.074 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:50.074 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:50.074 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:50.074 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:50.074 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:50.074 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:50.074 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:50.074 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:50.074 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3706160 00:07:50.074 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:50.074 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3706160 00:07:50.074 11:19:50 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 3706160 ']' 00:07:50.074 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.074 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:50.074 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:50.074 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:50.074 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:50.074 [2024-11-02 11:19:50.377858] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:07:50.074 [2024-11-02 11:19:50.377960] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:50.074 [2024-11-02 11:19:50.469254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:50.333 [2024-11-02 11:19:50.521512] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:50.333 [2024-11-02 11:19:50.521586] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:50.333 [2024-11-02 11:19:50.521603] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:50.333 [2024-11-02 11:19:50.521616] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:50.333 [2024-11-02 11:19:50.521629] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
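[editor's note] The trace above shows the test script launching nvmf_tgt inside the cvl_0_0_ns_spdk namespace with core mask 0x1E and then blocking in waitforlisten until the target's RPC socket at /var/tmp/spdk.sock is ready. The snippet below is a minimal, hypothetical sketch of that readiness-polling pattern only; it is not SPDK's actual waitforlisten helper, and the function name, socket path default, and retry count are assumptions for illustration.

```bash
# Hypothetical sketch of the readiness wait seen in the trace above:
# poll until the target process has created its RPC socket, or give up.
# Not the SPDK waitforlisten implementation; names and timeouts are assumed.
wait_for_rpc_socket() {
    local pid=$1
    local sock=${2:-/var/tmp/spdk.sock}   # assumed default RPC socket path
    local retries=${3:-100}

    while (( retries-- > 0 )); do
        # bail out early if the target process died before listening
        kill -0 "$pid" 2>/dev/null || return 1
        # socket file present => target is accepting RPC connections
        [[ -S $sock ]] && return 0
        sleep 0.1
    done
    return 1
}

# usage: wait_for_rpc_socket "$nvmfpid" /var/tmp/spdk.sock || exit 1
```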
00:07:50.333 [2024-11-02 11:19:50.523358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:50.333 [2024-11-02 11:19:50.523435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:50.333 [2024-11-02 11:19:50.523485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:50.333 [2024-11-02 11:19:50.523489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:50.333 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:50.333 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:07:50.333 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:50.333 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:50.333 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:50.333 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:50.333 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:50.333 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.333 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:50.333 [2024-11-02 11:19:50.666963] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:50.333 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.333 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:50.333 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:50.333 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:50.333 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:50.333 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:50.333 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:50.333 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.333 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:50.333 Malloc0 00:07:50.591 [2024-11-02 11:19:50.744077] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:50.591 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.591 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:50.592 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:50.592 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:50.592 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=3706318 00:07:50.592 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3706318 /var/tmp/bdevperf.sock 00:07:50.592 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 3706318 ']' 00:07:50.592 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:50.592 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:50.592 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:50.592 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:50.592 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:50.592 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:50.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:50.592 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:50.592 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:50.592 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:50.592 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:50.592 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:50.592 { 00:07:50.592 "params": { 00:07:50.592 "name": "Nvme$subsystem", 00:07:50.592 "trtype": "$TEST_TRANSPORT", 00:07:50.592 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:50.592 "adrfam": "ipv4", 00:07:50.592 "trsvcid": "$NVMF_PORT", 00:07:50.592 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:50.592 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:50.592 "hdgst": ${hdgst:-false}, 00:07:50.592 "ddgst": ${ddgst:-false} 00:07:50.592 }, 00:07:50.592 "method": "bdev_nvme_attach_controller" 00:07:50.592 } 00:07:50.592 EOF 00:07:50.592 )") 00:07:50.592 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:50.592 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:50.592 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:50.592 11:19:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:50.592 "params": { 00:07:50.592 "name": "Nvme0", 00:07:50.592 "trtype": "tcp", 00:07:50.592 "traddr": "10.0.0.2", 00:07:50.592 "adrfam": "ipv4", 00:07:50.592 "trsvcid": "4420", 00:07:50.592 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:50.592 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:50.592 "hdgst": false, 00:07:50.592 "ddgst": false 00:07:50.592 }, 00:07:50.592 "method": "bdev_nvme_attach_controller" 00:07:50.592 }' 00:07:50.592 [2024-11-02 11:19:50.826556] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 
00:07:50.592 [2024-11-02 11:19:50.826639] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3706318 ] 00:07:50.592 [2024-11-02 11:19:50.896038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.592 [2024-11-02 11:19:50.943098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.850 Running I/O for 10 seconds... 00:07:50.850 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:50.850 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:07:50.850 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:50.850 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.850 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:50.850 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.850 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:50.850 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:50.850 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:50.850 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:50.850 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:50.850 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:50.850 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:50.850 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:50.850 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:50.850 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:50.850 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.850 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:50.850 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.850 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:07:50.850 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:07:50.850 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:07:51.108 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:07:51.108 
11:19:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:51.108 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:51.108 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:51.108 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.108 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:51.108 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.368 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:07:51.368 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:07:51.368 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:51.368 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:51.368 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:51.368 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:51.368 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.368 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:51.368 [2024-11-02 11:19:51.522684] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7825b0 is same with the state(6) to be set 00:07:51.368 [2024-11-02 11:19:51.522736] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7825b0 is same with the state(6) to be set 00:07:51.368 [2024-11-02 11:19:51.522753] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7825b0 is same with the state(6) to be set 00:07:51.368 [2024-11-02 11:19:51.522767] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7825b0 is same with the state(6) to be set 00:07:51.368 [2024-11-02 11:19:51.522781] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7825b0 is same with the state(6) to be set 00:07:51.368 [2024-11-02 11:19:51.522795] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7825b0 is same with the state(6) to be set 00:07:51.368 [2024-11-02 11:19:51.522808] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7825b0 is same with the state(6) to be set 00:07:51.368 [2024-11-02 11:19:51.522821] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7825b0 is same with the state(6) to be set 00:07:51.368 [2024-11-02 11:19:51.522834] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7825b0 is same with the state(6) to be set 00:07:51.368 [2024-11-02 11:19:51.522847] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7825b0 is same with the state(6) to be set 00:07:51.368 [2024-11-02 11:19:51.522860] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7825b0 is same with 
the state(6) to be set 00:07:51.368 [2024-11-02 11:19:51.522882] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7825b0 is same with the state(6) to be set 00:07:51.368 [2024-11-02 11:19:51.522896] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7825b0 is same with the state(6) to be set 00:07:51.368 [2024-11-02 11:19:51.522908] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7825b0 is same with the state(6) to be set 00:07:51.368 [2024-11-02 11:19:51.522920] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7825b0 is same with the state(6) to be set 00:07:51.368 [2024-11-02 11:19:51.522932] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7825b0 is same with the state(6) to be set 00:07:51.368 [2024-11-02 11:19:51.522944] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7825b0 is same with the state(6) to be set 00:07:51.368 [2024-11-02 11:19:51.522956] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7825b0 is same with the state(6) to be set 00:07:51.368 [2024-11-02 11:19:51.522968] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7825b0 is same with the state(6) to be set 00:07:51.368 [2024-11-02 11:19:51.522980] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7825b0 is same with the state(6) to be set 00:07:51.368 [2024-11-02 11:19:51.522992] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7825b0 is same with the state(6) to be set 00:07:51.368 [2024-11-02 11:19:51.523004] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7825b0 is same with the state(6) to be set 00:07:51.368 [2024-11-02 11:19:51.523017] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7825b0 is same with the state(6) to be set 00:07:51.368 [2024-11-02 11:19:51.523029] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7825b0 is same with the state(6) to be set 00:07:51.368 [2024-11-02 11:19:51.523041] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7825b0 is same with the state(6) to be set 00:07:51.368 [2024-11-02 11:19:51.523053] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7825b0 is same with the state(6) to be set 00:07:51.368 [2024-11-02 11:19:51.523065] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7825b0 is same with the state(6) to be set 00:07:51.368 [2024-11-02 11:19:51.523077] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7825b0 is same with the state(6) to be set 00:07:51.368 [2024-11-02 11:19:51.523089] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7825b0 is same with the state(6) to be set 00:07:51.368 [2024-11-02 11:19:51.523101] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7825b0 is same with the state(6) to be set 00:07:51.369 [2024-11-02 11:19:51.523113] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7825b0 is same with the state(6) to be set 00:07:51.369 [2024-11-02 11:19:51.523125] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7825b0 is same with the state(6) to be set 00:07:51.369 [2024-11-02 11:19:51.523138] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0x7825b0 is same with the state(6) to be set 00:07:51.369 [2024-11-02 11:19:51.523150] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7825b0 is same with the state(6) to be set 00:07:51.369 [2024-11-02 11:19:51.523163] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7825b0 is same with the state(6) to be set 00:07:51.369 [2024-11-02 11:19:51.523175] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7825b0 is same with the state(6) to be set 00:07:51.369 [2024-11-02 11:19:51.523187] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7825b0 is same with the state(6) to be set 00:07:51.369 [2024-11-02 11:19:51.523200] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7825b0 is same with the state(6) to be set 00:07:51.369 [2024-11-02 11:19:51.523215] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7825b0 is same with the state(6) to be set 00:07:51.369 [2024-11-02 11:19:51.523228] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7825b0 is same with the state(6) to be set 00:07:51.369 [2024-11-02 11:19:51.523241] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7825b0 is same with the state(6) to be set 00:07:51.369 [2024-11-02 11:19:51.523253] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7825b0 is same with the state(6) to be set 00:07:51.369 [2024-11-02 11:19:51.523276] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7825b0 is same with the state(6) to be set 00:07:51.369 [2024-11-02 11:19:51.523289] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7825b0 is same with the state(6) to be set 00:07:51.369 [2024-11-02 11:19:51.523309] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7825b0 is same with the state(6) to be set 00:07:51.369 [2024-11-02 11:19:51.523322] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7825b0 is same with the state(6) to be set 00:07:51.369 [2024-11-02 11:19:51.523334] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7825b0 is same with the state(6) to be set 00:07:51.369 [2024-11-02 11:19:51.523346] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7825b0 is same with the state(6) to be set 00:07:51.369 [2024-11-02 11:19:51.523358] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7825b0 is same with the state(6) to be set 00:07:51.369 [2024-11-02 11:19:51.523370] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7825b0 is same with the state(6) to be set 00:07:51.369 [2024-11-02 11:19:51.523383] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7825b0 is same with the state(6) to be set 00:07:51.369 [2024-11-02 11:19:51.523395] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7825b0 is same with the state(6) to be set 00:07:51.369 [2024-11-02 11:19:51.523407] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7825b0 is same with the state(6) to be set 00:07:51.369 [2024-11-02 11:19:51.523419] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7825b0 is same with the state(6) to be set 00:07:51.369 [2024-11-02 11:19:51.523431] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7825b0 is same with the state(6) to be set 00:07:51.369 [2024-11-02 11:19:51.523443] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7825b0 is same with the state(6) to be set 00:07:51.369 [2024-11-02 11:19:51.523456] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7825b0 is same with the state(6) to be set 00:07:51.369 [2024-11-02 11:19:51.523467] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7825b0 is same with the state(6) to be set 00:07:51.369 [2024-11-02 11:19:51.523479] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7825b0 is same with the state(6) to be set 00:07:51.369 [2024-11-02 11:19:51.523491] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7825b0 is same with the state(6) to be set 00:07:51.369 [2024-11-02 11:19:51.523504] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7825b0 is same with the state(6) to be set 00:07:51.369 [2024-11-02 11:19:51.523516] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7825b0 is same with the state(6) to be set 00:07:51.369 [2024-11-02 11:19:51.523528] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7825b0 is same with the state(6) to be set 00:07:51.369 [2024-11-02 11:19:51.524346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.369 [2024-11-02 11:19:51.524437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.369 [2024-11-02 11:19:51.524479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.369 [2024-11-02 11:19:51.524496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.369 [2024-11-02 11:19:51.524512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.369 [2024-11-02 11:19:51.524526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.369 [2024-11-02 11:19:51.524542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.369 [2024-11-02 11:19:51.524567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.369 [2024-11-02 11:19:51.524583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:74240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.369 [2024-11-02 11:19:51.524596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.369 [2024-11-02 11:19:51.524611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.369 [2024-11-02 11:19:51.524624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
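[editor's note] At this point in the trace, the host-management test has polled bdevperf's read counter (bdev_get_iostat piped through jq -r '.bdevs[0].num_read_ops') until it crossed the 100-read threshold, and then issued nvmf_subsystem_remove_host against nqn.2016-06.io.spdk:cnode0 while I/O was still in flight; the burst of "ABORTED - SQ DELETION" completions that follows is the initiator's queue pairs being torn down. The sketch below is a simplified illustration of that poll-then-remove pattern, grounded in the commands visible in the trace; it is not the actual host_management.sh, and the wait_for_reads name is hypothetical.

```bash
# Simplified illustration of the pattern visible in the trace above
# (not the actual host_management.sh). rpc_cmd, bdev_get_iostat, and
# nvmf_subsystem_remove_host are the calls shown in the log; the wrapper
# function and its defaults are assumed for illustration.
wait_for_reads() {
    local sock=$1 min_reads=${2:-100} i=10 reads
    while (( i-- > 0 )); do
        reads=$(rpc_cmd -s "$sock" bdev_get_iostat -b Nvme0n1 \
                | jq -r '.bdevs[0].num_read_ops')
        # stop once bdevperf has completed enough reads against the namespace
        (( ${reads:-0} >= min_reads )) && return 0
        sleep 0.25
    done
    return 1
}

# once I/O is flowing, remove the host so in-flight commands get aborted
wait_for_reads /var/tmp/bdevperf.sock 100 &&
    rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 \
        nqn.2016-06.io.spdk:host0
```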
00:07:51.369 [2024-11-02 11:19:51.524639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:66944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.369 [2024-11-02 11:19:51.524652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.369 [2024-11-02 11:19:51.524667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:67072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.369 [2024-11-02 11:19:51.524680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.369 [2024-11-02 11:19:51.524695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:67200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.369 [2024-11-02 11:19:51.524708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.369 [2024-11-02 11:19:51.524723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:74496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.369 [2024-11-02 11:19:51.524736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.369 [2024-11-02 11:19:51.524751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:67328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.369 [2024-11-02 11:19:51.524764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.369 [2024-11-02 11:19:51.524780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:67456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.369 [2024-11-02 11:19:51.524793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.369 [2024-11-02 11:19:51.524808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:67584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.369 [2024-11-02 11:19:51.524821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.369 [2024-11-02 11:19:51.524836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:67712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.369 [2024-11-02 11:19:51.524854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.369 [2024-11-02 11:19:51.524869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:74624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.369 [2024-11-02 11:19:51.524884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.369 [2024-11-02 11:19:51.524900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:67840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.369 [2024-11-02 11:19:51.524913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.369 [2024-11-02 
11:19:51.524929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:67968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.369 [2024-11-02 11:19:51.524942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.369 [2024-11-02 11:19:51.524956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:68096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.369 [2024-11-02 11:19:51.524970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.369 [2024-11-02 11:19:51.524985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:68224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.369 [2024-11-02 11:19:51.524998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.369 [2024-11-02 11:19:51.525013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:68352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.369 [2024-11-02 11:19:51.525027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.369 [2024-11-02 11:19:51.525042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:68480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.369 [2024-11-02 11:19:51.525055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.369 [2024-11-02 11:19:51.525070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:68608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.369 [2024-11-02 11:19:51.525084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.369 [2024-11-02 11:19:51.525099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:74752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.369 [2024-11-02 11:19:51.525112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.370 [2024-11-02 11:19:51.525127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:74880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.370 [2024-11-02 11:19:51.525139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.370 [2024-11-02 11:19:51.525154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:75008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.370 [2024-11-02 11:19:51.525167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.370 [2024-11-02 11:19:51.525182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:68736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.370 [2024-11-02 11:19:51.525194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.370 [2024-11-02 11:19:51.525213] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:68864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.370 [2024-11-02 11:19:51.525227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.370 [2024-11-02 11:19:51.525242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:68992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.370 [2024-11-02 11:19:51.525267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.370 [2024-11-02 11:19:51.525286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:69120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.370 [2024-11-02 11:19:51.525301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.370 [2024-11-02 11:19:51.525328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:69248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.370 [2024-11-02 11:19:51.525343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.370 [2024-11-02 11:19:51.525359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:69376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.370 [2024-11-02 11:19:51.525372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.370 [2024-11-02 11:19:51.525388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:69504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.370 [2024-11-02 11:19:51.525401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.370 [2024-11-02 11:19:51.525416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:69632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.370 [2024-11-02 11:19:51.525429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.370 [2024-11-02 11:19:51.525444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:69760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.370 [2024-11-02 11:19:51.525457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.370 [2024-11-02 11:19:51.525472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:69888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.370 [2024-11-02 11:19:51.525485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.370 [2024-11-02 11:19:51.525500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:70016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.370 [2024-11-02 11:19:51.525514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.370 [2024-11-02 11:19:51.525529] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:70144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.370 [2024-11-02 11:19:51.525543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.370 [2024-11-02 11:19:51.525563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:70272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.370 [2024-11-02 11:19:51.525576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.370 [2024-11-02 11:19:51.525591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:70400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.370 [2024-11-02 11:19:51.525608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.370 [2024-11-02 11:19:51.525625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:70528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.370 [2024-11-02 11:19:51.525639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.370 [2024-11-02 11:19:51.525654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:70656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.370 [2024-11-02 11:19:51.525668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.370 [2024-11-02 11:19:51.525683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:70784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.370 [2024-11-02 11:19:51.525697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.370 [2024-11-02 11:19:51.525712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:70912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.370 [2024-11-02 11:19:51.525726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.370 [2024-11-02 11:19:51.525742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:71040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.370 [2024-11-02 11:19:51.525756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.370 [2024-11-02 11:19:51.525771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:71168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.370 [2024-11-02 11:19:51.525785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.370 [2024-11-02 11:19:51.525799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:71296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.370 [2024-11-02 11:19:51.525813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.370 [2024-11-02 11:19:51.525828] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:71424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.370 [2024-11-02 11:19:51.525842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.370 [2024-11-02 11:19:51.525857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:71552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.370 [2024-11-02 11:19:51.525871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.370 [2024-11-02 11:19:51.525886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:71680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.370 [2024-11-02 11:19:51.525900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.370 [2024-11-02 11:19:51.525914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:71808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.370 [2024-11-02 11:19:51.525929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.370 [2024-11-02 11:19:51.525944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:71936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.370 [2024-11-02 11:19:51.525959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.370 [2024-11-02 11:19:51.525978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:72064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.370 [2024-11-02 11:19:51.525993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.370 [2024-11-02 11:19:51.526008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:72192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.370 [2024-11-02 11:19:51.526021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.370 [2024-11-02 11:19:51.526037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:72320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.370 [2024-11-02 11:19:51.526052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.370 [2024-11-02 11:19:51.526067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:72448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.370 [2024-11-02 11:19:51.526081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.370 [2024-11-02 11:19:51.526095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:72576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.370 [2024-11-02 11:19:51.526109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.370 [2024-11-02 11:19:51.526124] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:72704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.370 [2024-11-02 11:19:51.526137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.370 [2024-11-02 11:19:51.526152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:72832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.370 [2024-11-02 11:19:51.526165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.370 [2024-11-02 11:19:51.526180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:72960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.370 [2024-11-02 11:19:51.526204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.370 [2024-11-02 11:19:51.526218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:73088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.370 [2024-11-02 11:19:51.526231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.370 [2024-11-02 11:19:51.526247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:73216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.370 [2024-11-02 11:19:51.526269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.371 [2024-11-02 11:19:51.526286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:73344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.371 [2024-11-02 11:19:51.526300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.371 [2024-11-02 11:19:51.526322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:73472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.371 [2024-11-02 11:19:51.526335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.371 [2024-11-02 11:19:51.526350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:73600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.371 [2024-11-02 11:19:51.526368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.371 [2024-11-02 11:19:51.526383] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1178bc0 is same with the state(6) to be set 00:07:51.371 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.371 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:51.371 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.371 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:51.371 [2024-11-02 11:19:51.527629] 
nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:07:51.371 task offset: 73728 on job bdev=Nvme0n1 fails 00:07:51.371 00:07:51.371 Latency(us) 00:07:51.371 [2024-11-02T10:19:51.773Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:51.371 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:51.371 Job: Nvme0n1 ended in about 0.41 seconds with error 00:07:51.371 Verification LBA range: start 0x0 length 0x400 00:07:51.371 Nvme0n1 : 0.41 1285.77 80.36 157.34 0.00 43107.92 7961.41 36894.34 00:07:51.371 [2024-11-02T10:19:51.773Z] =================================================================================================================== 00:07:51.371 [2024-11-02T10:19:51.773Z] Total : 1285.77 80.36 157.34 0.00 43107.92 7961.41 36894.34 00:07:51.371 [2024-11-02 11:19:51.529804] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:51.371 [2024-11-02 11:19:51.529848] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf5f970 (9): Bad file descriptor 00:07:51.371 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.371 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:51.371 [2024-11-02 11:19:51.540925] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:07:52.304 11:19:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3706318 00:07:52.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3706318) - No such process 00:07:52.304 11:19:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:52.304 11:19:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:52.304 11:19:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:52.304 11:19:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:52.304 11:19:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:52.305 11:19:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:52.305 11:19:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:52.305 11:19:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:52.305 { 00:07:52.305 "params": { 00:07:52.305 "name": "Nvme$subsystem", 00:07:52.305 "trtype": "$TEST_TRANSPORT", 00:07:52.305 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:52.305 "adrfam": "ipv4", 00:07:52.305 "trsvcid": "$NVMF_PORT", 00:07:52.305 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:52.305 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:52.305 "hdgst": ${hdgst:-false}, 00:07:52.305 "ddgst": ${ddgst:-false} 00:07:52.305 }, 00:07:52.305 "method": "bdev_nvme_attach_controller" 00:07:52.305 } 00:07:52.305 EOF 00:07:52.305 )") 00:07:52.305 11:19:52 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:52.305 11:19:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:52.305 11:19:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:52.305 11:19:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:52.305 "params": { 00:07:52.305 "name": "Nvme0", 00:07:52.305 "trtype": "tcp", 00:07:52.305 "traddr": "10.0.0.2", 00:07:52.305 "adrfam": "ipv4", 00:07:52.305 "trsvcid": "4420", 00:07:52.305 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:52.305 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:52.305 "hdgst": false, 00:07:52.305 "ddgst": false 00:07:52.305 }, 00:07:52.305 "method": "bdev_nvme_attach_controller" 00:07:52.305 }' 00:07:52.305 [2024-11-02 11:19:52.582116] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:07:52.305 [2024-11-02 11:19:52.582192] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3706484 ] 00:07:52.305 [2024-11-02 11:19:52.651973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.305 [2024-11-02 11:19:52.698330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.563 Running I/O for 1 seconds... 00:07:53.938 1536.00 IOPS, 96.00 MiB/s 00:07:53.938 Latency(us) 00:07:53.938 [2024-11-02T10:19:54.340Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:53.938 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:53.938 Verification LBA range: start 0x0 length 0x400 00:07:53.938 Nvme0n1 : 1.04 1541.38 96.34 0.00 0.00 40881.48 9514.86 35146.71 00:07:53.938 [2024-11-02T10:19:54.340Z] =================================================================================================================== 00:07:53.938 [2024-11-02T10:19:54.340Z] Total : 1541.38 96.34 0.00 0.00 40881.48 9514.86 35146.71 00:07:53.938 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:53.938 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:53.938 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:53.938 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:53.938 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:53.938 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:53.938 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:53.938 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:53.938 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:53.938 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:53.938 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe 
-v -r nvme-tcp 00:07:53.938 rmmod nvme_tcp 00:07:53.938 rmmod nvme_fabrics 00:07:53.938 rmmod nvme_keyring 00:07:53.938 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:53.938 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:53.938 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:53.938 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3706160 ']' 00:07:53.938 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3706160 00:07:53.938 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 3706160 ']' 00:07:53.938 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 3706160 00:07:53.938 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # uname 00:07:53.938 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:53.938 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3706160 00:07:53.938 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:07:53.938 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:07:53.938 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3706160' 00:07:53.938 killing process with pid 3706160 00:07:53.938 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 3706160 00:07:53.938 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 3706160 00:07:54.197 [2024-11-02 11:19:54.466011] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:54.197 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:54.197 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:54.197 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:54.197 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:54.197 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:07:54.197 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:54.197 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:07:54.197 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:54.197 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:54.197 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:54.197 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:54.197 11:19:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:07:56.777 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:56.777 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:56.777 00:07:56.777 real 0m8.548s 00:07:56.777 user 0m18.677s 00:07:56.777 sys 0m2.657s 00:07:56.777 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:56.777 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:56.777 ************************************ 00:07:56.777 END TEST nvmf_host_management 00:07:56.777 ************************************ 00:07:56.777 11:19:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:56.777 11:19:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:56.777 11:19:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:56.777 11:19:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:56.777 ************************************ 00:07:56.777 START TEST nvmf_lvol 00:07:56.777 ************************************ 00:07:56.777 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:56.777 * Looking for test storage... 00:07:56.777 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:56.777 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:56.777 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:07:56.777 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:56.777 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:56.777 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:56.777 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:56.777 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:56.777 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:56.777 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:56.777 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:56.777 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:56.777 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:56.777 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:56.777 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:56.777 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:56.777 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:56.777 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:56.777 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:07:56.777 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:56.777 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:56.777 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:56.777 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:56.777 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:56.777 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:56.777 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:56.777 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:56.777 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:56.777 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:56.777 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:56.777 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:56.777 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:56.777 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:56.777 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:56.777 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:56.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.777 --rc genhtml_branch_coverage=1 00:07:56.777 --rc genhtml_function_coverage=1 00:07:56.777 --rc genhtml_legend=1 00:07:56.777 --rc geninfo_all_blocks=1 00:07:56.777 --rc geninfo_unexecuted_blocks=1 00:07:56.777 00:07:56.777 ' 00:07:56.777 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:56.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.777 --rc genhtml_branch_coverage=1 00:07:56.777 --rc genhtml_function_coverage=1 00:07:56.777 --rc genhtml_legend=1 00:07:56.777 --rc geninfo_all_blocks=1 00:07:56.777 --rc geninfo_unexecuted_blocks=1 00:07:56.777 00:07:56.777 ' 00:07:56.777 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:56.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.777 --rc genhtml_branch_coverage=1 00:07:56.777 --rc genhtml_function_coverage=1 00:07:56.777 --rc genhtml_legend=1 00:07:56.777 --rc geninfo_all_blocks=1 00:07:56.777 --rc geninfo_unexecuted_blocks=1 00:07:56.777 00:07:56.777 ' 00:07:56.777 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:56.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.777 --rc genhtml_branch_coverage=1 00:07:56.777 --rc genhtml_function_coverage=1 00:07:56.777 --rc genhtml_legend=1 00:07:56.777 --rc geninfo_all_blocks=1 00:07:56.777 --rc geninfo_unexecuted_blocks=1 00:07:56.777 00:07:56.777 ' 00:07:56.777 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:56.777 11:19:56 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:56.777 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:56.777 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:56.777 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:56.777 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:56.777 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:56.777 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:56.777 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:56.777 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:56.777 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:56.777 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:56.777 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:56.777 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:56.777 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:56.777 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:56.777 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:56.777 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:56.777 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:56.777 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:56.777 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:56.777 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:56.777 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:56.777 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.777 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.778 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.778 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:56.778 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.778 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:56.778 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:56.778 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:56.778 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:56.778 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:56.778 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:56.778 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:56.778 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:56.778 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:56.778 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:56.778 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:56.778 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:56.778 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:56.778 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:07:56.778 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:56.778 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:56.778 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:56.778 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:56.778 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:56.778 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:56.778 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:56.778 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:56.778 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:56.778 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:56.778 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:56.778 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:56.778 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:56.778 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:07:56.778 11:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:58.704 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:58.704 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:07:58.704 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:58.704 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:58.704 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:58.704 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:58.704 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:58.704 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:07:58.704 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:58.704 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:07:58.704 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:07:58.704 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:07:58.704 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:07:58.704 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:07:58.704 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:07:58.704 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:58.704 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:58.704 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:58.704 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:58.704 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:58.704 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:58.704 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:58.704 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:58.704 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:58.704 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:58.704 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:58.704 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:58.704 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:58.704 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:58.704 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:58.704 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:58.704 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:58.704 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:58.704 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:58.704 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:58.704 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:58.704 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:58.704 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:58.704 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:58.704 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:58.704 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:58.704 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:58.704 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:58.704 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:58.704 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:58.704 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:58.704 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:58.705 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:58.705 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:58.705 11:19:58 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:58.705 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:58.705 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:58.705 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:58.705 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:58.705 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:58.705 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:58.705 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:58.705 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:58.705 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:58.705 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:58.705 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:58.705 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:58.705 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:58.705 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:58.705 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:58.705 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:58.705 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:58.705 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:58.705 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:58.705 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:58.705 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:58.705 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:58.705 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:58.705 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:07:58.705 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:58.705 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:58.705 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:58.705 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:58.705 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:58.705 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:58.705 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:58.705 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:07:58.705 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:58.705 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:58.705 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:58.705 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:58.705 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:58.705 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:58.705 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:58.705 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:58.705 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:58.705 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:58.705 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:58.705 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:58.705 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:58.705 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:58.705 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:58.705 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:58.705 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:58.705 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:58.705 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:58.705 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:07:58.705 00:07:58.705 --- 10.0.0.2 ping statistics --- 00:07:58.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:58.705 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:07:58.705 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:58.705 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:58.705 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.056 ms 00:07:58.705 00:07:58.705 --- 10.0.0.1 ping statistics --- 00:07:58.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:58.705 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:07:58.705 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:58.705 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:07:58.705 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:58.705 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:58.705 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:58.705 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:58.705 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:58.705 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:58.705 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:58.705 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:58.705 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:58.705 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:58.705 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:58.705 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=3708689 00:07:58.705 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:58.705 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 3708689 00:07:58.705 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 3708689 ']' 00:07:58.705 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:58.705 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:58.705 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:58.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:58.705 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:58.705 11:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:58.705 [2024-11-02 11:19:58.968064] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 
00:07:58.705 [2024-11-02 11:19:58.968149] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:58.705 [2024-11-02 11:19:59.047408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:58.705 [2024-11-02 11:19:59.095644] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:58.705 [2024-11-02 11:19:59.095715] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:58.705 [2024-11-02 11:19:59.095740] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:58.705 [2024-11-02 11:19:59.095761] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:58.705 [2024-11-02 11:19:59.095781] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:58.705 [2024-11-02 11:19:59.097394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:58.705 [2024-11-02 11:19:59.097450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:58.705 [2024-11-02 11:19:59.097468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.964 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:58.964 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:07:58.964 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:58.964 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:58.964 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:58.964 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:58.964 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:59.222 [2024-11-02 11:19:59.495900] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:59.222 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:59.486 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:59.486 11:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:59.743 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:59.743 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:00.001 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:00.566 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=e93efc5e-ddfa-49be-aa68-d0f0232735b8 00:08:00.566 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e93efc5e-ddfa-49be-aa68-d0f0232735b8 lvol 20 00:08:00.566 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=8dc8dd5c-39ed-4279-b60a-564b3a39ce17 00:08:00.566 11:20:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:00.824 11:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8dc8dd5c-39ed-4279-b60a-564b3a39ce17 00:08:01.390 11:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:01.390 [2024-11-02 11:20:01.764506] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:01.390 11:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:01.955 11:20:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3709120 00:08:01.955 11:20:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:01.955 11:20:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:02.890 11:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 8dc8dd5c-39ed-4279-b60a-564b3a39ce17 MY_SNAPSHOT 00:08:03.148 11:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=30777aba-d527-483b-9b76-570f815fe694 00:08:03.148 11:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 8dc8dd5c-39ed-4279-b60a-564b3a39ce17 30 00:08:03.406 11:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 30777aba-d527-483b-9b76-570f815fe694 MY_CLONE 00:08:03.664 11:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=9ff7a33c-be53-4a12-ac31-75203e27873e 00:08:03.664 11:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 9ff7a33c-be53-4a12-ac31-75203e27873e 00:08:04.599 11:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3709120 00:08:12.712 Initializing NVMe Controllers 00:08:12.712 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:12.712 Controller IO queue size 128, less than required. 00:08:12.712 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:08:12.712 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:12.712 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:12.712 Initialization complete. Launching workers. 00:08:12.712 ======================================================== 00:08:12.712 Latency(us) 00:08:12.712 Device Information : IOPS MiB/s Average min max 00:08:12.712 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10660.20 41.64 12011.20 1753.14 69331.03 00:08:12.712 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 9513.50 37.16 13465.30 6319.01 67069.31 00:08:12.712 ======================================================== 00:08:12.712 Total : 20173.70 78.80 12696.92 1753.14 69331.03 00:08:12.712 00:08:12.712 11:20:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:12.712 11:20:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8dc8dd5c-39ed-4279-b60a-564b3a39ce17 00:08:12.970 11:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e93efc5e-ddfa-49be-aa68-d0f0232735b8 00:08:13.305 11:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:13.305 11:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:13.305 11:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:13.305 11:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:13.305 11:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:13.305 11:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:13.305 11:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:13.305 11:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:13.305 11:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:13.305 rmmod nvme_tcp 00:08:13.305 rmmod nvme_fabrics 00:08:13.305 rmmod nvme_keyring 00:08:13.305 11:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:13.305 11:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:13.305 11:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:13.305 11:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 3708689 ']' 00:08:13.305 11:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 3708689 00:08:13.305 11:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 3708689 ']' 00:08:13.305 11:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 3708689 00:08:13.305 11:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # uname 00:08:13.305 11:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:13.305 11:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3708689 00:08:13.305 11:20:13 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:13.305 11:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:13.305 11:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3708689' 00:08:13.305 killing process with pid 3708689 00:08:13.305 11:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 3708689 00:08:13.305 11:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 3708689 00:08:13.564 11:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:13.564 11:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:13.564 11:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:13.564 11:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:13.564 11:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:08:13.564 11:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:13.564 11:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:08:13.564 11:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:13.564 11:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:13.564 11:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:13.564 11:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:13.564 11:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:15.471 11:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:15.471 00:08:15.471 real 0m19.220s 00:08:15.471 user 1m4.430s 00:08:15.471 sys 0m6.092s 00:08:15.471 11:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:15.471 11:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:15.471 ************************************ 00:08:15.471 END TEST nvmf_lvol 00:08:15.471 ************************************ 00:08:15.471 11:20:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:15.471 11:20:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:15.471 11:20:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:15.471 11:20:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:15.730 ************************************ 00:08:15.730 START TEST nvmf_lvs_grow 00:08:15.730 ************************************ 00:08:15.730 11:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:15.730 * Looking for test storage... 
00:08:15.730 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:15.730 11:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:15.730 11:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:08:15.730 11:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:15.730 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:15.730 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:15.730 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:15.730 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:15.730 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:15.730 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:15.730 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:15.730 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:15.730 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:15.730 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:15.730 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:15.730 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:15.730 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:15.730 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:15.730 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:15.730 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:15.730 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:15.731 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:15.731 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:15.731 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:15.731 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:15.731 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:15.731 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:15.731 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:15.731 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:15.731 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:15.731 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:15.731 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:15.731 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:15.731 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:15.731 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:15.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.731 --rc genhtml_branch_coverage=1 00:08:15.731 --rc genhtml_function_coverage=1 00:08:15.731 --rc genhtml_legend=1 00:08:15.731 --rc geninfo_all_blocks=1 00:08:15.731 --rc geninfo_unexecuted_blocks=1 00:08:15.731 00:08:15.731 ' 00:08:15.731 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:15.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.731 --rc genhtml_branch_coverage=1 00:08:15.731 --rc genhtml_function_coverage=1 00:08:15.731 --rc genhtml_legend=1 00:08:15.731 --rc geninfo_all_blocks=1 00:08:15.731 --rc geninfo_unexecuted_blocks=1 00:08:15.731 00:08:15.731 ' 00:08:15.731 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:15.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.731 --rc genhtml_branch_coverage=1 00:08:15.731 --rc genhtml_function_coverage=1 00:08:15.731 --rc genhtml_legend=1 00:08:15.731 --rc geninfo_all_blocks=1 00:08:15.731 --rc geninfo_unexecuted_blocks=1 00:08:15.731 00:08:15.731 ' 00:08:15.731 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:15.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.731 --rc genhtml_branch_coverage=1 00:08:15.731 --rc genhtml_function_coverage=1 00:08:15.731 --rc genhtml_legend=1 00:08:15.731 --rc geninfo_all_blocks=1 00:08:15.731 --rc geninfo_unexecuted_blocks=1 00:08:15.731 00:08:15.731 ' 00:08:15.731 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:15.731 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:15.731 11:20:16 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:15.731 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:15.731 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:15.731 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:15.731 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:15.731 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:15.731 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:15.731 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:15.731 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:15.731 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:15.731 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:15.731 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:15.731 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:15.731 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:15.731 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:15.731 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:15.731 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:15.731 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:15.731 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:15.731 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:15.731 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:15.731 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.731 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.731 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.731 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:15.731 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.731 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:15.731 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:15.731 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:15.731 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:15.731 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:15.731 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:15.731 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:15.731 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:15.731 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:15.731 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:15.731 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:15.731 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:15.731 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:15.731 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:15.731 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:15.731 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:15.731 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:15.731 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:15.731 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:15.731 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:15.731 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:15.731 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:15.731 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:15.731 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:15.731 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:08:15.731 11:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:18.264 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:18.264 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:08:18.264 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:18.264 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:18.264 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:18.264 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:18.264 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:18.264 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:08:18.264 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:18.264 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:08:18.264 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:08:18.264 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:08:18.264 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:08:18.264 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:08:18.264 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:08:18.264 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:18.264 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:18.264 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:18.264 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:18.264 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:18.264 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:18.264 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:18.264 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:18.264 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:18.264 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:18.264 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:18.264 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:18.264 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:18.264 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:18.264 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:18.264 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:18.264 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:18.264 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:18.264 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:18.264 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:18.264 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:18.264 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:18.264 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:18.264 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:18.264 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:18.264 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:18.264 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:18.264 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:18.265 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:18.265 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:18.265 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:18.265 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:18.265 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:18.265 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:18.265 11:20:18 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:18.265 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:18.265 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:18.265 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:18.265 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:18.265 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:18.265 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:18.265 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:18.265 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:18.265 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:18.265 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:18.265 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:18.265 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:18.265 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:18.265 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:18.265 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:18.265 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:18.265 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:18.265 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:18.265 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:18.265 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:18.265 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:18.265 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:18.265 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:18.265 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:08:18.265 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:18.265 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:18.265 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:18.265 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:18.265 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:18.265 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:18.265 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:18.265 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:18.265 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:18.265 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:18.265 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:18.265 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:18.265 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:18.265 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:18.265 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:18.265 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:18.265 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:18.265 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:18.265 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:18.265 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:18.265 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:18.265 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:18.265 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:18.265 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:18.265 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:18.265 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:18.265 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:18.265 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.305 ms 00:08:18.265 00:08:18.265 --- 10.0.0.2 ping statistics --- 00:08:18.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:18.265 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:08:18.265 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:18.265 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:18.265 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:08:18.265 00:08:18.265 --- 10.0.0.1 ping statistics --- 00:08:18.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:18.265 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:08:18.265 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:18.265 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:08:18.265 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:18.265 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:18.265 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:18.265 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:18.265 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:18.265 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:18.265 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:18.265 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:18.265 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:18.265 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:18.265 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:18.265 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3712404 00:08:18.265 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:18.265 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3712404 00:08:18.265 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 3712404 ']' 00:08:18.265 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:18.265 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:18.265 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:18.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:18.265 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:18.265 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:18.265 [2024-11-02 11:20:18.398952] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 
00:08:18.265 [2024-11-02 11:20:18.399036] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:18.265 [2024-11-02 11:20:18.477718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.265 [2024-11-02 11:20:18.524967] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:18.265 [2024-11-02 11:20:18.525040] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:18.265 [2024-11-02 11:20:18.525064] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:18.265 [2024-11-02 11:20:18.525086] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:18.265 [2024-11-02 11:20:18.525104] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:18.265 [2024-11-02 11:20:18.525862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.265 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:18.265 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:08:18.265 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:18.265 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:18.265 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:18.523 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:18.523 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:18.523 [2024-11-02 11:20:18.920936] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:18.781 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:18.781 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:18.781 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:18.781 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:18.781 ************************************ 00:08:18.781 START TEST lvs_grow_clean 00:08:18.782 ************************************ 00:08:18.782 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # lvs_grow 00:08:18.782 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:18.782 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:18.782 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:18.782 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:18.782 11:20:18 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:18.782 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:18.782 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:18.782 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:18.782 11:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:19.040 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:19.040 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:19.298 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=29e9acc8-3c49-4897-8b1b-76796a3ab77c 00:08:19.298 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 29e9acc8-3c49-4897-8b1b-76796a3ab77c 00:08:19.298 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:19.555 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:19.555 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:19.555 11:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 29e9acc8-3c49-4897-8b1b-76796a3ab77c lvol 150 00:08:19.815 11:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=5db92778-d1f9-43e8-9ffe-bfab005bf6c8 00:08:19.815 11:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:19.815 11:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:20.077 [2024-11-02 11:20:20.366735] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:20.077 [2024-11-02 11:20:20.366833] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:20.077 true 00:08:20.077 11:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
29e9acc8-3c49-4897-8b1b-76796a3ab77c 00:08:20.077 11:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:20.334 11:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:20.334 11:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:20.592 11:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5db92778-d1f9-43e8-9ffe-bfab005bf6c8 00:08:21.158 11:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:21.158 [2024-11-02 11:20:21.514281] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:21.158 11:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:21.724 11:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3712866 00:08:21.724 11:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:21.724 11:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:21.724 11:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3712866 /var/tmp/bdevperf.sock 00:08:21.724 11:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 3712866 ']' 00:08:21.724 11:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:21.724 11:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:21.724 11:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:21.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:21.724 11:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:21.724 11:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:21.724 [2024-11-02 11:20:21.883401] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 
00:08:21.724 [2024-11-02 11:20:21.883475] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3712866 ] 00:08:21.724 [2024-11-02 11:20:21.955498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.724 [2024-11-02 11:20:22.006817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:21.982 11:20:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:21.982 11:20:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0 00:08:21.982 11:20:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:22.239 Nvme0n1 00:08:22.239 11:20:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:22.498 [ 00:08:22.498 { 00:08:22.498 "name": "Nvme0n1", 00:08:22.498 "aliases": [ 00:08:22.498 "5db92778-d1f9-43e8-9ffe-bfab005bf6c8" 00:08:22.498 ], 00:08:22.498 "product_name": "NVMe disk", 00:08:22.498 "block_size": 4096, 00:08:22.498 "num_blocks": 38912, 00:08:22.498 "uuid": "5db92778-d1f9-43e8-9ffe-bfab005bf6c8", 00:08:22.498 "numa_id": 0, 00:08:22.498 "assigned_rate_limits": { 00:08:22.498 "rw_ios_per_sec": 0, 00:08:22.498 "rw_mbytes_per_sec": 0, 00:08:22.498 "r_mbytes_per_sec": 0, 00:08:22.498 "w_mbytes_per_sec": 0 00:08:22.498 }, 00:08:22.498 "claimed": false, 00:08:22.498 "zoned": false, 00:08:22.498 "supported_io_types": { 00:08:22.498 "read": true, 00:08:22.498 "write": true, 00:08:22.498 "unmap": true, 00:08:22.498 "flush": true, 00:08:22.498 "reset": true, 00:08:22.498 "nvme_admin": true, 00:08:22.498 "nvme_io": true, 00:08:22.498 "nvme_io_md": false, 00:08:22.498 "write_zeroes": true, 00:08:22.498 "zcopy": false, 00:08:22.498 "get_zone_info": false, 00:08:22.498 "zone_management": false, 00:08:22.498 "zone_append": false, 00:08:22.498 "compare": true, 00:08:22.498 "compare_and_write": true, 00:08:22.498 "abort": true, 00:08:22.498 "seek_hole": false, 00:08:22.498 "seek_data": false, 00:08:22.498 "copy": true, 00:08:22.498 "nvme_iov_md": false 00:08:22.498 }, 00:08:22.498 "memory_domains": [ 00:08:22.498 { 00:08:22.498 "dma_device_id": "system", 00:08:22.498 "dma_device_type": 1 00:08:22.498 } 00:08:22.498 ], 00:08:22.498 "driver_specific": { 00:08:22.498 "nvme": [ 00:08:22.498 { 00:08:22.498 "trid": { 00:08:22.498 "trtype": "TCP", 00:08:22.498 "adrfam": "IPv4", 00:08:22.498 "traddr": "10.0.0.2", 00:08:22.498 "trsvcid": "4420", 00:08:22.498 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:22.498 }, 00:08:22.498 "ctrlr_data": { 00:08:22.498 "cntlid": 1, 00:08:22.498 "vendor_id": "0x8086", 00:08:22.498 "model_number": "SPDK bdev Controller", 00:08:22.498 "serial_number": "SPDK0", 00:08:22.498 "firmware_revision": "25.01", 00:08:22.498 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:22.498 "oacs": { 00:08:22.498 "security": 0, 00:08:22.498 "format": 0, 00:08:22.498 "firmware": 0, 00:08:22.498 "ns_manage": 0 00:08:22.498 }, 00:08:22.498 "multi_ctrlr": true, 00:08:22.498 
"ana_reporting": false 00:08:22.498 }, 00:08:22.498 "vs": { 00:08:22.498 "nvme_version": "1.3" 00:08:22.498 }, 00:08:22.498 "ns_data": { 00:08:22.498 "id": 1, 00:08:22.498 "can_share": true 00:08:22.498 } 00:08:22.498 } 00:08:22.498 ], 00:08:22.498 "mp_policy": "active_passive" 00:08:22.498 } 00:08:22.498 } 00:08:22.498 ] 00:08:22.498 11:20:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3713003 00:08:22.498 11:20:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:22.498 11:20:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:22.756 Running I/O for 10 seconds... 00:08:23.691 Latency(us) 00:08:23.691 [2024-11-02T10:20:24.093Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:23.691 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:23.691 Nvme0n1 : 1.00 14169.00 55.35 0.00 0.00 0.00 0.00 0.00 00:08:23.691 [2024-11-02T10:20:24.093Z] =================================================================================================================== 00:08:23.691 [2024-11-02T10:20:24.093Z] Total : 14169.00 55.35 0.00 0.00 0.00 0.00 0.00 00:08:23.691 00:08:24.626 11:20:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 29e9acc8-3c49-4897-8b1b-76796a3ab77c 00:08:24.627 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:24.627 Nvme0n1 : 2.00 14296.00 55.84 0.00 0.00 0.00 0.00 0.00 00:08:24.627 [2024-11-02T10:20:25.029Z] =================================================================================================================== 00:08:24.627 [2024-11-02T10:20:25.029Z] Total : 14296.00 55.84 0.00 0.00 0.00 0.00 0.00 00:08:24.627 00:08:24.884 true 00:08:24.884 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 29e9acc8-3c49-4897-8b1b-76796a3ab77c 00:08:24.884 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:25.142 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:25.142 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:25.142 11:20:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3713003 00:08:25.709 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:25.709 Nvme0n1 : 3.00 14380.33 56.17 0.00 0.00 0.00 0.00 0.00 00:08:25.709 [2024-11-02T10:20:26.111Z] =================================================================================================================== 00:08:25.709 [2024-11-02T10:20:26.111Z] Total : 14380.33 56.17 0.00 0.00 0.00 0.00 0.00 00:08:25.709 00:08:26.645 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:26.645 Nvme0n1 : 4.00 14453.50 56.46 0.00 0.00 0.00 0.00 0.00 00:08:26.645 [2024-11-02T10:20:27.047Z] 
=================================================================================================================== 00:08:26.645 [2024-11-02T10:20:27.047Z] Total : 14453.50 56.46 0.00 0.00 0.00 0.00 0.00 00:08:26.645 00:08:27.580 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:27.581 Nvme0n1 : 5.00 14485.00 56.58 0.00 0.00 0.00 0.00 0.00 00:08:27.581 [2024-11-02T10:20:27.983Z] =================================================================================================================== 00:08:27.581 [2024-11-02T10:20:27.983Z] Total : 14485.00 56.58 0.00 0.00 0.00 0.00 0.00 00:08:27.581 00:08:29.030 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:29.030 Nvme0n1 : 6.00 14527.00 56.75 0.00 0.00 0.00 0.00 0.00 00:08:29.030 [2024-11-02T10:20:29.432Z] =================================================================================================================== 00:08:29.030 [2024-11-02T10:20:29.432Z] Total : 14527.00 56.75 0.00 0.00 0.00 0.00 0.00 00:08:29.030 00:08:29.625 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:29.625 Nvme0n1 : 7.00 14530.71 56.76 0.00 0.00 0.00 0.00 0.00 00:08:29.625 [2024-11-02T10:20:30.027Z] =================================================================================================================== 00:08:29.625 [2024-11-02T10:20:30.027Z] Total : 14530.71 56.76 0.00 0.00 0.00 0.00 0.00 00:08:29.625 00:08:31.000 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:31.000 Nvme0n1 : 8.00 14548.50 56.83 0.00 0.00 0.00 0.00 0.00 00:08:31.000 [2024-11-02T10:20:31.402Z] =================================================================================================================== 00:08:31.000 [2024-11-02T10:20:31.402Z] Total : 14548.50 56.83 0.00 0.00 0.00 0.00 0.00 00:08:31.000 00:08:31.934 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:31.934 Nvme0n1 : 9.00 14576.11 56.94 0.00 0.00 0.00 0.00 0.00 00:08:31.934 [2024-11-02T10:20:32.336Z] =================================================================================================================== 00:08:31.934 [2024-11-02T10:20:32.336Z] Total : 14576.11 56.94 0.00 0.00 0.00 0.00 0.00 00:08:31.934 00:08:32.868 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:32.868 Nvme0n1 : 10.00 14591.70 57.00 0.00 0.00 0.00 0.00 0.00 00:08:32.868 [2024-11-02T10:20:33.270Z] =================================================================================================================== 00:08:32.868 [2024-11-02T10:20:33.270Z] Total : 14591.70 57.00 0.00 0.00 0.00 0.00 0.00 00:08:32.868 00:08:32.868 00:08:32.868 Latency(us) 00:08:32.868 [2024-11-02T10:20:33.270Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:32.868 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:32.868 Nvme0n1 : 10.01 14592.91 57.00 0.00 0.00 8766.53 2245.21 19806.44 00:08:32.868 [2024-11-02T10:20:33.270Z] =================================================================================================================== 00:08:32.868 [2024-11-02T10:20:33.270Z] Total : 14592.91 57.00 0.00 0.00 8766.53 2245.21 19806.44 00:08:32.868 { 00:08:32.868 "results": [ 00:08:32.868 { 00:08:32.868 "job": "Nvme0n1", 00:08:32.868 "core_mask": "0x2", 00:08:32.868 "workload": "randwrite", 00:08:32.868 "status": "finished", 00:08:32.868 "queue_depth": 128, 00:08:32.868 "io_size": 4096, 00:08:32.868 
"runtime": 10.007939, 00:08:32.868 "iops": 14592.914685031554, 00:08:32.868 "mibps": 57.00357298840451, 00:08:32.868 "io_failed": 0, 00:08:32.868 "io_timeout": 0, 00:08:32.868 "avg_latency_us": 8766.526424757463, 00:08:32.868 "min_latency_us": 2245.214814814815, 00:08:32.868 "max_latency_us": 19806.435555555556 00:08:32.868 } 00:08:32.868 ], 00:08:32.868 "core_count": 1 00:08:32.868 } 00:08:32.868 11:20:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3712866 00:08:32.868 11:20:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 3712866 ']' 00:08:32.868 11:20:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 3712866 00:08:32.868 11:20:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname 00:08:32.868 11:20:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:32.868 11:20:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3712866 00:08:32.868 11:20:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:08:32.868 11:20:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:08:32.868 11:20:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3712866' 00:08:32.868 killing process with pid 3712866 00:08:32.868 11:20:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 3712866 00:08:32.868 Received shutdown signal, test time was about 10.000000 seconds 00:08:32.868 00:08:32.868 Latency(us) 00:08:32.868 [2024-11-02T10:20:33.270Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:32.868 [2024-11-02T10:20:33.270Z] =================================================================================================================== 00:08:32.868 [2024-11-02T10:20:33.270Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:32.868 11:20:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 3712866 00:08:32.868 11:20:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:33.127 11:20:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:33.692 11:20:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 29e9acc8-3c49-4897-8b1b-76796a3ab77c 00:08:33.692 11:20:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:33.692 11:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:33.692 11:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:33.692 11:20:34 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:33.950 [2024-11-02 11:20:34.329186] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:34.208 11:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 29e9acc8-3c49-4897-8b1b-76796a3ab77c 00:08:34.208 11:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:08:34.208 11:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 29e9acc8-3c49-4897-8b1b-76796a3ab77c 00:08:34.208 11:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:34.208 11:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:34.208 11:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:34.208 11:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:34.208 11:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:34.208 11:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:34.208 11:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:34.208 11:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:34.208 11:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 29e9acc8-3c49-4897-8b1b-76796a3ab77c 00:08:34.466 request: 00:08:34.466 { 00:08:34.466 "uuid": "29e9acc8-3c49-4897-8b1b-76796a3ab77c", 00:08:34.466 "method": "bdev_lvol_get_lvstores", 00:08:34.466 "req_id": 1 00:08:34.466 } 00:08:34.466 Got JSON-RPC error response 00:08:34.466 response: 00:08:34.466 { 00:08:34.466 "code": -19, 00:08:34.466 "message": "No such device" 00:08:34.466 } 00:08:34.466 11:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:08:34.466 11:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:34.466 11:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:34.466 11:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:34.466 11:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:34.724 aio_bdev 00:08:34.724 11:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 5db92778-d1f9-43e8-9ffe-bfab005bf6c8 00:08:34.724 11:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=5db92778-d1f9-43e8-9ffe-bfab005bf6c8 00:08:34.724 11:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:34.724 11:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:08:34.724 11:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:34.724 11:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:34.724 11:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:34.982 11:20:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 5db92778-d1f9-43e8-9ffe-bfab005bf6c8 -t 2000 00:08:35.241 [ 00:08:35.241 { 00:08:35.241 "name": "5db92778-d1f9-43e8-9ffe-bfab005bf6c8", 00:08:35.241 "aliases": [ 00:08:35.241 "lvs/lvol" 00:08:35.241 ], 00:08:35.241 "product_name": "Logical Volume", 00:08:35.241 "block_size": 4096, 00:08:35.241 "num_blocks": 38912, 00:08:35.241 "uuid": "5db92778-d1f9-43e8-9ffe-bfab005bf6c8", 00:08:35.241 "assigned_rate_limits": { 00:08:35.241 "rw_ios_per_sec": 0, 00:08:35.241 "rw_mbytes_per_sec": 0, 00:08:35.241 "r_mbytes_per_sec": 0, 00:08:35.241 "w_mbytes_per_sec": 0 00:08:35.241 }, 00:08:35.241 "claimed": false, 00:08:35.241 "zoned": false, 00:08:35.241 "supported_io_types": { 00:08:35.241 "read": true, 00:08:35.241 "write": true, 00:08:35.241 "unmap": true, 00:08:35.241 "flush": false, 00:08:35.241 "reset": true, 00:08:35.241 "nvme_admin": false, 00:08:35.241 "nvme_io": false, 00:08:35.241 "nvme_io_md": false, 00:08:35.241 "write_zeroes": true, 00:08:35.241 "zcopy": false, 00:08:35.241 "get_zone_info": false, 00:08:35.241 "zone_management": false, 00:08:35.241 "zone_append": false, 00:08:35.241 "compare": false, 00:08:35.241 "compare_and_write": false, 00:08:35.241 "abort": false, 00:08:35.241 "seek_hole": true, 00:08:35.241 "seek_data": true, 00:08:35.241 "copy": false, 00:08:35.241 "nvme_iov_md": false 00:08:35.241 }, 00:08:35.241 "driver_specific": { 00:08:35.241 "lvol": { 00:08:35.241 "lvol_store_uuid": "29e9acc8-3c49-4897-8b1b-76796a3ab77c", 00:08:35.241 "base_bdev": "aio_bdev", 00:08:35.241 "thin_provision": false, 00:08:35.241 "num_allocated_clusters": 38, 00:08:35.241 "snapshot": false, 00:08:35.241 "clone": false, 00:08:35.241 "esnap_clone": false 00:08:35.241 } 00:08:35.241 } 00:08:35.241 } 00:08:35.241 ] 00:08:35.241 11:20:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:08:35.241 11:20:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 29e9acc8-3c49-4897-8b1b-76796a3ab77c 00:08:35.241 
11:20:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:35.499 11:20:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:35.499 11:20:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 29e9acc8-3c49-4897-8b1b-76796a3ab77c 00:08:35.499 11:20:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:35.758 11:20:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:35.758 11:20:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 5db92778-d1f9-43e8-9ffe-bfab005bf6c8 00:08:36.016 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 29e9acc8-3c49-4897-8b1b-76796a3ab77c 00:08:36.274 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:36.532 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:36.532 00:08:36.532 real 0m17.871s 00:08:36.532 user 0m17.415s 00:08:36.532 sys 0m1.859s 00:08:36.532 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:36.532 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:36.532 ************************************ 00:08:36.532 END TEST lvs_grow_clean 00:08:36.532 ************************************ 00:08:36.532 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:36.532 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:36.532 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:36.532 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:36.532 ************************************ 00:08:36.532 START TEST lvs_grow_dirty 00:08:36.532 ************************************ 00:08:36.532 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:08:36.532 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:36.532 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:36.532 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:36.532 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:36.532 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:36.532 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:36.533 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:36.533 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:36.533 11:20:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:37.099 11:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:37.099 11:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:37.099 11:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=b3da3dc5-a230-4b41-9a87-a1b77c1b8fb8 00:08:37.099 11:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b3da3dc5-a230-4b41-9a87-a1b77c1b8fb8 00:08:37.099 11:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:37.357 11:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:37.357 11:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:37.357 11:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b3da3dc5-a230-4b41-9a87-a1b77c1b8fb8 lvol 150 00:08:37.616 11:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=3f19f2e6-ea9f-48cf-b8a4-cfde154bb7cf 00:08:37.616 11:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:37.874 11:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:38.131 [2024-11-02 11:20:38.276803] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:38.131 [2024-11-02 11:20:38.276912] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:38.131 true 00:08:38.131 11:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:38.131 11:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b3da3dc5-a230-4b41-9a87-a1b77c1b8fb8 00:08:38.389 11:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:38.389 11:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:38.647 11:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3f19f2e6-ea9f-48cf-b8a4-cfde154bb7cf 00:08:38.906 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:39.164 [2024-11-02 11:20:39.368084] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:39.164 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:39.426 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3715056 00:08:39.426 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:39.426 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:39.426 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3715056 /var/tmp/bdevperf.sock 00:08:39.426 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 3715056 ']' 00:08:39.426 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:39.426 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:39.426 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:39.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:39.426 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:39.426 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:39.426 [2024-11-02 11:20:39.692469] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 
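Both variants exercise the same growth mechanics; the dirty run simply calls bdev_lvol_grow_lvstore while bdevperf I/O is still in flight. Stripped of the tracing, and with illustrative file and variable names, the sequence looks roughly like this:

  RPC=scripts/rpc.py

  truncate -s 200M aio_backing_file
  $RPC bdev_aio_create aio_backing_file aio_bdev 4096        # 51200 blocks of 4 KiB
  lvs=$($RPC bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)      # 49 data clusters initially
  lvol=$($RPC bdev_lvol_create -u "$lvs" lvol 150)            # 150 MiB logical volume

  truncate -s 400M aio_backing_file                           # grow the backing file
  $RPC bdev_aio_rescan aio_bdev                               # AIO bdev grows to 102400 blocks
  $RPC bdev_lvol_grow_lvstore -u "$lvs"                       # lvstore claims the new space
  $RPC bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49 -> 99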
00:08:39.426 [2024-11-02 11:20:39.692544] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3715056 ] 00:08:39.426 [2024-11-02 11:20:39.762643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.426 [2024-11-02 11:20:39.811546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:39.683 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:39.683 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:08:39.683 11:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:40.248 Nvme0n1 00:08:40.248 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:40.506 [ 00:08:40.506 { 00:08:40.506 "name": "Nvme0n1", 00:08:40.506 "aliases": [ 00:08:40.506 "3f19f2e6-ea9f-48cf-b8a4-cfde154bb7cf" 00:08:40.506 ], 00:08:40.506 "product_name": "NVMe disk", 00:08:40.506 "block_size": 4096, 00:08:40.506 "num_blocks": 38912, 00:08:40.506 "uuid": "3f19f2e6-ea9f-48cf-b8a4-cfde154bb7cf", 00:08:40.506 "numa_id": 0, 00:08:40.506 "assigned_rate_limits": { 00:08:40.506 "rw_ios_per_sec": 0, 00:08:40.506 "rw_mbytes_per_sec": 0, 00:08:40.506 "r_mbytes_per_sec": 0, 00:08:40.506 "w_mbytes_per_sec": 0 00:08:40.506 }, 00:08:40.506 "claimed": false, 00:08:40.506 "zoned": false, 00:08:40.506 "supported_io_types": { 00:08:40.506 "read": true, 00:08:40.506 "write": true, 00:08:40.506 "unmap": true, 00:08:40.506 "flush": true, 00:08:40.506 "reset": true, 00:08:40.506 "nvme_admin": true, 00:08:40.506 "nvme_io": true, 00:08:40.506 "nvme_io_md": false, 00:08:40.506 "write_zeroes": true, 00:08:40.506 "zcopy": false, 00:08:40.506 "get_zone_info": false, 00:08:40.506 "zone_management": false, 00:08:40.506 "zone_append": false, 00:08:40.506 "compare": true, 00:08:40.506 "compare_and_write": true, 00:08:40.506 "abort": true, 00:08:40.506 "seek_hole": false, 00:08:40.506 "seek_data": false, 00:08:40.506 "copy": true, 00:08:40.506 "nvme_iov_md": false 00:08:40.506 }, 00:08:40.506 "memory_domains": [ 00:08:40.506 { 00:08:40.506 "dma_device_id": "system", 00:08:40.506 "dma_device_type": 1 00:08:40.506 } 00:08:40.506 ], 00:08:40.506 "driver_specific": { 00:08:40.506 "nvme": [ 00:08:40.506 { 00:08:40.506 "trid": { 00:08:40.507 "trtype": "TCP", 00:08:40.507 "adrfam": "IPv4", 00:08:40.507 "traddr": "10.0.0.2", 00:08:40.507 "trsvcid": "4420", 00:08:40.507 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:40.507 }, 00:08:40.507 "ctrlr_data": { 00:08:40.507 "cntlid": 1, 00:08:40.507 "vendor_id": "0x8086", 00:08:40.507 "model_number": "SPDK bdev Controller", 00:08:40.507 "serial_number": "SPDK0", 00:08:40.507 "firmware_revision": "25.01", 00:08:40.507 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:40.507 "oacs": { 00:08:40.507 "security": 0, 00:08:40.507 "format": 0, 00:08:40.507 "firmware": 0, 00:08:40.507 "ns_manage": 0 00:08:40.507 }, 00:08:40.507 "multi_ctrlr": true, 00:08:40.507 
"ana_reporting": false 00:08:40.507 }, 00:08:40.507 "vs": { 00:08:40.507 "nvme_version": "1.3" 00:08:40.507 }, 00:08:40.507 "ns_data": { 00:08:40.507 "id": 1, 00:08:40.507 "can_share": true 00:08:40.507 } 00:08:40.507 } 00:08:40.507 ], 00:08:40.507 "mp_policy": "active_passive" 00:08:40.507 } 00:08:40.507 } 00:08:40.507 ] 00:08:40.507 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3715194 00:08:40.507 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:40.507 11:20:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:40.507 Running I/O for 10 seconds... 00:08:41.441 Latency(us) 00:08:41.441 [2024-11-02T10:20:41.843Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:41.441 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:41.441 Nvme0n1 : 1.00 13980.00 54.61 0.00 0.00 0.00 0.00 0.00 00:08:41.441 [2024-11-02T10:20:41.843Z] =================================================================================================================== 00:08:41.441 [2024-11-02T10:20:41.843Z] Total : 13980.00 54.61 0.00 0.00 0.00 0.00 0.00 00:08:41.441 00:08:42.375 11:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b3da3dc5-a230-4b41-9a87-a1b77c1b8fb8 00:08:42.633 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:42.633 Nvme0n1 : 2.00 14174.00 55.37 0.00 0.00 0.00 0.00 0.00 00:08:42.633 [2024-11-02T10:20:43.035Z] =================================================================================================================== 00:08:42.633 [2024-11-02T10:20:43.035Z] Total : 14174.00 55.37 0.00 0.00 0.00 0.00 0.00 00:08:42.633 00:08:42.633 true 00:08:42.633 11:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b3da3dc5-a230-4b41-9a87-a1b77c1b8fb8 00:08:42.633 11:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:42.891 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:42.891 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:42.891 11:20:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3715194 00:08:43.457 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:43.457 Nvme0n1 : 3.00 14318.67 55.93 0.00 0.00 0.00 0.00 0.00 00:08:43.457 [2024-11-02T10:20:43.859Z] =================================================================================================================== 00:08:43.457 [2024-11-02T10:20:43.859Z] Total : 14318.67 55.93 0.00 0.00 0.00 0.00 0.00 00:08:43.457 00:08:44.404 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:44.404 Nvme0n1 : 4.00 14408.00 56.28 0.00 0.00 0.00 0.00 0.00 00:08:44.404 [2024-11-02T10:20:44.806Z] 
=================================================================================================================== 00:08:44.404 [2024-11-02T10:20:44.806Z] Total : 14408.00 56.28 0.00 0.00 0.00 0.00 0.00 00:08:44.404 00:08:45.779 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:45.779 Nvme0n1 : 5.00 14486.20 56.59 0.00 0.00 0.00 0.00 0.00 00:08:45.779 [2024-11-02T10:20:46.181Z] =================================================================================================================== 00:08:45.779 [2024-11-02T10:20:46.181Z] Total : 14486.20 56.59 0.00 0.00 0.00 0.00 0.00 00:08:45.779 00:08:46.714 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:46.714 Nvme0n1 : 6.00 14541.00 56.80 0.00 0.00 0.00 0.00 0.00 00:08:46.714 [2024-11-02T10:20:47.116Z] =================================================================================================================== 00:08:46.714 [2024-11-02T10:20:47.116Z] Total : 14541.00 56.80 0.00 0.00 0.00 0.00 0.00 00:08:46.714 00:08:47.650 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:47.650 Nvme0n1 : 7.00 14578.00 56.95 0.00 0.00 0.00 0.00 0.00 00:08:47.650 [2024-11-02T10:20:48.052Z] =================================================================================================================== 00:08:47.650 [2024-11-02T10:20:48.052Z] Total : 14578.00 56.95 0.00 0.00 0.00 0.00 0.00 00:08:47.650 00:08:48.584 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:48.584 Nvme0n1 : 8.00 14614.25 57.09 0.00 0.00 0.00 0.00 0.00 00:08:48.584 [2024-11-02T10:20:48.986Z] =================================================================================================================== 00:08:48.584 [2024-11-02T10:20:48.986Z] Total : 14614.25 57.09 0.00 0.00 0.00 0.00 0.00 00:08:48.584 00:08:49.519 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:49.519 Nvme0n1 : 9.00 14644.00 57.20 0.00 0.00 0.00 0.00 0.00 00:08:49.519 [2024-11-02T10:20:49.921Z] =================================================================================================================== 00:08:49.519 [2024-11-02T10:20:49.921Z] Total : 14644.00 57.20 0.00 0.00 0.00 0.00 0.00 00:08:49.519 00:08:50.454 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:50.454 Nvme0n1 : 10.00 14666.10 57.29 0.00 0.00 0.00 0.00 0.00 00:08:50.454 [2024-11-02T10:20:50.856Z] =================================================================================================================== 00:08:50.454 [2024-11-02T10:20:50.856Z] Total : 14666.10 57.29 0.00 0.00 0.00 0.00 0.00 00:08:50.454 00:08:50.454 00:08:50.454 Latency(us) 00:08:50.454 [2024-11-02T10:20:50.856Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:50.454 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:50.454 Nvme0n1 : 10.01 14669.62 57.30 0.00 0.00 8720.82 4684.61 20000.62 00:08:50.454 [2024-11-02T10:20:50.856Z] =================================================================================================================== 00:08:50.454 [2024-11-02T10:20:50.856Z] Total : 14669.62 57.30 0.00 0.00 8720.82 4684.61 20000.62 00:08:50.454 { 00:08:50.454 "results": [ 00:08:50.455 { 00:08:50.455 "job": "Nvme0n1", 00:08:50.455 "core_mask": "0x2", 00:08:50.455 "workload": "randwrite", 00:08:50.455 "status": "finished", 00:08:50.455 "queue_depth": 128, 00:08:50.455 "io_size": 4096, 00:08:50.455 
"runtime": 10.006325, 00:08:50.455 "iops": 14669.621464423752, 00:08:50.455 "mibps": 57.30320884540528, 00:08:50.455 "io_failed": 0, 00:08:50.455 "io_timeout": 0, 00:08:50.455 "avg_latency_us": 8720.818302996264, 00:08:50.455 "min_latency_us": 4684.61037037037, 00:08:50.455 "max_latency_us": 20000.616296296295 00:08:50.455 } 00:08:50.455 ], 00:08:50.455 "core_count": 1 00:08:50.455 } 00:08:50.455 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3715056 00:08:50.455 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 3715056 ']' 00:08:50.455 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 3715056 00:08:50.455 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname 00:08:50.455 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:50.455 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3715056 00:08:50.713 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:08:50.713 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:08:50.713 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3715056' 00:08:50.713 killing process with pid 3715056 00:08:50.713 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 3715056 00:08:50.713 Received shutdown signal, test time was about 10.000000 seconds 00:08:50.713 00:08:50.713 Latency(us) 00:08:50.713 [2024-11-02T10:20:51.115Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:50.713 [2024-11-02T10:20:51.115Z] =================================================================================================================== 00:08:50.713 [2024-11-02T10:20:51.116Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:50.714 11:20:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 3715056 00:08:50.714 11:20:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:50.972 11:20:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:51.230 11:20:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b3da3dc5-a230-4b41-9a87-a1b77c1b8fb8 00:08:51.230 11:20:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:51.488 11:20:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:51.488 11:20:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:51.488 11:20:51 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3712404 00:08:51.488 11:20:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3712404 00:08:51.747 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3712404 Killed "${NVMF_APP[@]}" "$@" 00:08:51.747 11:20:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:51.747 11:20:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:51.747 11:20:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:51.747 11:20:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:51.747 11:20:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:51.747 11:20:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3716523 00:08:51.747 11:20:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:51.747 11:20:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3716523 00:08:51.747 11:20:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 3716523 ']' 00:08:51.747 11:20:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:51.747 11:20:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:51.747 11:20:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:51.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:51.747 11:20:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:51.747 11:20:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:51.747 [2024-11-02 11:20:51.976366] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:08:51.747 [2024-11-02 11:20:51.976447] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:51.747 [2024-11-02 11:20:52.056822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.747 [2024-11-02 11:20:52.105323] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:51.747 [2024-11-02 11:20:52.105373] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:51.747 [2024-11-02 11:20:52.105394] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:51.747 [2024-11-02 11:20:52.105411] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
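At this point the dirty path diverges: the original nvmf target has just been killed with SIGKILL while the lvstore was still open, a fresh nvmf_tgt is being started, and re-creating the AIO bdev will trigger blobstore recovery (the "Performing recovery on blobstore" notices that follow). In outline, with illustrative names:

  kill -9 "$nvmfpid"                                 # unclean shutdown, lvstore left dirty
  nvmfappstart -m 0x1                                # nvmf/common.sh helper: start a new nvmf_tgt
  scripts/rpc.py bdev_aio_create aio_backing_file aio_bdev 4096   # reload runs blobstore recovery
  scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'   # still 61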
00:08:51.747 [2024-11-02 11:20:52.105426] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:51.747 [2024-11-02 11:20:52.106086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.006 11:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:52.006 11:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:08:52.006 11:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:52.006 11:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:52.006 11:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:52.006 11:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:52.006 11:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:52.265 [2024-11-02 11:20:52.491126] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:52.265 [2024-11-02 11:20:52.491306] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:52.265 [2024-11-02 11:20:52.491382] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:52.265 11:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:52.265 11:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 3f19f2e6-ea9f-48cf-b8a4-cfde154bb7cf 00:08:52.265 11:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=3f19f2e6-ea9f-48cf-b8a4-cfde154bb7cf 00:08:52.265 11:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:52.265 11:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:08:52.265 11:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:52.265 11:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:52.265 11:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:52.523 11:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 3f19f2e6-ea9f-48cf-b8a4-cfde154bb7cf -t 2000 00:08:52.781 [ 00:08:52.781 { 00:08:52.781 "name": "3f19f2e6-ea9f-48cf-b8a4-cfde154bb7cf", 00:08:52.781 "aliases": [ 00:08:52.781 "lvs/lvol" 00:08:52.781 ], 00:08:52.781 "product_name": "Logical Volume", 00:08:52.781 "block_size": 4096, 00:08:52.781 "num_blocks": 38912, 00:08:52.781 "uuid": "3f19f2e6-ea9f-48cf-b8a4-cfde154bb7cf", 00:08:52.781 "assigned_rate_limits": { 00:08:52.781 "rw_ios_per_sec": 0, 00:08:52.781 "rw_mbytes_per_sec": 0, 
00:08:52.781 "r_mbytes_per_sec": 0, 00:08:52.781 "w_mbytes_per_sec": 0 00:08:52.781 }, 00:08:52.781 "claimed": false, 00:08:52.781 "zoned": false, 00:08:52.781 "supported_io_types": { 00:08:52.781 "read": true, 00:08:52.781 "write": true, 00:08:52.781 "unmap": true, 00:08:52.781 "flush": false, 00:08:52.781 "reset": true, 00:08:52.781 "nvme_admin": false, 00:08:52.781 "nvme_io": false, 00:08:52.781 "nvme_io_md": false, 00:08:52.781 "write_zeroes": true, 00:08:52.781 "zcopy": false, 00:08:52.781 "get_zone_info": false, 00:08:52.781 "zone_management": false, 00:08:52.781 "zone_append": false, 00:08:52.781 "compare": false, 00:08:52.781 "compare_and_write": false, 00:08:52.781 "abort": false, 00:08:52.781 "seek_hole": true, 00:08:52.781 "seek_data": true, 00:08:52.781 "copy": false, 00:08:52.781 "nvme_iov_md": false 00:08:52.781 }, 00:08:52.781 "driver_specific": { 00:08:52.781 "lvol": { 00:08:52.781 "lvol_store_uuid": "b3da3dc5-a230-4b41-9a87-a1b77c1b8fb8", 00:08:52.781 "base_bdev": "aio_bdev", 00:08:52.781 "thin_provision": false, 00:08:52.781 "num_allocated_clusters": 38, 00:08:52.781 "snapshot": false, 00:08:52.782 "clone": false, 00:08:52.782 "esnap_clone": false 00:08:52.782 } 00:08:52.782 } 00:08:52.782 } 00:08:52.782 ] 00:08:52.782 11:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:08:52.782 11:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b3da3dc5-a230-4b41-9a87-a1b77c1b8fb8 00:08:52.782 11:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:53.040 11:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:53.040 11:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b3da3dc5-a230-4b41-9a87-a1b77c1b8fb8 00:08:53.040 11:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:53.298 11:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:53.298 11:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:53.557 [2024-11-02 11:20:53.896679] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:53.557 11:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b3da3dc5-a230-4b41-9a87-a1b77c1b8fb8 00:08:53.557 11:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:08:53.557 11:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b3da3dc5-a230-4b41-9a87-a1b77c1b8fb8 00:08:53.557 11:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:53.557 11:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:53.557 11:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:53.557 11:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:53.557 11:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:53.557 11:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:53.557 11:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:53.557 11:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:53.557 11:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b3da3dc5-a230-4b41-9a87-a1b77c1b8fb8 00:08:53.815 request: 00:08:53.815 { 00:08:53.815 "uuid": "b3da3dc5-a230-4b41-9a87-a1b77c1b8fb8", 00:08:53.815 "method": "bdev_lvol_get_lvstores", 00:08:53.815 "req_id": 1 00:08:53.815 } 00:08:53.815 Got JSON-RPC error response 00:08:53.815 response: 00:08:53.815 { 00:08:53.815 "code": -19, 00:08:53.815 "message": "No such device" 00:08:53.815 } 00:08:53.815 11:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:08:53.815 11:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:53.815 11:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:53.815 11:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:53.815 11:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:54.073 aio_bdev 00:08:54.073 11:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 3f19f2e6-ea9f-48cf-b8a4-cfde154bb7cf 00:08:54.073 11:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=3f19f2e6-ea9f-48cf-b8a4-cfde154bb7cf 00:08:54.073 11:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:54.073 11:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:08:54.073 11:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:54.073 11:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:54.073 11:20:54 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:54.364 11:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 3f19f2e6-ea9f-48cf-b8a4-cfde154bb7cf -t 2000 00:08:54.647 [ 00:08:54.647 { 00:08:54.647 "name": "3f19f2e6-ea9f-48cf-b8a4-cfde154bb7cf", 00:08:54.647 "aliases": [ 00:08:54.647 "lvs/lvol" 00:08:54.647 ], 00:08:54.647 "product_name": "Logical Volume", 00:08:54.647 "block_size": 4096, 00:08:54.647 "num_blocks": 38912, 00:08:54.647 "uuid": "3f19f2e6-ea9f-48cf-b8a4-cfde154bb7cf", 00:08:54.647 "assigned_rate_limits": { 00:08:54.647 "rw_ios_per_sec": 0, 00:08:54.647 "rw_mbytes_per_sec": 0, 00:08:54.647 "r_mbytes_per_sec": 0, 00:08:54.647 "w_mbytes_per_sec": 0 00:08:54.647 }, 00:08:54.647 "claimed": false, 00:08:54.647 "zoned": false, 00:08:54.647 "supported_io_types": { 00:08:54.647 "read": true, 00:08:54.648 "write": true, 00:08:54.648 "unmap": true, 00:08:54.648 "flush": false, 00:08:54.648 "reset": true, 00:08:54.648 "nvme_admin": false, 00:08:54.648 "nvme_io": false, 00:08:54.648 "nvme_io_md": false, 00:08:54.648 "write_zeroes": true, 00:08:54.648 "zcopy": false, 00:08:54.648 "get_zone_info": false, 00:08:54.648 "zone_management": false, 00:08:54.648 "zone_append": false, 00:08:54.648 "compare": false, 00:08:54.648 "compare_and_write": false, 00:08:54.648 "abort": false, 00:08:54.648 "seek_hole": true, 00:08:54.648 "seek_data": true, 00:08:54.648 "copy": false, 00:08:54.648 "nvme_iov_md": false 00:08:54.648 }, 00:08:54.648 "driver_specific": { 00:08:54.648 "lvol": { 00:08:54.648 "lvol_store_uuid": "b3da3dc5-a230-4b41-9a87-a1b77c1b8fb8", 00:08:54.648 "base_bdev": "aio_bdev", 00:08:54.648 "thin_provision": false, 00:08:54.648 "num_allocated_clusters": 38, 00:08:54.648 "snapshot": false, 00:08:54.648 "clone": false, 00:08:54.648 "esnap_clone": false 00:08:54.648 } 00:08:54.648 } 00:08:54.648 } 00:08:54.648 ] 00:08:54.648 11:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:08:54.648 11:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b3da3dc5-a230-4b41-9a87-a1b77c1b8fb8 00:08:54.648 11:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:54.906 11:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:54.906 11:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b3da3dc5-a230-4b41-9a87-a1b77c1b8fb8 00:08:54.906 11:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:55.473 11:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:55.473 11:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3f19f2e6-ea9f-48cf-b8a4-cfde154bb7cf 00:08:55.731 11:20:55 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b3da3dc5-a230-4b41-9a87-a1b77c1b8fb8 00:08:55.989 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:56.248 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:56.248 00:08:56.248 real 0m19.560s 00:08:56.248 user 0m49.606s 00:08:56.248 sys 0m4.536s 00:08:56.248 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:56.248 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:56.248 ************************************ 00:08:56.248 END TEST lvs_grow_dirty 00:08:56.248 ************************************ 00:08:56.248 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:56.248 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:08:56.248 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:08:56.248 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:08:56.248 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:56.248 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:08:56.248 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:08:56.248 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # for n in $shm_files 00:08:56.248 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:56.248 nvmf_trace.0 00:08:56.248 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:08:56.248 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:56.248 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:56.248 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:56.248 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:56.248 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:56.248 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:56.248 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:56.248 rmmod nvme_tcp 00:08:56.248 rmmod nvme_fabrics 00:08:56.248 rmmod nvme_keyring 00:08:56.248 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:56.248 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:56.248 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:56.248 
11:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3716523 ']' 00:08:56.248 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3716523 00:08:56.248 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 3716523 ']' 00:08:56.248 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 3716523 00:08:56.248 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:08:56.248 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:56.248 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3716523 00:08:56.248 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:56.248 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:56.248 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3716523' 00:08:56.248 killing process with pid 3716523 00:08:56.248 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 3716523 00:08:56.249 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 3716523 00:08:56.508 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:56.508 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:56.508 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:56.508 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:56.508 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:08:56.508 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:56.508 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:08:56.508 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:56.508 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:56.508 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:56.508 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:56.508 11:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:59.042 11:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:59.042 00:08:59.042 real 0m43.004s 00:08:59.042 user 1m13.169s 00:08:59.042 sys 0m8.422s 00:08:59.042 11:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:59.042 11:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:59.042 ************************************ 00:08:59.042 END TEST nvmf_lvs_grow 00:08:59.042 ************************************ 00:08:59.042 11:20:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:59.042 11:20:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:59.042 11:20:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:59.042 11:20:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:59.042 ************************************ 00:08:59.042 START TEST nvmf_bdev_io_wait 00:08:59.042 ************************************ 00:08:59.042 11:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:59.042 * Looking for test storage... 00:08:59.042 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:59.042 11:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:59.042 11:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:08:59.042 11:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:59.042 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:59.042 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:59.042 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:59.042 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:59.042 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:59.042 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:59.042 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:59.042 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:59.042 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:59.042 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:59.042 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:59.042 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:59.042 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:59.042 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:08:59.042 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:59.042 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:59.042 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:59.042 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:59.042 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:59.042 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:59.042 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:59.042 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:59.042 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:59.042 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:59.042 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:59.042 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:59.042 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:59.042 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:59.042 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:59.042 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:59.042 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:59.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.042 --rc genhtml_branch_coverage=1 00:08:59.042 --rc genhtml_function_coverage=1 00:08:59.042 --rc genhtml_legend=1 00:08:59.042 --rc geninfo_all_blocks=1 00:08:59.042 --rc geninfo_unexecuted_blocks=1 00:08:59.042 00:08:59.043 ' 00:08:59.043 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:59.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.043 --rc genhtml_branch_coverage=1 00:08:59.043 --rc genhtml_function_coverage=1 00:08:59.043 --rc genhtml_legend=1 00:08:59.043 --rc geninfo_all_blocks=1 00:08:59.043 --rc geninfo_unexecuted_blocks=1 00:08:59.043 00:08:59.043 ' 00:08:59.043 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:59.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.043 --rc genhtml_branch_coverage=1 00:08:59.043 --rc genhtml_function_coverage=1 00:08:59.043 --rc genhtml_legend=1 00:08:59.043 --rc geninfo_all_blocks=1 00:08:59.043 --rc geninfo_unexecuted_blocks=1 00:08:59.043 00:08:59.043 ' 00:08:59.043 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:59.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.043 --rc genhtml_branch_coverage=1 00:08:59.043 --rc genhtml_function_coverage=1 00:08:59.043 --rc genhtml_legend=1 00:08:59.043 --rc geninfo_all_blocks=1 00:08:59.043 --rc geninfo_unexecuted_blocks=1 00:08:59.043 00:08:59.043 ' 00:08:59.043 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:59.043 11:20:59 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:59.043 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:59.043 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:59.043 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:59.043 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:59.043 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:59.043 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:59.043 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:59.043 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:59.043 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:59.043 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:59.043 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:59.043 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:59.043 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:59.043 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:59.043 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:59.043 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:59.043 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:59.043 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:59.043 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:59.043 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:59.043 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:59.043 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.043 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.043 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.043 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:59.043 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.043 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:59.043 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:59.043 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:59.043 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:59.043 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:59.043 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:59.043 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:59.043 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:59.043 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:59.043 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:59.043 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:59.043 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:59.043 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:08:59.043 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:59.043 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:59.043 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:59.043 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:59.043 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:59.043 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:59.043 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:59.043 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:59.043 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:59.043 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:59.043 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:59.043 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:08:59.043 11:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:00.946 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:00.946 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:09:00.946 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:00.946 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:00.947 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:00.947 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:00.947 11:21:01 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:00.947 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:00.947 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:00.947 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:00.948 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:00.948 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:00.948 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:00.948 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:00.948 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:00.948 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:00.948 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:00.948 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.307 ms 00:09:00.948 00:09:00.948 --- 10.0.0.2 ping statistics --- 00:09:00.948 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:00.948 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:09:00.948 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:00.948 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:00.948 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:09:00.948 00:09:00.948 --- 10.0.0.1 ping statistics --- 00:09:00.948 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:00.948 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:09:00.948 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:00.948 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:09:00.948 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:00.948 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:00.948 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:00.948 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:00.948 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:00.948 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:00.948 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:00.948 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:00.948 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:00.948 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:00.948 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:00.948 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3719142 00:09:00.948 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:00.948 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 3719142 00:09:00.948 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 3719142 ']' 00:09:00.948 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:00.948 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:00.948 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:00.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:00.948 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:00.948 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:00.948 [2024-11-02 11:21:01.233426] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 
00:09:00.948 [2024-11-02 11:21:01.233498] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:00.948 [2024-11-02 11:21:01.315546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:01.206 [2024-11-02 11:21:01.368892] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:01.206 [2024-11-02 11:21:01.368947] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:01.206 [2024-11-02 11:21:01.368974] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:01.206 [2024-11-02 11:21:01.368994] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:01.206 [2024-11-02 11:21:01.369013] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:01.206 [2024-11-02 11:21:01.370812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:01.206 [2024-11-02 11:21:01.370882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:01.206 [2024-11-02 11:21:01.370972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:01.207 [2024-11-02 11:21:01.370975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.207 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:01.207 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:09:01.207 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:01.207 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:01.207 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:01.207 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:01.207 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:01.207 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.207 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:01.207 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.207 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:01.207 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.207 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:01.207 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.207 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:01.207 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.207 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:09:01.207 [2024-11-02 11:21:01.570136] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:01.207 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.207 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:01.207 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.207 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:01.207 Malloc0 00:09:01.207 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.207 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:01.207 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.207 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:01.466 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.466 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:01.466 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.466 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:01.466 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.466 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:01.466 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.466 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:01.466 [2024-11-02 11:21:01.623186] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:01.466 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.466 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3719199 00:09:01.466 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:01.466 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:01.466 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3719201 00:09:01.466 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:01.466 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:01.466 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:01.466 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:01.466 { 00:09:01.466 "params": { 
00:09:01.466 "name": "Nvme$subsystem", 00:09:01.466 "trtype": "$TEST_TRANSPORT", 00:09:01.466 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:01.466 "adrfam": "ipv4", 00:09:01.466 "trsvcid": "$NVMF_PORT", 00:09:01.466 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:01.466 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:01.466 "hdgst": ${hdgst:-false}, 00:09:01.466 "ddgst": ${ddgst:-false} 00:09:01.466 }, 00:09:01.466 "method": "bdev_nvme_attach_controller" 00:09:01.466 } 00:09:01.466 EOF 00:09:01.466 )") 00:09:01.466 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:01.466 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:01.466 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3719203 00:09:01.466 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:01.466 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:01.466 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:01.466 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:01.466 { 00:09:01.466 "params": { 00:09:01.466 "name": "Nvme$subsystem", 00:09:01.466 "trtype": "$TEST_TRANSPORT", 00:09:01.466 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:01.466 "adrfam": "ipv4", 00:09:01.466 "trsvcid": "$NVMF_PORT", 00:09:01.466 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:01.466 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:01.466 "hdgst": ${hdgst:-false}, 00:09:01.466 "ddgst": ${ddgst:-false} 00:09:01.466 }, 00:09:01.466 "method": "bdev_nvme_attach_controller" 00:09:01.466 } 00:09:01.466 EOF 00:09:01.466 )") 00:09:01.466 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:01.466 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:01.466 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3719206 00:09:01.466 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:01.466 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:01.466 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:01.466 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:01.466 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:01.466 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:01.466 { 00:09:01.466 "params": { 00:09:01.466 "name": "Nvme$subsystem", 00:09:01.466 "trtype": "$TEST_TRANSPORT", 00:09:01.466 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:01.466 "adrfam": "ipv4", 00:09:01.466 "trsvcid": "$NVMF_PORT", 00:09:01.466 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:01.466 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:01.466 "hdgst": ${hdgst:-false}, 
00:09:01.466 "ddgst": ${ddgst:-false} 00:09:01.466 }, 00:09:01.466 "method": "bdev_nvme_attach_controller" 00:09:01.466 } 00:09:01.466 EOF 00:09:01.466 )") 00:09:01.466 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:01.466 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:01.466 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:01.466 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:01.466 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:01.466 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:01.466 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:01.466 { 00:09:01.466 "params": { 00:09:01.466 "name": "Nvme$subsystem", 00:09:01.466 "trtype": "$TEST_TRANSPORT", 00:09:01.466 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:01.466 "adrfam": "ipv4", 00:09:01.466 "trsvcid": "$NVMF_PORT", 00:09:01.466 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:01.466 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:01.466 "hdgst": ${hdgst:-false}, 00:09:01.466 "ddgst": ${ddgst:-false} 00:09:01.466 }, 00:09:01.466 "method": "bdev_nvme_attach_controller" 00:09:01.466 } 00:09:01.466 EOF 00:09:01.466 )") 00:09:01.466 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:01.466 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3719199 00:09:01.466 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:01.466 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:01.466 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:01.466 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:01.466 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:01.466 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:01.466 "params": { 00:09:01.466 "name": "Nvme1", 00:09:01.466 "trtype": "tcp", 00:09:01.466 "traddr": "10.0.0.2", 00:09:01.466 "adrfam": "ipv4", 00:09:01.466 "trsvcid": "4420", 00:09:01.466 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:01.466 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:01.466 "hdgst": false, 00:09:01.466 "ddgst": false 00:09:01.466 }, 00:09:01.466 "method": "bdev_nvme_attach_controller" 00:09:01.466 }' 00:09:01.466 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:09:01.466 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:01.466 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:01.466 "params": { 00:09:01.466 "name": "Nvme1", 00:09:01.466 "trtype": "tcp", 00:09:01.466 "traddr": "10.0.0.2", 00:09:01.466 "adrfam": "ipv4", 00:09:01.466 "trsvcid": "4420", 00:09:01.466 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:01.466 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:01.466 "hdgst": false, 00:09:01.466 "ddgst": false 00:09:01.466 }, 00:09:01.466 "method": "bdev_nvme_attach_controller" 00:09:01.466 }' 00:09:01.466 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:01.466 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:01.466 "params": { 00:09:01.466 "name": "Nvme1", 00:09:01.466 "trtype": "tcp", 00:09:01.466 "traddr": "10.0.0.2", 00:09:01.466 "adrfam": "ipv4", 00:09:01.466 "trsvcid": "4420", 00:09:01.466 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:01.466 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:01.466 "hdgst": false, 00:09:01.466 "ddgst": false 00:09:01.466 }, 00:09:01.466 "method": "bdev_nvme_attach_controller" 00:09:01.467 }' 00:09:01.467 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:01.467 11:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:01.467 "params": { 00:09:01.467 "name": "Nvme1", 00:09:01.467 "trtype": "tcp", 00:09:01.467 "traddr": "10.0.0.2", 00:09:01.467 "adrfam": "ipv4", 00:09:01.467 "trsvcid": "4420", 00:09:01.467 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:01.467 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:01.467 "hdgst": false, 00:09:01.467 "ddgst": false 00:09:01.467 }, 00:09:01.467 "method": "bdev_nvme_attach_controller" 00:09:01.467 }' 00:09:01.467 [2024-11-02 11:21:01.673298] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:09:01.467 [2024-11-02 11:21:01.673376] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:01.467 [2024-11-02 11:21:01.673457] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:09:01.467 [2024-11-02 11:21:01.673457] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:09:01.467 [2024-11-02 11:21:01.673457] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 
00:09:01.467 [2024-11-02 11:21:01.673541] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-11-02 11:21:01.673542] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-11-02 11:21:01.673541] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:01.467 --proc-type=auto ] 00:09:01.467 --proc-type=auto ] 00:09:01.467 [2024-11-02 11:21:01.864943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.725 [2024-11-02 11:21:01.908024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:01.725 [2024-11-02 11:21:01.966799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.725 [2024-11-02 11:21:02.008488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:01.725 [2024-11-02 11:21:02.064933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.725 [2024-11-02 11:21:02.109244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:09:01.985 [2024-11-02 11:21:02.140382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.985 [2024-11-02 11:21:02.178101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:01.985 Running I/O for 1 seconds... 00:09:01.985 Running I/O for 1 seconds... 00:09:01.985 Running I/O for 1 seconds... 00:09:01.985 Running I/O for 1 seconds... 
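For orientation before the result tables: the four bdevperf processes whose startup banners appear above run in parallel against the same Nvme1n1 target, one workload each (write on core mask 0x10 / shm id 1, read on 0x20 / 2, flush on 0x40 / 3, unmap on 0x80 / 4), and the test then waits on their PIDs. A condensed sketch of that launch pattern, reusing the hypothetical gen_attach_config helper from the earlier note (the real script pipes gen_nvmf_target_json into --json /dev/fd/63), looks like:

# Sketch of the parallel bdevperf launch recorded above (paths as in this workspace).
BDEVPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf

run_bdevperf() {                # args: workload, core mask, shm id
  local workload=$1 mask=$2 shm=$3
  "$BDEVPERF" -m "$mask" -i "$shm" --json <(gen_attach_config 1) \
      -q 128 -o 4096 -w "$workload" -t 1 -s 256 &
}

run_bdevperf write 0x10 1; WRITE_PID=$!
run_bdevperf read  0x20 2; READ_PID=$!
run_bdevperf flush 0x40 3; FLUSH_PID=$!
run_bdevperf unmap 0x80 4; UNMAP_PID=$!
wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"   # per-workload results follow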
00:09:02.923 6971.00 IOPS, 27.23 MiB/s [2024-11-02T10:21:03.325Z] 8186.00 IOPS, 31.98 MiB/s [2024-11-02T10:21:03.325Z] 9719.00 IOPS, 37.96 MiB/s 00:09:02.923 Latency(us) 00:09:02.923 [2024-11-02T10:21:03.325Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:02.923 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:02.923 Nvme1n1 : 1.01 8236.31 32.17 0.00 0.00 15462.60 8301.23 25631.86 00:09:02.923 [2024-11-02T10:21:03.325Z] =================================================================================================================== 00:09:02.923 [2024-11-02T10:21:03.325Z] Total : 8236.31 32.17 0.00 0.00 15462.60 8301.23 25631.86 00:09:02.923 00:09:02.923 Latency(us) 00:09:02.923 [2024-11-02T10:21:03.325Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:02.923 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:02.923 Nvme1n1 : 1.01 7029.76 27.46 0.00 0.00 18106.18 7524.50 30292.20 00:09:02.923 [2024-11-02T10:21:03.326Z] =================================================================================================================== 00:09:02.924 [2024-11-02T10:21:03.326Z] Total : 7029.76 27.46 0.00 0.00 18106.18 7524.50 30292.20 00:09:02.924 00:09:02.924 Latency(us) 00:09:02.924 [2024-11-02T10:21:03.326Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:02.924 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:02.924 Nvme1n1 : 1.01 9788.75 38.24 0.00 0.00 13027.21 5097.24 25049.32 00:09:02.924 [2024-11-02T10:21:03.326Z] =================================================================================================================== 00:09:02.924 [2024-11-02T10:21:03.326Z] Total : 9788.75 38.24 0.00 0.00 13027.21 5097.24 25049.32 00:09:03.182 191344.00 IOPS, 747.44 MiB/s 00:09:03.182 Latency(us) 00:09:03.182 [2024-11-02T10:21:03.584Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:03.182 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:03.182 Nvme1n1 : 1.00 190987.21 746.04 0.00 0.00 666.63 297.34 1856.85 00:09:03.182 [2024-11-02T10:21:03.584Z] =================================================================================================================== 00:09:03.182 [2024-11-02T10:21:03.584Z] Total : 190987.21 746.04 0.00 0.00 666.63 297.34 1856.85 00:09:03.182 11:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3719201 00:09:03.182 11:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3719203 00:09:03.182 11:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3719206 00:09:03.182 11:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:03.182 11:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.182 11:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:03.182 11:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.182 11:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:03.182 11:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:03.182 11:21:03 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:03.182 11:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:09:03.182 11:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:03.182 11:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:09:03.182 11:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:03.182 11:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:03.182 rmmod nvme_tcp 00:09:03.182 rmmod nvme_fabrics 00:09:03.182 rmmod nvme_keyring 00:09:03.182 11:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:03.182 11:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:03.182 11:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:03.182 11:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3719142 ']' 00:09:03.182 11:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3719142 00:09:03.182 11:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 3719142 ']' 00:09:03.182 11:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 3719142 00:09:03.182 11:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:09:03.182 11:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:03.182 11:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3719142 00:09:03.440 11:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:03.440 11:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:03.440 11:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3719142' 00:09:03.440 killing process with pid 3719142 00:09:03.440 11:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 3719142 00:09:03.440 11:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 3719142 00:09:03.440 11:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:03.440 11:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:03.440 11:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:03.440 11:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:09:03.440 11:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:09:03.440 11:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:03.440 11:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:09:03.440 11:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:03.440 11:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- 
# remove_spdk_ns 00:09:03.440 11:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:03.440 11:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:03.440 11:21:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:05.976 11:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:05.976 00:09:05.976 real 0m6.924s 00:09:05.976 user 0m14.737s 00:09:05.976 sys 0m3.682s 00:09:05.976 11:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:05.976 11:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:05.976 ************************************ 00:09:05.976 END TEST nvmf_bdev_io_wait 00:09:05.976 ************************************ 00:09:05.976 11:21:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:05.976 11:21:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:05.976 11:21:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:05.976 11:21:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:05.976 ************************************ 00:09:05.976 START TEST nvmf_queue_depth 00:09:05.976 ************************************ 00:09:05.976 11:21:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:05.976 * Looking for test storage... 
00:09:05.976 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:05.976 11:21:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:05.976 11:21:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:09:05.976 11:21:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:05.976 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:05.976 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:05.976 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:05.976 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:05.976 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:05.976 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:05.976 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:05.976 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:05.976 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:05.976 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:05.976 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:05.976 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:05.976 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:05.976 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:05.976 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:05.976 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:05.976 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:05.976 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:05.976 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:05.976 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:05.976 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:05.976 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:05.976 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:05.976 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:05.976 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:05.976 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:05.976 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:05.976 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:05.976 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:05.976 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:05.976 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:05.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.976 --rc genhtml_branch_coverage=1 00:09:05.976 --rc genhtml_function_coverage=1 00:09:05.976 --rc genhtml_legend=1 00:09:05.976 --rc geninfo_all_blocks=1 00:09:05.976 --rc geninfo_unexecuted_blocks=1 00:09:05.976 00:09:05.976 ' 00:09:05.976 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:05.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.976 --rc genhtml_branch_coverage=1 00:09:05.976 --rc genhtml_function_coverage=1 00:09:05.976 --rc genhtml_legend=1 00:09:05.976 --rc geninfo_all_blocks=1 00:09:05.976 --rc geninfo_unexecuted_blocks=1 00:09:05.976 00:09:05.976 ' 00:09:05.976 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:05.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.976 --rc genhtml_branch_coverage=1 00:09:05.976 --rc genhtml_function_coverage=1 00:09:05.976 --rc genhtml_legend=1 00:09:05.976 --rc geninfo_all_blocks=1 00:09:05.976 --rc geninfo_unexecuted_blocks=1 00:09:05.976 00:09:05.976 ' 00:09:05.976 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:05.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.976 --rc genhtml_branch_coverage=1 00:09:05.976 --rc genhtml_function_coverage=1 00:09:05.976 --rc genhtml_legend=1 00:09:05.976 --rc geninfo_all_blocks=1 00:09:05.976 --rc geninfo_unexecuted_blocks=1 00:09:05.976 00:09:05.976 ' 00:09:05.977 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:05.977 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:09:05.977 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:05.977 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:05.977 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:05.977 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:05.977 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:05.977 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:05.977 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:05.977 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:05.977 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:05.977 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:05.977 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:05.977 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:05.977 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:05.977 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:05.977 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:05.977 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:05.977 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:05.977 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:05.977 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:05.977 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:05.977 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:05.977 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.977 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.977 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.977 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:05.977 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.977 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:05.977 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:05.977 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:05.977 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:05.977 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:05.977 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:05.977 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:05.977 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:05.977 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:05.977 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:05.977 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:05.977 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:05.977 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:09:05.977 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:05.977 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:05.977 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:05.977 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:05.977 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:05.977 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:05.977 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:05.977 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:05.977 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:05.977 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:05.977 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:05.977 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:05.977 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:09:05.977 11:21:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:07.880 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:07.880 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:09:07.880 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:07.880 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:07.880 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:07.880 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:07.880 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:07.880 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:09:07.880 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:07.880 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:09:07.880 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:09:07.880 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:09:07.880 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:09:07.880 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:09:07.880 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:09:07.880 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:07.880 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:07.880 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:07.880 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:07.880 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:07.880 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:07.880 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:07.880 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:07.880 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:07.880 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:07.880 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:07.880 11:21:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:07.880 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:07.880 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:07.880 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:07.880 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:07.880 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:07.880 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:07.880 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:07.880 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:07.880 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:07.880 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:07.880 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:07.880 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:07.880 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:07.880 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:07.880 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:07.880 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:07.880 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:07.880 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:07.880 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:07.880 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:07.880 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:07.880 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:07.880 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:07.880 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:07.880 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:07.880 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:07.880 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:07.880 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:07.880 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:07.880 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:07.880 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:07.880 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:07.880 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:07.880 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:07.880 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:07.880 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:07.880 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:07.880 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:07.880 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:07.880 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:07.880 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:07.880 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:07.880 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:07.880 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:07.880 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:07.880 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:07.880 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:09:07.880 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:07.880 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:07.880 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:07.880 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:07.880 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:07.880 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:07.880 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:07.880 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:07.880 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:07.880 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:07.881 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:07.881 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:07.881 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:07.881 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:07.881 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:07.881 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:07.881 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:07.881 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:07.881 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:07.881 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:07.881 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:07.881 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:07.881 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:07.881 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:07.881 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:07.881 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:07.881 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:07.881 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms 00:09:07.881 00:09:07.881 --- 10.0.0.2 ping statistics --- 00:09:07.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:07.881 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:09:07.881 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:07.881 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:07.881 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:09:07.881 00:09:07.881 --- 10.0.0.1 ping statistics --- 00:09:07.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:07.881 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:09:07.881 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:07.881 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:09:07.881 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:07.881 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:07.881 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:07.881 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:07.881 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:07.881 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:07.881 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:07.881 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:07.881 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:07.881 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:07.881 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:07.881 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3721716 00:09:07.881 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 3721716 00:09:07.881 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:07.881 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 3721716 ']' 00:09:07.881 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:07.881 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:07.881 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:07.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:07.881 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:07.881 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:07.881 [2024-11-02 11:21:08.216037] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 
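Condensed from the nvmftestinit/nvmf_tcp_init trace above, the loopback topology this run relies on boils down to the following commands. This is an illustrative sketch, not a copy of nvmf/common.sh; the interface names cvl_0_0/cvl_0_1 are simply what this node's ice ports enumerated as, and the iptables comment tagging used by the script is omitted.

ip netns add cvl_0_0_ns_spdk                                          # target side lives in its own namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                             # move the first port into that namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator IP on the host side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target IP inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # let NVMe/TCP traffic reach the listener
ping -c 1 10.0.0.2                                                    # host -> target, as verified above
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                      # target -> host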
00:09:07.881 [2024-11-02 11:21:08.216129] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:08.140 [2024-11-02 11:21:08.295690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.140 [2024-11-02 11:21:08.344739] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:08.140 [2024-11-02 11:21:08.344803] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:08.140 [2024-11-02 11:21:08.344818] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:08.140 [2024-11-02 11:21:08.344829] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:08.140 [2024-11-02 11:21:08.344839] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:08.140 [2024-11-02 11:21:08.345504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:08.140 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:08.140 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:09:08.140 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:08.140 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:08.140 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:08.140 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:08.140 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:08.140 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.140 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:08.140 [2024-11-02 11:21:08.494421] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:08.140 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.140 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:08.140 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.140 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:08.140 Malloc0 00:09:08.140 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.140 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:08.140 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.140 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:08.140 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.140 11:21:08 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:08.140 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.140 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:08.140 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.140 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:08.140 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.140 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:08.399 [2024-11-02 11:21:08.543836] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:08.399 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.399 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3721950 00:09:08.399 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:08.399 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:08.399 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3721950 /var/tmp/bdevperf.sock 00:09:08.399 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 3721950 ']' 00:09:08.399 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:08.399 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:08.399 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:08.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:08.399 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:08.399 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:08.399 [2024-11-02 11:21:08.592781] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 
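Stripped of the xtrace noise, the queue_depth test drives the target and the just-launched bdevperf through the RPC sequence below. This is a sketch assuming rpc_cmd resolves to SPDK's scripts/rpc.py as usual; the long workspace paths are shortened to rpc.py and bdevperf.py, and the flag values are the ones visible in the trace.

rpc.py nvmf_create_transport -t tcp -o -u 8192                        # create the TCP transport with the test's options
rpc.py bdev_malloc_create 64 512 -b Malloc0                           # 64 MB malloc bdev, 512-byte blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# bdevperf was started above with: -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
bdevperf.py -s /var/tmp/bdevperf.sock perform_tests                   # drives the 10-second run shown below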
00:09:08.399 [2024-11-02 11:21:08.592857] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3721950 ] 00:09:08.399 [2024-11-02 11:21:08.663432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.399 [2024-11-02 11:21:08.713059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.657 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:08.657 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:09:08.657 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:08.657 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.657 11:21:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:08.657 NVMe0n1 00:09:08.657 11:21:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.657 11:21:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:08.915 Running I/O for 10 seconds... 00:09:10.800 7252.00 IOPS, 28.33 MiB/s [2024-11-02T10:21:12.577Z] 7684.50 IOPS, 30.02 MiB/s [2024-11-02T10:21:13.512Z] 7837.00 IOPS, 30.61 MiB/s [2024-11-02T10:21:14.448Z] 7915.25 IOPS, 30.92 MiB/s [2024-11-02T10:21:15.383Z] 7915.80 IOPS, 30.92 MiB/s [2024-11-02T10:21:16.318Z] 7931.67 IOPS, 30.98 MiB/s [2024-11-02T10:21:17.393Z] 7965.43 IOPS, 31.11 MiB/s [2024-11-02T10:21:18.326Z] 7970.12 IOPS, 31.13 MiB/s [2024-11-02T10:21:19.260Z] 8011.44 IOPS, 31.29 MiB/s [2024-11-02T10:21:19.260Z] 8041.70 IOPS, 31.41 MiB/s 00:09:18.858 Latency(us) 00:09:18.858 [2024-11-02T10:21:19.260Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:18.858 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:18.858 Verification LBA range: start 0x0 length 0x4000 00:09:18.858 NVMe0n1 : 10.09 8062.26 31.49 0.00 0.00 126356.11 24660.95 83497.72 00:09:18.858 [2024-11-02T10:21:19.260Z] =================================================================================================================== 00:09:18.858 [2024-11-02T10:21:19.260Z] Total : 8062.26 31.49 0.00 0.00 126356.11 24660.95 83497.72 00:09:18.858 { 00:09:18.858 "results": [ 00:09:18.858 { 00:09:18.858 "job": "NVMe0n1", 00:09:18.858 "core_mask": "0x1", 00:09:18.858 "workload": "verify", 00:09:18.858 "status": "finished", 00:09:18.858 "verify_range": { 00:09:18.858 "start": 0, 00:09:18.858 "length": 16384 00:09:18.858 }, 00:09:18.858 "queue_depth": 1024, 00:09:18.858 "io_size": 4096, 00:09:18.858 "runtime": 10.089475, 00:09:18.858 "iops": 8062.262902678286, 00:09:18.858 "mibps": 31.493214463587055, 00:09:18.858 "io_failed": 0, 00:09:18.858 "io_timeout": 0, 00:09:18.858 "avg_latency_us": 126356.1131241076, 00:09:18.858 "min_latency_us": 24660.954074074074, 00:09:18.858 "max_latency_us": 83497.71851851852 00:09:18.858 } 00:09:18.858 ], 00:09:18.858 "core_count": 1 00:09:18.858 } 00:09:19.117 11:21:19 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3721950 00:09:19.117 11:21:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 3721950 ']' 00:09:19.117 11:21:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 3721950 00:09:19.117 11:21:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:09:19.117 11:21:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:19.117 11:21:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3721950 00:09:19.117 11:21:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:19.117 11:21:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:19.117 11:21:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3721950' 00:09:19.117 killing process with pid 3721950 00:09:19.117 11:21:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 3721950 00:09:19.117 Received shutdown signal, test time was about 10.000000 seconds 00:09:19.117 00:09:19.117 Latency(us) 00:09:19.118 [2024-11-02T10:21:19.520Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:19.118 [2024-11-02T10:21:19.520Z] =================================================================================================================== 00:09:19.118 [2024-11-02T10:21:19.520Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:19.118 11:21:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 3721950 00:09:19.118 11:21:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:19.118 11:21:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:19.118 11:21:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:19.118 11:21:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:19.118 11:21:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:19.118 11:21:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:19.118 11:21:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:19.118 11:21:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:19.118 rmmod nvme_tcp 00:09:19.379 rmmod nvme_fabrics 00:09:19.379 rmmod nvme_keyring 00:09:19.379 11:21:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:19.379 11:21:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:19.379 11:21:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:19.379 11:21:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3721716 ']' 00:09:19.379 11:21:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 3721716 00:09:19.379 11:21:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 3721716 ']' 00:09:19.379 11:21:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@956 -- # kill -0 3721716 00:09:19.379 11:21:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:09:19.379 11:21:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:19.379 11:21:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3721716 00:09:19.379 11:21:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:09:19.379 11:21:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:09:19.379 11:21:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3721716' 00:09:19.379 killing process with pid 3721716 00:09:19.380 11:21:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 3721716 00:09:19.380 11:21:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 3721716 00:09:19.639 11:21:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:19.639 11:21:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:19.639 11:21:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:19.639 11:21:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:19.639 11:21:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:09:19.639 11:21:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:19.639 11:21:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:09:19.639 11:21:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:19.639 11:21:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:19.639 11:21:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:19.639 11:21:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:19.639 11:21:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:21.542 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:21.542 00:09:21.542 real 0m15.996s 00:09:21.542 user 0m22.612s 00:09:21.542 sys 0m3.016s 00:09:21.542 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:21.542 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:21.542 ************************************ 00:09:21.542 END TEST nvmf_queue_depth 00:09:21.542 ************************************ 00:09:21.542 11:21:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:21.542 11:21:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:21.542 11:21:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:21.542 11:21:21 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:09:21.542 ************************************ 00:09:21.542 START TEST nvmf_target_multipath 00:09:21.542 ************************************ 00:09:21.802 11:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:21.802 * Looking for test storage... 00:09:21.802 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:21.802 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:21.802 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:09:21.802 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:21.802 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:21.802 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:21.802 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:21.802 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:21.802 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:21.802 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:21.802 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:21.802 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:21.802 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:21.802 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:21.802 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:21.802 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:21.802 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:21.802 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:21.802 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:21.802 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:21.802 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:21.802 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:21.802 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:21.802 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:21.802 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:21.802 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:21.802 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:21.802 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:21.802 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:21.802 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:21.802 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:21.802 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:21.802 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:21.802 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:21.802 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:21.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.802 --rc genhtml_branch_coverage=1 00:09:21.802 --rc genhtml_function_coverage=1 00:09:21.802 --rc genhtml_legend=1 00:09:21.802 --rc geninfo_all_blocks=1 00:09:21.802 --rc geninfo_unexecuted_blocks=1 00:09:21.802 00:09:21.802 ' 00:09:21.802 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:21.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.802 --rc genhtml_branch_coverage=1 00:09:21.802 --rc genhtml_function_coverage=1 00:09:21.802 --rc genhtml_legend=1 00:09:21.802 --rc geninfo_all_blocks=1 00:09:21.802 --rc geninfo_unexecuted_blocks=1 00:09:21.802 00:09:21.802 ' 00:09:21.802 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:21.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.802 --rc genhtml_branch_coverage=1 00:09:21.802 --rc genhtml_function_coverage=1 00:09:21.802 --rc genhtml_legend=1 00:09:21.802 --rc geninfo_all_blocks=1 00:09:21.802 --rc geninfo_unexecuted_blocks=1 00:09:21.802 00:09:21.802 ' 00:09:21.802 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:21.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.802 --rc genhtml_branch_coverage=1 00:09:21.802 --rc genhtml_function_coverage=1 00:09:21.802 --rc genhtml_legend=1 00:09:21.802 --rc geninfo_all_blocks=1 00:09:21.802 --rc geninfo_unexecuted_blocks=1 00:09:21.802 00:09:21.802 ' 00:09:21.802 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:21.802 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:21.802 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:21.802 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:21.802 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:21.802 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:21.802 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:21.802 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:21.802 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:21.802 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:21.802 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:21.802 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:21.802 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:21.802 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:21.802 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:21.802 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:21.802 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:21.802 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:21.802 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:21.802 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:21.802 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:21.802 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:21.802 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:21.803 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.803 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.803 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.803 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:21.803 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.803 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:21.803 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:21.803 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:21.803 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:21.803 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:21.803 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:21.803 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:21.803 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:21.803 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:21.803 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:21.803 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:21.803 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:21.803 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:21.803 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:21.803 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:21.803 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:21.803 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:21.803 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:21.803 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:21.803 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:21.803 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:21.803 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:21.803 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:21.803 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:21.803 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:21.803 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:21.803 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:09:21.803 11:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:24.336 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:24.336 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:09:24.336 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:24.336 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:24.336 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:24.336 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:24.336 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:24.336 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:09:24.336 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:24.336 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:09:24.336 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:09:24.336 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:09:24.336 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:09:24.336 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:09:24.336 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:09:24.336 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:24.336 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:24.336 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:24.336 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:24.336 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:24.336 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:24.336 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:24.336 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:24.336 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:24.336 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:24.336 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:24.336 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:24.336 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:24.336 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:24.336 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:24.336 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:24.336 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:24.336 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:24.336 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:24.336 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:24.336 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:24.336 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:24.336 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:24.336 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:24.336 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:24.336 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:24.336 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:24.336 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:24.336 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:24.336 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:24.336 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:24.336 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:24.336 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:24.336 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:24.336 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:24.337 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:24.337 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:24.337 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:24.337 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:24.337 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:24.337 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:24.337 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:24.337 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:24.337 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:24.337 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:24.337 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:24.337 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:24.337 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:24.337 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:24.337 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:24.337 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:24.337 11:21:24 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:24.337 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:24.337 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:24.337 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:24.337 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:24.337 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:24.337 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:24.337 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:09:24.337 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:24.337 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:24.337 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:24.337 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:24.337 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:24.337 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:24.337 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:24.337 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:24.337 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:24.337 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:24.337 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:24.337 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:24.337 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:24.337 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:24.337 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:24.337 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:24.337 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:24.337 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:24.337 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:24.337 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:24.337 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:09:24.337 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:24.337 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:24.337 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:24.337 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:24.337 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:24.337 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:24.337 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:09:24.337 00:09:24.337 --- 10.0.0.2 ping statistics --- 00:09:24.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.337 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:09:24.337 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:24.337 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:24.337 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:09:24.337 00:09:24.337 --- 10.0.0.1 ping statistics --- 00:09:24.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.337 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:09:24.337 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:24.337 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:09:24.337 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:24.337 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:24.337 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:24.337 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:24.337 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:24.337 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:24.337 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:24.337 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:09:24.337 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:09:24.337 only one NIC for nvmf test 00:09:24.337 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:09:24.337 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:24.337 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:24.337 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:24.337 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
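
The nvmftestinit trace above amounts to splitting the dual-port E810 NIC between the root namespace and a private one, so target and initiator can exercise real hardware on a single host. A minimal standalone sketch of that setup, using the interface names, addresses, and namespace name that appear in this log (they will differ on other rigs):

    # target-side port moves into its own namespace; the initiator port stays in the root namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP listener port, tagged with an SPDK_NVMF comment so teardown can find the rule
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
    # sanity-check reachability in both directions
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

With only the two ports of one physical NIC available, NVMF_SECOND_TARGET_IP stays empty, which is why the multipath test prints "only one NIC for nvmf test" and proceeds straight to nvmftestfini.
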
00:09:24.337 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:24.337 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:24.337 rmmod nvme_tcp 00:09:24.337 rmmod nvme_fabrics 00:09:24.337 rmmod nvme_keyring 00:09:24.337 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:24.337 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:24.337 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:24.337 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:24.337 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:24.337 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:24.337 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:24.337 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:24.337 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:24.337 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:24.337 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:24.337 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:24.337 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:24.337 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:24.337 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:24.337 11:21:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:26.238 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:26.238 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:26.238 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:26.238 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:26.238 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:26.238 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:26.238 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:26.238 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:26.238 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:26.238 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:26.238 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:26.238 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:09:26.238 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:26.238 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:26.238 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:26.238 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:26.238 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:26.238 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:26.238 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:26.238 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:26.238 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:26.238 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:26.238 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:26.238 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:26.238 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:26.238 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:26.238 00:09:26.238 real 0m4.677s 00:09:26.238 user 0m0.978s 00:09:26.238 sys 0m1.715s 00:09:26.238 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:26.238 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:26.238 ************************************ 00:09:26.238 END TEST nvmf_target_multipath 00:09:26.238 ************************************ 00:09:26.497 11:21:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:26.497 11:21:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:26.497 11:21:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:26.497 11:21:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:26.497 ************************************ 00:09:26.497 START TEST nvmf_zcopy 00:09:26.497 ************************************ 00:09:26.497 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:26.497 * Looking for test storage... 
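
The nvmftestfini teardown that closed the multipath test above boils down to unloading the NVMe/TCP modules, removing only the firewall rules the test added, and dropping the namespace. A rough equivalent of those steps (the namespace removal itself, _remove_spdk_ns, runs with xtrace disabled here, so deleting the namespace is an assumption rather than a command taken from this trace):

    sync
    modprobe -v -r nvme-tcp       # pulls out nvme_tcp, nvme_fabrics and nvme_keyring, per the rmmod lines above
    modprobe -v -r nvme-fabrics   # failures are tolerated: the script runs these under set +e
    # restore iptables minus the SPDK_NVMF-tagged rules added during setup
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk   # assumed equivalent of _remove_spdk_ns; returns cvl_0_0 to the root namespace
    ip -4 addr flush cvl_0_1

The same fini path runs a second time through the EXIT trap (target/multipath.sh@1), which is why the sequence repeats almost verbatim in the trace.
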
00:09:26.497 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:26.497 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:26.497 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:09:26.497 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:26.497 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:26.497 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:26.497 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:26.497 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:26.497 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:26.497 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:26.497 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:09:26.497 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:26.497 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:26.497 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:26.497 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:26.497 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:26.497 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:26.497 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:26.497 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:26.497 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:26.497 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:26.497 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:26.497 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:26.497 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:26.497 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:26.497 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:26.497 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:26.497 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:26.497 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:26.497 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:26.497 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:26.497 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:26.497 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:26.497 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:26.497 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:26.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.497 --rc genhtml_branch_coverage=1 00:09:26.497 --rc genhtml_function_coverage=1 00:09:26.497 --rc genhtml_legend=1 00:09:26.497 --rc geninfo_all_blocks=1 00:09:26.497 --rc geninfo_unexecuted_blocks=1 00:09:26.497 00:09:26.497 ' 00:09:26.497 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:26.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.497 --rc genhtml_branch_coverage=1 00:09:26.497 --rc genhtml_function_coverage=1 00:09:26.497 --rc genhtml_legend=1 00:09:26.497 --rc geninfo_all_blocks=1 00:09:26.497 --rc geninfo_unexecuted_blocks=1 00:09:26.497 00:09:26.497 ' 00:09:26.497 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:26.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.497 --rc genhtml_branch_coverage=1 00:09:26.497 --rc genhtml_function_coverage=1 00:09:26.497 --rc genhtml_legend=1 00:09:26.497 --rc geninfo_all_blocks=1 00:09:26.497 --rc geninfo_unexecuted_blocks=1 00:09:26.497 00:09:26.497 ' 00:09:26.497 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:26.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.497 --rc genhtml_branch_coverage=1 00:09:26.497 --rc genhtml_function_coverage=1 00:09:26.497 --rc genhtml_legend=1 00:09:26.497 --rc geninfo_all_blocks=1 00:09:26.497 --rc geninfo_unexecuted_blocks=1 00:09:26.497 00:09:26.497 ' 00:09:26.497 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:26.497 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:26.497 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:09:26.497 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:26.497 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:26.497 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:26.497 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:26.497 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:26.497 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:26.497 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:26.497 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:26.497 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:26.497 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:26.497 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:26.497 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:26.497 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:26.497 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:26.497 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:26.497 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:26.497 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:26.497 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:26.497 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:26.497 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:26.498 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.498 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.498 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.498 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:26.498 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.498 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:26.498 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:26.498 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:26.498 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:26.498 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:26.498 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:26.498 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:26.498 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:26.498 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:26.498 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:26.498 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:26.498 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:26.498 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:26.498 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:09:26.498 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:26.498 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:26.498 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:26.498 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:26.498 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:26.498 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:26.498 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:26.498 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:26.498 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:09:26.498 11:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:29.029 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:29.029 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:09:29.029 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:29.029 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:29.029 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:29.029 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:29.029 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:29.029 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:09:29.029 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:29.029 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:09:29.029 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:09:29.029 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:09:29.029 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:09:29.029 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:09:29.029 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:09:29.029 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:29.029 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:29.029 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:29.029 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:29.029 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:29.029 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:29.029 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:29.029 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:29.029 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:29.029 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:29.029 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:29.029 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:29.029 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:29.029 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:29.029 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:29.029 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:29.029 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:29.029 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:29.029 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:29.029 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:29.029 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:29.029 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:29.029 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:29.029 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:29.029 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:29.029 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:29.029 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:29.029 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:29.029 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:29.029 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:29.029 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:29.029 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:29.029 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:29.029 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:29.029 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:29.029 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:29.029 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:29.029 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:29.029 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:29.029 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:29.029 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:29.029 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:29.029 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:29.029 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:29.029 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:29.029 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:29.029 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:29.029 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:29.029 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:29.029 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:29.029 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:29.029 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:29.029 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:29.029 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:29.029 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:29.029 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:29.029 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:29.029 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:29.029 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:09:29.029 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:29.030 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:29.030 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:29.030 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:29.030 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:29.030 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:29.030 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:29.030 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:29.030 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:29.030 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:29.030 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:29.030 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:09:29.030 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:29.030 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:29.030 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:29.030 11:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:29.030 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:29.030 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:29.030 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:29.030 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:29.030 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:29.030 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:29.030 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:29.030 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:29.030 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:29.030 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:29.030 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:29.030 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:09:29.030 00:09:29.030 --- 10.0.0.2 ping statistics --- 00:09:29.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.030 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:09:29.030 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:29.030 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:29.030 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:09:29.030 00:09:29.030 --- 10.0.0.1 ping statistics --- 00:09:29.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.030 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:09:29.030 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:29.030 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:09:29.030 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:29.030 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:29.030 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:29.030 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:29.030 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:29.030 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:29.030 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:29.030 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:29.030 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:29.030 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:29.030 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:29.030 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=3727308 00:09:29.030 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:29.030 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3727308 00:09:29.030 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 3727308 ']' 00:09:29.030 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:29.030 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:29.030 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:29.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:29.030 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:29.030 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:29.030 [2024-11-02 11:21:29.222520] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 
00:09:29.030 [2024-11-02 11:21:29.222641] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:29.030 [2024-11-02 11:21:29.295357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.030 [2024-11-02 11:21:29.342265] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:29.030 [2024-11-02 11:21:29.342331] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:29.030 [2024-11-02 11:21:29.342344] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:29.030 [2024-11-02 11:21:29.342355] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:29.030 [2024-11-02 11:21:29.342365] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:29.030 [2024-11-02 11:21:29.343028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:29.289 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:29.289 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0 00:09:29.289 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:29.289 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:29.289 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:29.289 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:29.289 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:29.289 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:29.289 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.289 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:29.289 [2024-11-02 11:21:29.494029] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:29.289 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.289 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:29.289 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.289 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:29.289 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.289 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:29.289 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.289 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:29.289 [2024-11-02 11:21:29.510270] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:09:29.289 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.289 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:29.289 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.289 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:29.289 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.289 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:29.289 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.289 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:29.289 malloc0 00:09:29.289 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.289 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:29.289 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.289 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:29.289 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.289 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:29.289 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:29.289 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:29.289 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:29.289 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:29.289 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:29.289 { 00:09:29.289 "params": { 00:09:29.289 "name": "Nvme$subsystem", 00:09:29.289 "trtype": "$TEST_TRANSPORT", 00:09:29.289 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:29.289 "adrfam": "ipv4", 00:09:29.289 "trsvcid": "$NVMF_PORT", 00:09:29.289 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:29.289 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:29.289 "hdgst": ${hdgst:-false}, 00:09:29.289 "ddgst": ${ddgst:-false} 00:09:29.289 }, 00:09:29.289 "method": "bdev_nvme_attach_controller" 00:09:29.289 } 00:09:29.289 EOF 00:09:29.289 )") 00:09:29.289 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:29.289 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
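The rpc_cmd calls traced above build the whole zcopy target configuration. Below is a minimal sketch of the same sequence issued directly with scripts/rpc.py, assuming rpc_cmd is a thin wrapper around that script talking to the nvmf_tgt RPC socket at /var/tmp/spdk.sock (the wrapper itself is not expanded in the trace); the method names and arguments are copied from the log.

    RPC="scripts/rpc.py -s /var/tmp/spdk.sock"     # assumed stand-in for the rpc_cmd helper

    # TCP transport with zero-copy requested; -o and -c 0 are carried over verbatim from the trace
    $RPC nvmf_create_transport -t tcp -o -c 0 --zcopy

    # subsystem cnode1: any host allowed (-a), fixed serial number (-s), at most 10 namespaces (-m)
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10

    # data listener on the namespaced address, plus the discovery listener on the same port
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    # 32 MB malloc bdev with a 4096-byte block size, exported as namespace 1
    $RPC bdev_malloc_create 32 4096 -b malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1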
00:09:29.289 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:29.289 11:21:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:29.289 "params": { 00:09:29.289 "name": "Nvme1", 00:09:29.289 "trtype": "tcp", 00:09:29.289 "traddr": "10.0.0.2", 00:09:29.289 "adrfam": "ipv4", 00:09:29.289 "trsvcid": "4420", 00:09:29.289 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:29.289 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:29.289 "hdgst": false, 00:09:29.289 "ddgst": false 00:09:29.289 }, 00:09:29.289 "method": "bdev_nvme_attach_controller" 00:09:29.289 }' 00:09:29.289 [2024-11-02 11:21:29.594762] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:09:29.289 [2024-11-02 11:21:29.594828] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3727329 ] 00:09:29.289 [2024-11-02 11:21:29.666330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.547 [2024-11-02 11:21:29.718671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.547 Running I/O for 10 seconds... 00:09:31.856 5358.00 IOPS, 41.86 MiB/s [2024-11-02T10:21:33.192Z] 5394.00 IOPS, 42.14 MiB/s [2024-11-02T10:21:34.126Z] 5421.33 IOPS, 42.35 MiB/s [2024-11-02T10:21:35.061Z] 5452.25 IOPS, 42.60 MiB/s [2024-11-02T10:21:35.995Z] 5446.60 IOPS, 42.55 MiB/s [2024-11-02T10:21:37.370Z] 5452.50 IOPS, 42.60 MiB/s [2024-11-02T10:21:37.936Z] 5447.71 IOPS, 42.56 MiB/s [2024-11-02T10:21:39.310Z] 5451.25 IOPS, 42.59 MiB/s [2024-11-02T10:21:40.244Z] 5454.44 IOPS, 42.61 MiB/s [2024-11-02T10:21:40.244Z] 5457.40 IOPS, 42.64 MiB/s 00:09:39.842 Latency(us) 00:09:39.842 [2024-11-02T10:21:40.244Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:39.842 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:39.842 Verification LBA range: start 0x0 length 0x1000 00:09:39.842 Nvme1n1 : 10.02 5459.94 42.66 0.00 0.00 23378.36 3640.89 32039.82 00:09:39.842 [2024-11-02T10:21:40.244Z] =================================================================================================================== 00:09:39.842 [2024-11-02T10:21:40.244Z] Total : 5459.94 42.66 0.00 0.00 23378.36 3640.89 32039.82 00:09:39.842 11:21:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3728548 00:09:39.842 11:21:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:39.842 11:21:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:39.842 11:21:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:39.842 11:21:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:39.842 11:21:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:39.842 11:21:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:39.842 11:21:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:39.842 11:21:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:39.842 { 00:09:39.842 "params": { 00:09:39.842 "name": 
"Nvme$subsystem", 00:09:39.842 "trtype": "$TEST_TRANSPORT", 00:09:39.842 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:39.842 "adrfam": "ipv4", 00:09:39.842 "trsvcid": "$NVMF_PORT", 00:09:39.842 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:39.842 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:39.842 "hdgst": ${hdgst:-false}, 00:09:39.842 "ddgst": ${ddgst:-false} 00:09:39.842 }, 00:09:39.842 "method": "bdev_nvme_attach_controller" 00:09:39.842 } 00:09:39.842 EOF 00:09:39.842 )") 00:09:39.842 11:21:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:39.842 [2024-11-02 11:21:40.165452] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.842 [2024-11-02 11:21:40.165495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.842 11:21:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:09:39.842 11:21:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:39.842 11:21:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:39.842 "params": { 00:09:39.842 "name": "Nvme1", 00:09:39.842 "trtype": "tcp", 00:09:39.842 "traddr": "10.0.0.2", 00:09:39.842 "adrfam": "ipv4", 00:09:39.842 "trsvcid": "4420", 00:09:39.842 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:39.842 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:39.842 "hdgst": false, 00:09:39.842 "ddgst": false 00:09:39.842 }, 00:09:39.842 "method": "bdev_nvme_attach_controller" 00:09:39.842 }' 00:09:39.842 [2024-11-02 11:21:40.173413] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.842 [2024-11-02 11:21:40.173438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.842 [2024-11-02 11:21:40.181434] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.842 [2024-11-02 11:21:40.181457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.842 [2024-11-02 11:21:40.189465] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.842 [2024-11-02 11:21:40.189490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.842 [2024-11-02 11:21:40.197483] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.842 [2024-11-02 11:21:40.197508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.842 [2024-11-02 11:21:40.205500] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.842 [2024-11-02 11:21:40.205523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.842 [2024-11-02 11:21:40.211071] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 
00:09:39.842 [2024-11-02 11:21:40.211163] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3728548 ] 00:09:39.842 [2024-11-02 11:21:40.213521] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.842 [2024-11-02 11:21:40.213570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.842 [2024-11-02 11:21:40.221557] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.842 [2024-11-02 11:21:40.221593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.842 [2024-11-02 11:21:40.229583] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.842 [2024-11-02 11:21:40.229609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.842 [2024-11-02 11:21:40.237592] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.842 [2024-11-02 11:21:40.237617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.101 [2024-11-02 11:21:40.245623] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.101 [2024-11-02 11:21:40.245653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.101 [2024-11-02 11:21:40.253679] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.101 [2024-11-02 11:21:40.253707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.101 [2024-11-02 11:21:40.261674] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.101 [2024-11-02 11:21:40.261699] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.101 [2024-11-02 11:21:40.269695] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.101 [2024-11-02 11:21:40.269719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.101 [2024-11-02 11:21:40.277722] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.101 [2024-11-02 11:21:40.277748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.101 [2024-11-02 11:21:40.285739] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.101 [2024-11-02 11:21:40.285764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.101 [2024-11-02 11:21:40.289011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.101 [2024-11-02 11:21:40.293769] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.101 [2024-11-02 11:21:40.293796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.101 [2024-11-02 11:21:40.301821] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.101 [2024-11-02 11:21:40.301860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.101 [2024-11-02 11:21:40.309823] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.101 [2024-11-02 11:21:40.309853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:09:40.101 [2024-11-02 11:21:40.317829] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.101 [2024-11-02 11:21:40.317853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.101 [2024-11-02 11:21:40.325850] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.101 [2024-11-02 11:21:40.325874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.101 [2024-11-02 11:21:40.333893] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.101 [2024-11-02 11:21:40.333918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.101 [2024-11-02 11:21:40.341891] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.101 [2024-11-02 11:21:40.341915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.101 [2024-11-02 11:21:40.343758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.101 [2024-11-02 11:21:40.349912] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.101 [2024-11-02 11:21:40.349936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.101 [2024-11-02 11:21:40.357939] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.101 [2024-11-02 11:21:40.357966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.101 [2024-11-02 11:21:40.365981] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.101 [2024-11-02 11:21:40.366027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.101 [2024-11-02 11:21:40.374004] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.101 [2024-11-02 11:21:40.374039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.101 [2024-11-02 11:21:40.382026] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.101 [2024-11-02 11:21:40.382063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.101 [2024-11-02 11:21:40.390053] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.101 [2024-11-02 11:21:40.390092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.101 [2024-11-02 11:21:40.398071] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.101 [2024-11-02 11:21:40.398108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.101 [2024-11-02 11:21:40.406094] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.101 [2024-11-02 11:21:40.406133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.101 [2024-11-02 11:21:40.414090] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.101 [2024-11-02 11:21:40.414115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.101 [2024-11-02 11:21:40.422130] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.101 [2024-11-02 11:21:40.422164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.101 [2024-11-02 
11:21:40.430165] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.101 [2024-11-02 11:21:40.430202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.101 [2024-11-02 11:21:40.438187] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.101 [2024-11-02 11:21:40.438228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.101 [2024-11-02 11:21:40.446179] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.101 [2024-11-02 11:21:40.446204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.101 [2024-11-02 11:21:40.454201] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.101 [2024-11-02 11:21:40.454225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.101 [2024-11-02 11:21:40.462221] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.101 [2024-11-02 11:21:40.462245] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.101 [2024-11-02 11:21:40.470254] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.101 [2024-11-02 11:21:40.470319] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.101 [2024-11-02 11:21:40.478282] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.101 [2024-11-02 11:21:40.478332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.101 [2024-11-02 11:21:40.486315] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.101 [2024-11-02 11:21:40.486338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.101 [2024-11-02 11:21:40.494336] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.101 [2024-11-02 11:21:40.494359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.101 [2024-11-02 11:21:40.502383] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.101 [2024-11-02 11:21:40.502408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.360 [2024-11-02 11:21:40.510386] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.360 [2024-11-02 11:21:40.510419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.360 [2024-11-02 11:21:40.518394] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.360 [2024-11-02 11:21:40.518423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.360 [2024-11-02 11:21:40.526414] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.360 [2024-11-02 11:21:40.526435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.360 [2024-11-02 11:21:40.534443] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.360 [2024-11-02 11:21:40.534467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.360 [2024-11-02 11:21:40.542468] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.360 [2024-11-02 11:21:40.542495] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.360 [2024-11-02 11:21:40.550484] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.360 [2024-11-02 11:21:40.550507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.360 [2024-11-02 11:21:40.558502] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.360 [2024-11-02 11:21:40.558523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.360 [2024-11-02 11:21:40.566536] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.360 [2024-11-02 11:21:40.566577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.360 [2024-11-02 11:21:40.574562] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.360 [2024-11-02 11:21:40.574583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.360 Running I/O for 5 seconds... 00:09:40.360 [2024-11-02 11:21:40.582586] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.360 [2024-11-02 11:21:40.582625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.360 [2024-11-02 11:21:40.598827] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.360 [2024-11-02 11:21:40.598860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.360 [2024-11-02 11:21:40.610716] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.360 [2024-11-02 11:21:40.610749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.360 [2024-11-02 11:21:40.622710] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.360 [2024-11-02 11:21:40.622742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.360 [2024-11-02 11:21:40.634718] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.360 [2024-11-02 11:21:40.634750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.360 [2024-11-02 11:21:40.646720] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.360 [2024-11-02 11:21:40.646752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.360 [2024-11-02 11:21:40.658741] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.360 [2024-11-02 11:21:40.658772] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.360 [2024-11-02 11:21:40.671189] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.360 [2024-11-02 11:21:40.671220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.360 [2024-11-02 11:21:40.684406] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.360 [2024-11-02 11:21:40.684434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.360 [2024-11-02 11:21:40.695152] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.360 [2024-11-02 11:21:40.695183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.360 [2024-11-02 11:21:40.706527] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.360 [2024-11-02 11:21:40.706572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.360 [2024-11-02 11:21:40.718019] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.360 [2024-11-02 11:21:40.718050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.360 [2024-11-02 11:21:40.729250] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.360 [2024-11-02 11:21:40.729291] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.360 [2024-11-02 11:21:40.740474] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.360 [2024-11-02 11:21:40.740502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.360 [2024-11-02 11:21:40.752121] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.360 [2024-11-02 11:21:40.752152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.619 [2024-11-02 11:21:40.763344] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.619 [2024-11-02 11:21:40.763373] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.619 [2024-11-02 11:21:40.774963] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.619 [2024-11-02 11:21:40.774991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.619 [2024-11-02 11:21:40.788041] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.619 [2024-11-02 11:21:40.788072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.619 [2024-11-02 11:21:40.798400] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.619 [2024-11-02 11:21:40.798428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.619 [2024-11-02 11:21:40.809755] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.619 [2024-11-02 11:21:40.809787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.619 [2024-11-02 11:21:40.821147] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.619 [2024-11-02 11:21:40.821178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.619 [2024-11-02 11:21:40.832520] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.619 [2024-11-02 11:21:40.832548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.619 [2024-11-02 11:21:40.844148] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.619 [2024-11-02 11:21:40.844180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.619 [2024-11-02 11:21:40.855740] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.619 [2024-11-02 11:21:40.855771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.619 [2024-11-02 11:21:40.867263] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.619 [2024-11-02 11:21:40.867294] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.619 [2024-11-02 11:21:40.878690] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.619 [2024-11-02 11:21:40.878721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.619 [2024-11-02 11:21:40.890049] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.619 [2024-11-02 11:21:40.890079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.619 [2024-11-02 11:21:40.901510] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.619 [2024-11-02 11:21:40.901537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.619 [2024-11-02 11:21:40.913074] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.619 [2024-11-02 11:21:40.913103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.619 [2024-11-02 11:21:40.924095] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.619 [2024-11-02 11:21:40.924140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.619 [2024-11-02 11:21:40.935479] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.619 [2024-11-02 11:21:40.935508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.619 [2024-11-02 11:21:40.947622] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.619 [2024-11-02 11:21:40.947654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.619 [2024-11-02 11:21:40.959053] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.619 [2024-11-02 11:21:40.959084] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.619 [2024-11-02 11:21:40.970608] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.619 [2024-11-02 11:21:40.970638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.619 [2024-11-02 11:21:40.982066] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.619 [2024-11-02 11:21:40.982096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.619 [2024-11-02 11:21:40.993245] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.619 [2024-11-02 11:21:40.993288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.619 [2024-11-02 11:21:41.006857] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.619 [2024-11-02 11:21:41.006888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.619 [2024-11-02 11:21:41.017450] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.619 [2024-11-02 11:21:41.017482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.877 [2024-11-02 11:21:41.029197] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.877 [2024-11-02 11:21:41.029229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.877 [2024-11-02 11:21:41.040367] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.877 [2024-11-02 11:21:41.040395] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.877 [2024-11-02 11:21:41.051401] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.877 [2024-11-02 11:21:41.051429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.877 [2024-11-02 11:21:41.062489] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.877 [2024-11-02 11:21:41.062517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.877 [2024-11-02 11:21:41.075792] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.877 [2024-11-02 11:21:41.075823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.877 [2024-11-02 11:21:41.086481] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.877 [2024-11-02 11:21:41.086509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.877 [2024-11-02 11:21:41.097921] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.877 [2024-11-02 11:21:41.097952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.877 [2024-11-02 11:21:41.109665] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.877 [2024-11-02 11:21:41.109696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.877 [2024-11-02 11:21:41.121403] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.877 [2024-11-02 11:21:41.121431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.877 [2024-11-02 11:21:41.132791] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.877 [2024-11-02 11:21:41.132822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.877 [2024-11-02 11:21:41.146206] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.877 [2024-11-02 11:21:41.146238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.877 [2024-11-02 11:21:41.156590] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.877 [2024-11-02 11:21:41.156621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.877 [2024-11-02 11:21:41.168321] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.877 [2024-11-02 11:21:41.168349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.877 [2024-11-02 11:21:41.180326] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.877 [2024-11-02 11:21:41.180354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.877 [2024-11-02 11:21:41.191960] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.877 [2024-11-02 11:21:41.191991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.877 [2024-11-02 11:21:41.203509] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.877 [2024-11-02 11:21:41.203537] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.877 [2024-11-02 11:21:41.216849] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.877 [2024-11-02 11:21:41.216879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.877 [2024-11-02 11:21:41.227323] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.877 [2024-11-02 11:21:41.227351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.877 [2024-11-02 11:21:41.238761] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.877 [2024-11-02 11:21:41.238793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.877 [2024-11-02 11:21:41.250625] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.877 [2024-11-02 11:21:41.250657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.877 [2024-11-02 11:21:41.262126] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.877 [2024-11-02 11:21:41.262157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.877 [2024-11-02 11:21:41.273369] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.877 [2024-11-02 11:21:41.273397] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.135 [2024-11-02 11:21:41.285532] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.135 [2024-11-02 11:21:41.285562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.135 [2024-11-02 11:21:41.296671] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.135 [2024-11-02 11:21:41.296702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.135 [2024-11-02 11:21:41.308210] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.135 [2024-11-02 11:21:41.308241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.135 [2024-11-02 11:21:41.319370] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.135 [2024-11-02 11:21:41.319402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.135 [2024-11-02 11:21:41.332609] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.135 [2024-11-02 11:21:41.332640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.135 [2024-11-02 11:21:41.343249] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.135 [2024-11-02 11:21:41.343291] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.135 [2024-11-02 11:21:41.354578] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.135 [2024-11-02 11:21:41.354622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.135 [2024-11-02 11:21:41.366075] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.135 [2024-11-02 11:21:41.366106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.135 [2024-11-02 11:21:41.379552] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.135 [2024-11-02 11:21:41.379590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.135 [2024-11-02 11:21:41.390277] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.135 [2024-11-02 11:21:41.390326] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.135 [2024-11-02 11:21:41.401889] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.135 [2024-11-02 11:21:41.401919] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.135 [2024-11-02 11:21:41.414960] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.135 [2024-11-02 11:21:41.414991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.135 [2024-11-02 11:21:41.425966] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.135 [2024-11-02 11:21:41.425996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.135 [2024-11-02 11:21:41.437360] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.135 [2024-11-02 11:21:41.437388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.135 [2024-11-02 11:21:41.448801] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.135 [2024-11-02 11:21:41.448831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.135 [2024-11-02 11:21:41.459958] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.135 [2024-11-02 11:21:41.460001] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.135 [2024-11-02 11:21:41.471312] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.135 [2024-11-02 11:21:41.471340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.135 [2024-11-02 11:21:41.482728] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.135 [2024-11-02 11:21:41.482756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.135 [2024-11-02 11:21:41.493537] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.135 [2024-11-02 11:21:41.493580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.135 [2024-11-02 11:21:41.506751] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.135 [2024-11-02 11:21:41.506778] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.135 [2024-11-02 11:21:41.517613] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.135 [2024-11-02 11:21:41.517641] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.135 [2024-11-02 11:21:41.528442] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.135 [2024-11-02 11:21:41.528470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.393 [2024-11-02 11:21:41.540349] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.394 [2024-11-02 11:21:41.540377] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.394 [2024-11-02 11:21:41.551650] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.394 [2024-11-02 11:21:41.551682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.394 [2024-11-02 11:21:41.562959] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.394 [2024-11-02 11:21:41.562990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.394 [2024-11-02 11:21:41.574372] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.394 [2024-11-02 11:21:41.574401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.394 11018.00 IOPS, 86.08 MiB/s [2024-11-02T10:21:41.796Z] [2024-11-02 11:21:41.586246] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.394 [2024-11-02 11:21:41.586312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.394 [2024-11-02 11:21:41.597418] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.394 [2024-11-02 11:21:41.597446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.394 [2024-11-02 11:21:41.608126] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.394 [2024-11-02 11:21:41.608157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.394 [2024-11-02 11:21:41.619647] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.394 [2024-11-02 11:21:41.619679] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.394 [2024-11-02 11:21:41.631085] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.394 [2024-11-02 11:21:41.631115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.394 [2024-11-02 11:21:41.642644] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.394 [2024-11-02 11:21:41.642671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.394 [2024-11-02 11:21:41.653432] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.394 [2024-11-02 11:21:41.653460] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.394 [2024-11-02 11:21:41.665150] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.394 [2024-11-02 11:21:41.665180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.394 [2024-11-02 11:21:41.677047] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.394 [2024-11-02 11:21:41.677077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.394 [2024-11-02 11:21:41.688550] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.394 [2024-11-02 11:21:41.688595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.394 [2024-11-02 11:21:41.699987] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.394 [2024-11-02 11:21:41.700017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.394 [2024-11-02 
11:21:41.711754] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.394 [2024-11-02 11:21:41.711784] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.394 [2024-11-02 11:21:41.723224] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.394 [2024-11-02 11:21:41.723263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.394 [2024-11-02 11:21:41.735001] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.394 [2024-11-02 11:21:41.735032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.394 [2024-11-02 11:21:41.746182] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.394 [2024-11-02 11:21:41.746214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.394 [2024-11-02 11:21:41.759124] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.394 [2024-11-02 11:21:41.759155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.394 [2024-11-02 11:21:41.769460] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.394 [2024-11-02 11:21:41.769488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.394 [2024-11-02 11:21:41.781463] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.394 [2024-11-02 11:21:41.781493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.394 [2024-11-02 11:21:41.792444] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.394 [2024-11-02 11:21:41.792473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.653 [2024-11-02 11:21:41.804523] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.653 [2024-11-02 11:21:41.804575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.653 [2024-11-02 11:21:41.816207] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.653 [2024-11-02 11:21:41.816238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.653 [2024-11-02 11:21:41.827285] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.653 [2024-11-02 11:21:41.827330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.653 [2024-11-02 11:21:41.838643] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.653 [2024-11-02 11:21:41.838675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.653 [2024-11-02 11:21:41.850446] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.653 [2024-11-02 11:21:41.850474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.653 [2024-11-02 11:21:41.861648] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.653 [2024-11-02 11:21:41.861680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.653 [2024-11-02 11:21:41.875030] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.653 [2024-11-02 11:21:41.875061] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.653 [2024-11-02 11:21:41.885433] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.653 [2024-11-02 11:21:41.885460] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.653 [2024-11-02 11:21:41.896949] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.653 [2024-11-02 11:21:41.896980] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.653 [2024-11-02 11:21:41.910456] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.653 [2024-11-02 11:21:41.910484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.653 [2024-11-02 11:21:41.921808] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.653 [2024-11-02 11:21:41.921838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.653 [2024-11-02 11:21:41.933855] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.653 [2024-11-02 11:21:41.933885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.653 [2024-11-02 11:21:41.946016] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.653 [2024-11-02 11:21:41.946047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.653 [2024-11-02 11:21:41.957877] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.653 [2024-11-02 11:21:41.957908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.653 [2024-11-02 11:21:41.969416] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.653 [2024-11-02 11:21:41.969444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.653 [2024-11-02 11:21:41.981263] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.653 [2024-11-02 11:21:41.981312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.653 [2024-11-02 11:21:41.992970] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.653 [2024-11-02 11:21:41.993001] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.653 [2024-11-02 11:21:42.004450] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.653 [2024-11-02 11:21:42.004478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.653 [2024-11-02 11:21:42.016226] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.653 [2024-11-02 11:21:42.016269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.653 [2024-11-02 11:21:42.027818] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.653 [2024-11-02 11:21:42.027856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.653 [2024-11-02 11:21:42.039859] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.653 [2024-11-02 11:21:42.039889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.653 [2024-11-02 11:21:42.051792] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.653 [2024-11-02 11:21:42.051824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.911 [2024-11-02 11:21:42.064642] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.911 [2024-11-02 11:21:42.064674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.911 [2024-11-02 11:21:42.076478] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.911 [2024-11-02 11:21:42.076506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.911 [2024-11-02 11:21:42.089939] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.911 [2024-11-02 11:21:42.089970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.911 [2024-11-02 11:21:42.101112] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.911 [2024-11-02 11:21:42.101144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.911 [2024-11-02 11:21:42.112861] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.911 [2024-11-02 11:21:42.112892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.911 [2024-11-02 11:21:42.124427] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.911 [2024-11-02 11:21:42.124455] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.911 [2024-11-02 11:21:42.135962] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.911 [2024-11-02 11:21:42.135993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.911 [2024-11-02 11:21:42.147526] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.911 [2024-11-02 11:21:42.147553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.911 [2024-11-02 11:21:42.159527] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.911 [2024-11-02 11:21:42.159554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.911 [2024-11-02 11:21:42.171635] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.911 [2024-11-02 11:21:42.171666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.911 [2024-11-02 11:21:42.183158] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.911 [2024-11-02 11:21:42.183189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.911 [2024-11-02 11:21:42.195170] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.911 [2024-11-02 11:21:42.195200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.911 [2024-11-02 11:21:42.207228] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.911 [2024-11-02 11:21:42.207267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.911 [2024-11-02 11:21:42.219470] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.911 [2024-11-02 11:21:42.219497] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.911 [2024-11-02 11:21:42.231482] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.911 [2024-11-02 11:21:42.231510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.911 [2024-11-02 11:21:42.245329] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.911 [2024-11-02 11:21:42.245357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.911 [2024-11-02 11:21:42.256987] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.911 [2024-11-02 11:21:42.257017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.911 [2024-11-02 11:21:42.269249] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.911 [2024-11-02 11:21:42.269289] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.911 [2024-11-02 11:21:42.283358] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.911 [2024-11-02 11:21:42.283386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.911 [2024-11-02 11:21:42.294321] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.911 [2024-11-02 11:21:42.294348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.911 [2024-11-02 11:21:42.306629] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.911 [2024-11-02 11:21:42.306660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.169 [2024-11-02 11:21:42.318785] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.169 [2024-11-02 11:21:42.318816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.169 [2024-11-02 11:21:42.330420] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.169 [2024-11-02 11:21:42.330448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.169 [2024-11-02 11:21:42.343745] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.169 [2024-11-02 11:21:42.343776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.169 [2024-11-02 11:21:42.354935] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.169 [2024-11-02 11:21:42.354965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.169 [2024-11-02 11:21:42.366450] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.169 [2024-11-02 11:21:42.366479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.169 [2024-11-02 11:21:42.377751] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.169 [2024-11-02 11:21:42.377782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.169 [2024-11-02 11:21:42.389564] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.169 [2024-11-02 11:21:42.389609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.169 [2024-11-02 11:21:42.401455] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.169 [2024-11-02 11:21:42.401484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.169 [2024-11-02 11:21:42.412893] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.169 [2024-11-02 11:21:42.412925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.169 [2024-11-02 11:21:42.424490] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.169 [2024-11-02 11:21:42.424518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.169 [2024-11-02 11:21:42.436622] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.169 [2024-11-02 11:21:42.436654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.169 [2024-11-02 11:21:42.448500] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.169 [2024-11-02 11:21:42.448529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.169 [2024-11-02 11:21:42.459908] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.169 [2024-11-02 11:21:42.459938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.169 [2024-11-02 11:21:42.471941] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.169 [2024-11-02 11:21:42.471975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.169 [2024-11-02 11:21:42.484398] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.169 [2024-11-02 11:21:42.484427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.169 [2024-11-02 11:21:42.496254] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.169 [2024-11-02 11:21:42.496308] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.169 [2024-11-02 11:21:42.507711] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.169 [2024-11-02 11:21:42.507742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.169 [2024-11-02 11:21:42.519070] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.169 [2024-11-02 11:21:42.519101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.170 [2024-11-02 11:21:42.531146] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.170 [2024-11-02 11:21:42.531177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.170 [2024-11-02 11:21:42.542266] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.170 [2024-11-02 11:21:42.542312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.170 [2024-11-02 11:21:42.553620] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.170 [2024-11-02 11:21:42.553651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.170 [2024-11-02 11:21:42.565314] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.170 [2024-11-02 11:21:42.565341] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.428 [2024-11-02 11:21:42.577653] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.428 [2024-11-02 11:21:42.577684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.428 10955.50 IOPS, 85.59 MiB/s [2024-11-02T10:21:42.830Z] [2024-11-02 11:21:42.589556] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.428 [2024-11-02 11:21:42.589588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.428 [2024-11-02 11:21:42.602705] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.428 [2024-11-02 11:21:42.602736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.428 [2024-11-02 11:21:42.612825] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.428 [2024-11-02 11:21:42.612856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.428 [2024-11-02 11:21:42.624097] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.428 [2024-11-02 11:21:42.624128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.428 [2024-11-02 11:21:42.637433] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.428 [2024-11-02 11:21:42.637460] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.428 [2024-11-02 11:21:42.647832] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.428 [2024-11-02 11:21:42.647864] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.428 [2024-11-02 11:21:42.658991] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.428 [2024-11-02 11:21:42.659022] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.428 [2024-11-02 11:21:42.670275] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.428 [2024-11-02 11:21:42.670320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.428 [2024-11-02 11:21:42.682061] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.428 [2024-11-02 11:21:42.682093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.428 [2024-11-02 11:21:42.693774] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.428 [2024-11-02 11:21:42.693814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.428 [2024-11-02 11:21:42.705326] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.428 [2024-11-02 11:21:42.705354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.428 [2024-11-02 11:21:42.716515] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.428 [2024-11-02 11:21:42.716568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.428 [2024-11-02 11:21:42.729817] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.428 [2024-11-02 11:21:42.729848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.428 [2024-11-02 
11:21:42.740104] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.428 [2024-11-02 11:21:42.740135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.428 [2024-11-02 11:21:42.751332] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.428 [2024-11-02 11:21:42.751359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.428 [2024-11-02 11:21:42.764501] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.428 [2024-11-02 11:21:42.764530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.428 [2024-11-02 11:21:42.774876] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.428 [2024-11-02 11:21:42.774907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.428 [2024-11-02 11:21:42.786750] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.428 [2024-11-02 11:21:42.786781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.428 [2024-11-02 11:21:42.798407] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.428 [2024-11-02 11:21:42.798436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.428 [2024-11-02 11:21:42.809688] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.428 [2024-11-02 11:21:42.809719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.428 [2024-11-02 11:21:42.820980] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.428 [2024-11-02 11:21:42.821011] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.687 [2024-11-02 11:21:42.832678] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.687 [2024-11-02 11:21:42.832713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.687 [2024-11-02 11:21:42.843734] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.687 [2024-11-02 11:21:42.843765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.687 [2024-11-02 11:21:42.855294] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.687 [2024-11-02 11:21:42.855338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.687 [2024-11-02 11:21:42.866466] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.687 [2024-11-02 11:21:42.866494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.687 [2024-11-02 11:21:42.877763] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.687 [2024-11-02 11:21:42.877794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.687 [2024-11-02 11:21:42.890937] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.687 [2024-11-02 11:21:42.890969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.687 [2024-11-02 11:21:42.901131] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.687 [2024-11-02 11:21:42.901162] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.687 [2024-11-02 11:21:42.913118] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.687 [2024-11-02 11:21:42.913158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.687 [2024-11-02 11:21:42.924591] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.687 [2024-11-02 11:21:42.924622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.687 [2024-11-02 11:21:42.936132] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.687 [2024-11-02 11:21:42.936163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.687 [2024-11-02 11:21:42.947484] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.687 [2024-11-02 11:21:42.947512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.687 [2024-11-02 11:21:42.958698] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.687 [2024-11-02 11:21:42.958729] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.687 [2024-11-02 11:21:42.969829] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.687 [2024-11-02 11:21:42.969860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.687 [2024-11-02 11:21:42.981389] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.687 [2024-11-02 11:21:42.981422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.687 [2024-11-02 11:21:42.992701] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.687 [2024-11-02 11:21:42.992732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.687 [2024-11-02 11:21:43.003801] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.687 [2024-11-02 11:21:43.003832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.687 [2024-11-02 11:21:43.015321] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.687 [2024-11-02 11:21:43.015349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.687 [2024-11-02 11:21:43.028985] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.687 [2024-11-02 11:21:43.029016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.687 [2024-11-02 11:21:43.040408] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.687 [2024-11-02 11:21:43.040436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.687 [2024-11-02 11:21:43.052167] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.687 [2024-11-02 11:21:43.052197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.687 [2024-11-02 11:21:43.064012] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.687 [2024-11-02 11:21:43.064042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.687 [2024-11-02 11:21:43.075941] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.687 [2024-11-02 11:21:43.075972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.687 [2024-11-02 11:21:43.087858] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.687 [2024-11-02 11:21:43.087889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.945 [2024-11-02 11:21:43.099585] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.945 [2024-11-02 11:21:43.099617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.945 [2024-11-02 11:21:43.111136] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.945 [2024-11-02 11:21:43.111167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.945 [2024-11-02 11:21:43.123054] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.945 [2024-11-02 11:21:43.123085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.945 [2024-11-02 11:21:43.134704] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.945 [2024-11-02 11:21:43.134743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.945 [2024-11-02 11:21:43.146075] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.945 [2024-11-02 11:21:43.146106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.945 [2024-11-02 11:21:43.159324] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.945 [2024-11-02 11:21:43.159352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.945 [2024-11-02 11:21:43.170707] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.945 [2024-11-02 11:21:43.170738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.945 [2024-11-02 11:21:43.181798] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.945 [2024-11-02 11:21:43.181828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.945 [2024-11-02 11:21:43.194892] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.945 [2024-11-02 11:21:43.194925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.945 [2024-11-02 11:21:43.205356] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.945 [2024-11-02 11:21:43.205384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.945 [2024-11-02 11:21:43.216441] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.945 [2024-11-02 11:21:43.216470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.945 [2024-11-02 11:21:43.229846] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.945 [2024-11-02 11:21:43.229877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.945 [2024-11-02 11:21:43.240525] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.945 [2024-11-02 11:21:43.240553] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.945 [2024-11-02 11:21:43.252151] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.945 [2024-11-02 11:21:43.252182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.945 [2024-11-02 11:21:43.262847] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.945 [2024-11-02 11:21:43.262878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.945 [2024-11-02 11:21:43.274107] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.945 [2024-11-02 11:21:43.274137] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.945 [2024-11-02 11:21:43.287785] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.945 [2024-11-02 11:21:43.287816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.945 [2024-11-02 11:21:43.298284] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.945 [2024-11-02 11:21:43.298330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.945 [2024-11-02 11:21:43.310132] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.946 [2024-11-02 11:21:43.310163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.946 [2024-11-02 11:21:43.321673] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.946 [2024-11-02 11:21:43.321704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.946 [2024-11-02 11:21:43.333105] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.946 [2024-11-02 11:21:43.333136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.946 [2024-11-02 11:21:43.344509] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.946 [2024-11-02 11:21:43.344552] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.204 [2024-11-02 11:21:43.356401] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.204 [2024-11-02 11:21:43.356438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.204 [2024-11-02 11:21:43.367882] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.204 [2024-11-02 11:21:43.367913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.204 [2024-11-02 11:21:43.378589] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.204 [2024-11-02 11:21:43.378621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.204 [2024-11-02 11:21:43.389353] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.204 [2024-11-02 11:21:43.389381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.204 [2024-11-02 11:21:43.400443] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.204 [2024-11-02 11:21:43.400471] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.204 [2024-11-02 11:21:43.411630] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.204 [2024-11-02 11:21:43.411661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.204 [2024-11-02 11:21:43.424938] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.204 [2024-11-02 11:21:43.424968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.204 [2024-11-02 11:21:43.436048] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.204 [2024-11-02 11:21:43.436078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.204 [2024-11-02 11:21:43.447533] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.204 [2024-11-02 11:21:43.447576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.204 [2024-11-02 11:21:43.461119] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.204 [2024-11-02 11:21:43.461151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.204 [2024-11-02 11:21:43.472375] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.204 [2024-11-02 11:21:43.472402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.204 [2024-11-02 11:21:43.483380] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.204 [2024-11-02 11:21:43.483408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.204 [2024-11-02 11:21:43.494756] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.204 [2024-11-02 11:21:43.494786] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.204 [2024-11-02 11:21:43.506064] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.204 [2024-11-02 11:21:43.506094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.204 [2024-11-02 11:21:43.517532] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.204 [2024-11-02 11:21:43.517560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.205 [2024-11-02 11:21:43.529122] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.205 [2024-11-02 11:21:43.529153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.205 [2024-11-02 11:21:43.540499] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.205 [2024-11-02 11:21:43.540533] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.205 [2024-11-02 11:21:43.551850] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.205 [2024-11-02 11:21:43.551882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.205 [2024-11-02 11:21:43.563607] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.205 [2024-11-02 11:21:43.563639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.205 [2024-11-02 11:21:43.576810] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.205 [2024-11-02 11:21:43.576849] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.205 [2024-11-02 11:21:43.587488] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.205 [2024-11-02 11:21:43.587515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.205 11029.33 IOPS, 86.17 MiB/s [2024-11-02T10:21:43.607Z] [2024-11-02 11:21:43.599333] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.205 [2024-11-02 11:21:43.599361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.463 [2024-11-02 11:21:43.611531] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.463 [2024-11-02 11:21:43.611560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.463 [2024-11-02 11:21:43.622651] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.463 [2024-11-02 11:21:43.622690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.463 [2024-11-02 11:21:43.634625] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.463 [2024-11-02 11:21:43.634656] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.463 [2024-11-02 11:21:43.646409] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.463 [2024-11-02 11:21:43.646437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.463 [2024-11-02 11:21:43.657560] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.463 [2024-11-02 11:21:43.657591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.463 [2024-11-02 11:21:43.668777] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.463 [2024-11-02 11:21:43.668809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.463 [2024-11-02 11:21:43.680420] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.463 [2024-11-02 11:21:43.680455] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.463 [2024-11-02 11:21:43.692032] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.463 [2024-11-02 11:21:43.692063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.463 [2024-11-02 11:21:43.705151] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.463 [2024-11-02 11:21:43.705195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.463 [2024-11-02 11:21:43.715271] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.463 [2024-11-02 11:21:43.715318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.464 [2024-11-02 11:21:43.726624] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.464 [2024-11-02 11:21:43.726654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.464 [2024-11-02 11:21:43.738226] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.464 [2024-11-02 11:21:43.738264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.464 [2024-11-02 
11:21:43.749687] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.464 [2024-11-02 11:21:43.749718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.464 [2024-11-02 11:21:43.760813] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.464 [2024-11-02 11:21:43.760840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.464 [2024-11-02 11:21:43.771796] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.464 [2024-11-02 11:21:43.771829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.464 [2024-11-02 11:21:43.785213] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.464 [2024-11-02 11:21:43.785244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.464 [2024-11-02 11:21:43.795672] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.464 [2024-11-02 11:21:43.795703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.464 [2024-11-02 11:21:43.806825] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.464 [2024-11-02 11:21:43.806856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.464 [2024-11-02 11:21:43.819861] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.464 [2024-11-02 11:21:43.819889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.464 [2024-11-02 11:21:43.830209] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.464 [2024-11-02 11:21:43.830240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.464 [2024-11-02 11:21:43.842329] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.464 [2024-11-02 11:21:43.842358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.464 [2024-11-02 11:21:43.854073] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.464 [2024-11-02 11:21:43.854105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.721 [2024-11-02 11:21:43.865779] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.721 [2024-11-02 11:21:43.865817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.721 [2024-11-02 11:21:43.877443] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.721 [2024-11-02 11:21:43.877472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.721 [2024-11-02 11:21:43.889065] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.721 [2024-11-02 11:21:43.889096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.721 [2024-11-02 11:21:43.900167] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.721 [2024-11-02 11:21:43.900194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.721 [2024-11-02 11:21:43.911838] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.721 [2024-11-02 11:21:43.911870] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.722 [2024-11-02 11:21:43.923238] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.722 [2024-11-02 11:21:43.923282] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.722 [2024-11-02 11:21:43.935088] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.722 [2024-11-02 11:21:43.935118] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.722 [2024-11-02 11:21:43.946715] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.722 [2024-11-02 11:21:43.946746] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.722 [2024-11-02 11:21:43.957817] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.722 [2024-11-02 11:21:43.957848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.722 [2024-11-02 11:21:43.968818] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.722 [2024-11-02 11:21:43.968849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.722 [2024-11-02 11:21:43.981911] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.722 [2024-11-02 11:21:43.981943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.722 [2024-11-02 11:21:43.992712] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.722 [2024-11-02 11:21:43.992743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.722 [2024-11-02 11:21:44.004422] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.722 [2024-11-02 11:21:44.004450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.722 [2024-11-02 11:21:44.016320] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.722 [2024-11-02 11:21:44.016348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.722 [2024-11-02 11:21:44.028081] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.722 [2024-11-02 11:21:44.028112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.722 [2024-11-02 11:21:44.039604] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.722 [2024-11-02 11:21:44.039635] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.722 [2024-11-02 11:21:44.050430] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.722 [2024-11-02 11:21:44.050458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.722 [2024-11-02 11:21:44.061441] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.722 [2024-11-02 11:21:44.061469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.722 [2024-11-02 11:21:44.073205] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.722 [2024-11-02 11:21:44.073235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.722 [2024-11-02 11:21:44.084490] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.722 [2024-11-02 11:21:44.084518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.722 [2024-11-02 11:21:44.095769] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.722 [2024-11-02 11:21:44.095799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.722 [2024-11-02 11:21:44.107298] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.722 [2024-11-02 11:21:44.107342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.722 [2024-11-02 11:21:44.118405] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.722 [2024-11-02 11:21:44.118433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.979 [2024-11-02 11:21:44.129927] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.979 [2024-11-02 11:21:44.129958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.980 [2024-11-02 11:21:44.141110] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.980 [2024-11-02 11:21:44.141141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.980 [2024-11-02 11:21:44.152490] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.980 [2024-11-02 11:21:44.152517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.980 [2024-11-02 11:21:44.164411] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.980 [2024-11-02 11:21:44.164439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.980 [2024-11-02 11:21:44.176208] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.980 [2024-11-02 11:21:44.176238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.980 [2024-11-02 11:21:44.189844] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.980 [2024-11-02 11:21:44.189875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.980 [2024-11-02 11:21:44.200980] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.980 [2024-11-02 11:21:44.201011] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.980 [2024-11-02 11:21:44.212575] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.980 [2024-11-02 11:21:44.212619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.980 [2024-11-02 11:21:44.224051] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.980 [2024-11-02 11:21:44.224091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.980 [2024-11-02 11:21:44.237159] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.980 [2024-11-02 11:21:44.237191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.980 [2024-11-02 11:21:44.247736] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.980 [2024-11-02 11:21:44.247768] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.980 [2024-11-02 11:21:44.259374] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.980 [2024-11-02 11:21:44.259402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.980 [2024-11-02 11:21:44.271078] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.980 [2024-11-02 11:21:44.271108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.980 [2024-11-02 11:21:44.282798] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.980 [2024-11-02 11:21:44.282829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.980 [2024-11-02 11:21:44.294136] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.980 [2024-11-02 11:21:44.294163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.980 [2024-11-02 11:21:44.305348] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.980 [2024-11-02 11:21:44.305375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.980 [2024-11-02 11:21:44.316741] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.980 [2024-11-02 11:21:44.316773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.980 [2024-11-02 11:21:44.328146] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.980 [2024-11-02 11:21:44.328177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.980 [2024-11-02 11:21:44.339508] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.980 [2024-11-02 11:21:44.339539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.980 [2024-11-02 11:21:44.350569] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.980 [2024-11-02 11:21:44.350597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.980 [2024-11-02 11:21:44.361703] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.980 [2024-11-02 11:21:44.361734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.980 [2024-11-02 11:21:44.373187] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.980 [2024-11-02 11:21:44.373218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.238 [2024-11-02 11:21:44.384874] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.238 [2024-11-02 11:21:44.384902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.238 [2024-11-02 11:21:44.397960] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.238 [2024-11-02 11:21:44.397991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.238 [2024-11-02 11:21:44.407918] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.238 [2024-11-02 11:21:44.407949] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.238 [2024-11-02 11:21:44.420038] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.238 [2024-11-02 11:21:44.420069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.238 [2024-11-02 11:21:44.431103] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.238 [2024-11-02 11:21:44.431134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.238 [2024-11-02 11:21:44.444520] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.238 [2024-11-02 11:21:44.444579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.238 [2024-11-02 11:21:44.455060] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.238 [2024-11-02 11:21:44.455090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.238 [2024-11-02 11:21:44.466452] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.238 [2024-11-02 11:21:44.466480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.238 [2024-11-02 11:21:44.478029] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.238 [2024-11-02 11:21:44.478059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.238 [2024-11-02 11:21:44.489909] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.238 [2024-11-02 11:21:44.489941] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.238 [2024-11-02 11:21:44.501709] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.239 [2024-11-02 11:21:44.501740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.239 [2024-11-02 11:21:44.513951] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.239 [2024-11-02 11:21:44.513983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.239 [2024-11-02 11:21:44.525714] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.239 [2024-11-02 11:21:44.525745] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.239 [2024-11-02 11:21:44.537224] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.239 [2024-11-02 11:21:44.537264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.239 [2024-11-02 11:21:44.550783] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.239 [2024-11-02 11:21:44.550814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.239 [2024-11-02 11:21:44.561232] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.239 [2024-11-02 11:21:44.561267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.239 [2024-11-02 11:21:44.573410] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.239 [2024-11-02 11:21:44.573438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.239 [2024-11-02 11:21:44.584828] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.239 [2024-11-02 11:21:44.584858] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.239 11045.50 IOPS, 86.29 MiB/s [2024-11-02T10:21:44.641Z] [2024-11-02 11:21:44.596147] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.239 [2024-11-02 11:21:44.596177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.239 [2024-11-02 11:21:44.607379] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.239 [2024-11-02 11:21:44.607407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.239 [2024-11-02 11:21:44.618183] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.239 [2024-11-02 11:21:44.618214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.239 [2024-11-02 11:21:44.631584] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.239 [2024-11-02 11:21:44.631611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.497 [2024-11-02 11:21:44.642649] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.497 [2024-11-02 11:21:44.642680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.497 [2024-11-02 11:21:44.654231] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.497 [2024-11-02 11:21:44.654274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.497 [2024-11-02 11:21:44.665286] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.497 [2024-11-02 11:21:44.665343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.497 [2024-11-02 11:21:44.676090] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.497 [2024-11-02 11:21:44.676121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.497 [2024-11-02 11:21:44.687413] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.497 [2024-11-02 11:21:44.687442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.497 [2024-11-02 11:21:44.698841] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.497 [2024-11-02 11:21:44.698873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.497 [2024-11-02 11:21:44.710197] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.497 [2024-11-02 11:21:44.710228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.497 [2024-11-02 11:21:44.721675] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.497 [2024-11-02 11:21:44.721706] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.497 [2024-11-02 11:21:44.732887] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.497 [2024-11-02 11:21:44.732918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.497 [2024-11-02 11:21:44.744386] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.497 [2024-11-02 11:21:44.744414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.497 [2024-11-02 
11:21:44.755903] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.497 [2024-11-02 11:21:44.755934] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.497 [2024-11-02 11:21:44.769432] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.497 [2024-11-02 11:21:44.769459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.497 [2024-11-02 11:21:44.780126] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.497 [2024-11-02 11:21:44.780158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.497 [2024-11-02 11:21:44.791504] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.497 [2024-11-02 11:21:44.791532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.497 [2024-11-02 11:21:44.803153] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.497 [2024-11-02 11:21:44.803182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.497 [2024-11-02 11:21:44.816395] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.497 [2024-11-02 11:21:44.816423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.497 [2024-11-02 11:21:44.827563] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.497 [2024-11-02 11:21:44.827595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.497 [2024-11-02 11:21:44.838974] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.497 [2024-11-02 11:21:44.839005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.497 [2024-11-02 11:21:44.850027] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.497 [2024-11-02 11:21:44.850057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.497 [2024-11-02 11:21:44.861529] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.497 [2024-11-02 11:21:44.861576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.497 [2024-11-02 11:21:44.873038] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.497 [2024-11-02 11:21:44.873069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.497 [2024-11-02 11:21:44.884661] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.497 [2024-11-02 11:21:44.884692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.497 [2024-11-02 11:21:44.895637] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.497 [2024-11-02 11:21:44.895668] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.755 [2024-11-02 11:21:44.907180] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.755 [2024-11-02 11:21:44.907211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.755 [2024-11-02 11:21:44.918471] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.755 [2024-11-02 11:21:44.918499] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.755 [2024-11-02 11:21:44.930071] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.755 [2024-11-02 11:21:44.930102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.755 [2024-11-02 11:21:44.941107] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.755 [2024-11-02 11:21:44.941138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.755 [2024-11-02 11:21:44.952449] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.755 [2024-11-02 11:21:44.952476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.755 [2024-11-02 11:21:44.964075] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.755 [2024-11-02 11:21:44.964106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.755 [2024-11-02 11:21:44.975617] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.755 [2024-11-02 11:21:44.975649] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.755 [2024-11-02 11:21:44.986876] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.755 [2024-11-02 11:21:44.986907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.755 [2024-11-02 11:21:44.998201] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.755 [2024-11-02 11:21:44.998232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.755 [2024-11-02 11:21:45.009435] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.755 [2024-11-02 11:21:45.009464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.755 [2024-11-02 11:21:45.021189] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.755 [2024-11-02 11:21:45.021221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.755 [2024-11-02 11:21:45.032038] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.755 [2024-11-02 11:21:45.032066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.755 [2024-11-02 11:21:45.043611] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.755 [2024-11-02 11:21:45.043642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.755 [2024-11-02 11:21:45.054897] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.755 [2024-11-02 11:21:45.054925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.755 [2024-11-02 11:21:45.066045] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.755 [2024-11-02 11:21:45.066077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.755 [2024-11-02 11:21:45.077034] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.755 [2024-11-02 11:21:45.077065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.755 [2024-11-02 11:21:45.087781] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.755 [2024-11-02 11:21:45.087812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.755 [2024-11-02 11:21:45.099174] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.755 [2024-11-02 11:21:45.099204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.755 [2024-11-02 11:21:45.110450] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.755 [2024-11-02 11:21:45.110478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.755 [2024-11-02 11:21:45.121813] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.755 [2024-11-02 11:21:45.121843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.755 [2024-11-02 11:21:45.132828] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.755 [2024-11-02 11:21:45.132859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.755 [2024-11-02 11:21:45.143804] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.755 [2024-11-02 11:21:45.143835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.755 [2024-11-02 11:21:45.155150] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.755 [2024-11-02 11:21:45.155181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.014 [2024-11-02 11:21:45.166961] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.014 [2024-11-02 11:21:45.166992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.014 [2024-11-02 11:21:45.178117] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.014 [2024-11-02 11:21:45.178148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.014 [2024-11-02 11:21:45.189664] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.014 [2024-11-02 11:21:45.189695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.014 [2024-11-02 11:21:45.201165] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.014 [2024-11-02 11:21:45.201195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.014 [2024-11-02 11:21:45.212472] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.014 [2024-11-02 11:21:45.212500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.014 [2024-11-02 11:21:45.226170] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.014 [2024-11-02 11:21:45.226201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.014 [2024-11-02 11:21:45.237124] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.014 [2024-11-02 11:21:45.237155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.014 [2024-11-02 11:21:45.248510] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.014 [2024-11-02 11:21:45.248538] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.014 [2024-11-02 11:21:45.261707] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.014 [2024-11-02 11:21:45.261739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.014 [2024-11-02 11:21:45.273705] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.014 [2024-11-02 11:21:45.273736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.014 [2024-11-02 11:21:45.283589] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.014 [2024-11-02 11:21:45.283620] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.014 [2024-11-02 11:21:45.295694] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.014 [2024-11-02 11:21:45.295725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.014 [2024-11-02 11:21:45.307266] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.014 [2024-11-02 11:21:45.307313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.014 [2024-11-02 11:21:45.318760] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.014 [2024-11-02 11:21:45.318790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.014 [2024-11-02 11:21:45.330803] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.014 [2024-11-02 11:21:45.330835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.014 [2024-11-02 11:21:45.342693] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.014 [2024-11-02 11:21:45.342724] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.014 [2024-11-02 11:21:45.353876] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.014 [2024-11-02 11:21:45.353906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.014 [2024-11-02 11:21:45.364701] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.014 [2024-11-02 11:21:45.364732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.014 [2024-11-02 11:21:45.375961] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.014 [2024-11-02 11:21:45.375991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.014 [2024-11-02 11:21:45.387129] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.014 [2024-11-02 11:21:45.387159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.014 [2024-11-02 11:21:45.398852] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.014 [2024-11-02 11:21:45.398884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.014 [2024-11-02 11:21:45.409816] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.014 [2024-11-02 11:21:45.409847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.273 [2024-11-02 11:21:45.421491] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.273 [2024-11-02 11:21:45.421519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.273 [2024-11-02 11:21:45.432438] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.273 [2024-11-02 11:21:45.432466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.273 [2024-11-02 11:21:45.444085] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.273 [2024-11-02 11:21:45.444116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.273 [2024-11-02 11:21:45.455829] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.273 [2024-11-02 11:21:45.455859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.273 [2024-11-02 11:21:45.467016] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.273 [2024-11-02 11:21:45.467047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.273 [2024-11-02 11:21:45.478828] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.273 [2024-11-02 11:21:45.478859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.273 [2024-11-02 11:21:45.490183] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.273 [2024-11-02 11:21:45.490214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.273 [2024-11-02 11:21:45.501408] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.273 [2024-11-02 11:21:45.501435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.273 [2024-11-02 11:21:45.513047] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.273 [2024-11-02 11:21:45.513077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.273 [2024-11-02 11:21:45.524985] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.273 [2024-11-02 11:21:45.525016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.273 [2024-11-02 11:21:45.537491] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.273 [2024-11-02 11:21:45.537519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.273 [2024-11-02 11:21:45.548977] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.273 [2024-11-02 11:21:45.549007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.273 [2024-11-02 11:21:45.560334] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.273 [2024-11-02 11:21:45.560362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.273 [2024-11-02 11:21:45.572161] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.273 [2024-11-02 11:21:45.572191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.273 [2024-11-02 11:21:45.583193] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.273 [2024-11-02 11:21:45.583225] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.273 11078.00 IOPS, 86.55 MiB/s [2024-11-02T10:21:45.675Z] [2024-11-02 11:21:45.595979] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.273 [2024-11-02 11:21:45.596009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.273 [2024-11-02 11:21:45.603869] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.273 [2024-11-02 11:21:45.603899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.273 00:09:45.273 Latency(us) 00:09:45.273 [2024-11-02T10:21:45.675Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:45.273 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:09:45.273 Nvme1n1 : 5.01 11078.95 86.55 0.00 0.00 11537.88 4878.79 22816.24 00:09:45.273 [2024-11-02T10:21:45.675Z] =================================================================================================================== 00:09:45.273 [2024-11-02T10:21:45.675Z] Total : 11078.95 86.55 0.00 0.00 11537.88 4878.79 22816.24 00:09:45.273 [2024-11-02 11:21:45.611893] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.273 [2024-11-02 11:21:45.611921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.273 [2024-11-02 11:21:45.619896] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.273 [2024-11-02 11:21:45.619922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.273 [2024-11-02 11:21:45.627972] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.273 [2024-11-02 11:21:45.628026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.273 [2024-11-02 11:21:45.636000] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.273 [2024-11-02 11:21:45.636055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.273 [2024-11-02 11:21:45.644008] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.273 [2024-11-02 11:21:45.644056] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.273 [2024-11-02 11:21:45.656072] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.273 [2024-11-02 11:21:45.656134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.273 [2024-11-02 11:21:45.664064] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.273 [2024-11-02 11:21:45.664113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.273 [2024-11-02 11:21:45.672098] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.273 [2024-11-02 11:21:45.672153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.532 [2024-11-02 11:21:45.680115] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.532 [2024-11-02 11:21:45.680180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.532 [2024-11-02 11:21:45.688128] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.532 [2024-11-02 
11:21:45.688178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.532 [2024-11-02 11:21:45.696158] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.532 [2024-11-02 11:21:45.696211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.532 [2024-11-02 11:21:45.704188] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.532 [2024-11-02 11:21:45.704241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.532 [2024-11-02 11:21:45.712199] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.532 [2024-11-02 11:21:45.712251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.532 [2024-11-02 11:21:45.720211] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.532 [2024-11-02 11:21:45.720265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.532 [2024-11-02 11:21:45.728232] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.532 [2024-11-02 11:21:45.728301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.532 [2024-11-02 11:21:45.736266] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.532 [2024-11-02 11:21:45.736314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.532 [2024-11-02 11:21:45.744283] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.532 [2024-11-02 11:21:45.744331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.532 [2024-11-02 11:21:45.752272] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.532 [2024-11-02 11:21:45.752300] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.532 [2024-11-02 11:21:45.760297] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.532 [2024-11-02 11:21:45.760325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.532 [2024-11-02 11:21:45.768351] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.532 [2024-11-02 11:21:45.768398] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.532 [2024-11-02 11:21:45.776376] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.532 [2024-11-02 11:21:45.776425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.532 [2024-11-02 11:21:45.784388] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.532 [2024-11-02 11:21:45.784424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.532 [2024-11-02 11:21:45.792375] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.532 [2024-11-02 11:21:45.792396] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.532 [2024-11-02 11:21:45.800391] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.532 [2024-11-02 11:21:45.800412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.532 [2024-11-02 11:21:45.808410] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.532 [2024-11-02 11:21:45.808431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.532 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3728548) - No such process 00:09:45.532 11:21:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3728548 00:09:45.532 11:21:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:45.532 11:21:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.532 11:21:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:45.532 11:21:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.532 11:21:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:45.532 11:21:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.532 11:21:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:45.532 delay0 00:09:45.532 11:21:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.532 11:21:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:45.532 11:21:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.532 11:21:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:45.532 11:21:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.532 11:21:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:45.790 [2024-11-02 11:21:45.943415] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:53.900 Initializing NVMe Controllers 00:09:53.900 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:53.900 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:53.900 Initialization complete. Launching workers. 
00:09:53.900 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 263, failed: 13934 00:09:53.900 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 14093, failed to submit 104 00:09:53.900 success 13995, unsuccessful 98, failed 0 00:09:53.900 11:21:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:53.900 11:21:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:53.900 11:21:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:53.900 11:21:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:53.900 11:21:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:53.900 11:21:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:53.900 11:21:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:53.900 11:21:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:53.900 rmmod nvme_tcp 00:09:53.900 rmmod nvme_fabrics 00:09:53.900 rmmod nvme_keyring 00:09:53.900 11:21:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:53.900 11:21:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:09:53.900 11:21:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:53.900 11:21:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3727308 ']' 00:09:53.900 11:21:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 3727308 00:09:53.900 11:21:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' -z 3727308 ']' 00:09:53.900 11:21:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 3727308 00:09:53.900 11:21:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # uname 00:09:53.900 11:21:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:53.900 11:21:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3727308 00:09:53.900 11:21:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:09:53.900 11:21:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:09:53.900 11:21:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3727308' 00:09:53.900 killing process with pid 3727308 00:09:53.900 11:21:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 3727308 00:09:53.900 11:21:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 3727308 00:09:53.900 11:21:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:53.900 11:21:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:53.900 11:21:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:53.900 11:21:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:09:53.900 11:21:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:09:53.900 11:21:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:53.900 11:21:53 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:09:53.900 11:21:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:53.900 11:21:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:53.900 11:21:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:53.900 11:21:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:53.900 11:21:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:55.276 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:55.276 00:09:55.276 real 0m28.817s 00:09:55.276 user 0m40.832s 00:09:55.276 sys 0m9.834s 00:09:55.276 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:55.276 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:55.276 ************************************ 00:09:55.276 END TEST nvmf_zcopy 00:09:55.276 ************************************ 00:09:55.276 11:21:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:55.276 11:21:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:55.276 11:21:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:55.276 11:21:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:55.276 ************************************ 00:09:55.276 START TEST nvmf_nmic 00:09:55.276 ************************************ 00:09:55.276 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:55.276 * Looking for test storage... 
00:09:55.276 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:55.276 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:55.276 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:09:55.276 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:55.535 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:55.535 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:55.535 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:55.535 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:55.535 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:55.535 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:55.535 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:55.535 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:55.535 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:55.535 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:55.535 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:55.535 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:55.535 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:55.535 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:55.535 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:55.535 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:55.535 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:55.535 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:55.535 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:55.535 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:55.535 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:55.535 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:55.535 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:55.535 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:55.535 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:55.535 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:55.535 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:55.535 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:55.535 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:55.535 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:55.535 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:55.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.535 --rc genhtml_branch_coverage=1 00:09:55.535 --rc genhtml_function_coverage=1 00:09:55.536 --rc genhtml_legend=1 00:09:55.536 --rc geninfo_all_blocks=1 00:09:55.536 --rc geninfo_unexecuted_blocks=1 00:09:55.536 00:09:55.536 ' 00:09:55.536 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:55.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.536 --rc genhtml_branch_coverage=1 00:09:55.536 --rc genhtml_function_coverage=1 00:09:55.536 --rc genhtml_legend=1 00:09:55.536 --rc geninfo_all_blocks=1 00:09:55.536 --rc geninfo_unexecuted_blocks=1 00:09:55.536 00:09:55.536 ' 00:09:55.536 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:55.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.536 --rc genhtml_branch_coverage=1 00:09:55.536 --rc genhtml_function_coverage=1 00:09:55.536 --rc genhtml_legend=1 00:09:55.536 --rc geninfo_all_blocks=1 00:09:55.536 --rc geninfo_unexecuted_blocks=1 00:09:55.536 00:09:55.536 ' 00:09:55.536 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:55.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.536 --rc genhtml_branch_coverage=1 00:09:55.536 --rc genhtml_function_coverage=1 00:09:55.536 --rc genhtml_legend=1 00:09:55.536 --rc geninfo_all_blocks=1 00:09:55.536 --rc geninfo_unexecuted_blocks=1 00:09:55.536 00:09:55.536 ' 00:09:55.536 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:55.536 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:55.536 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:09:55.536 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:55.536 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:55.536 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:55.536 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:55.536 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:55.536 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:55.536 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:55.536 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:55.536 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:55.536 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:55.536 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:55.536 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:55.536 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:55.536 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:55.536 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:55.536 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:55.536 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:55.536 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:55.536 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:55.536 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:55.536 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.536 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.536 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.536 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:55.536 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.536 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:55.536 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:55.536 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:55.536 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:55.536 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:55.536 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:55.536 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:55.536 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:55.536 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:55.536 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:55.536 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:55.536 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:55.536 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:55.536 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:55.536 
11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:55.536 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:55.536 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:55.536 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:55.536 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:55.536 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:55.536 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:55.536 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:55.536 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:55.536 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:55.536 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:09:55.536 11:21:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:57.438 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:57.438 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:09:57.438 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:57.438 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:57.438 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:57.438 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:57.438 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:57.438 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:09:57.438 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:57.438 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:09:57.438 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:09:57.438 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:09:57.438 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:09:57.438 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:09:57.438 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:09:57.438 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:57.438 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:57.438 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:57.438 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:57.438 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:57.438 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:57.438 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:57.438 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:57.438 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:57.438 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:57.438 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:57.438 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:57.438 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:57.438 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:57.438 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:57.438 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:57.438 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:57.438 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:57.438 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:57.438 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:57.438 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:57.438 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:57.438 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:57.438 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:57.438 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:57.438 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:57.438 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:57.438 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:57.438 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:57.438 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:57.438 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:57.438 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:57.438 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:57.438 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:57.438 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:57.438 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:57.438 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:57.438 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:57.438 11:21:57 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:57.438 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:57.438 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:57.438 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:57.438 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:57.438 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:57.438 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:57.438 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:57.438 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:57.438 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:57.438 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:57.438 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:57.438 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:57.438 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:57.438 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:57.438 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:57.438 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:57.438 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:57.438 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:57.438 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:57.438 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:09:57.438 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:57.438 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:57.438 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:57.438 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:57.438 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:57.438 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:57.438 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:57.438 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:57.438 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:57.438 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:57.438 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:57.438 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:57.438 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:57.438 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:57.439 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:57.439 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:57.439 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:57.439 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:57.439 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:57.439 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:57.439 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:57.439 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:57.439 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:57.439 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:57.439 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:57.439 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:57.439 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:57.439 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.280 ms 00:09:57.439 00:09:57.439 --- 10.0.0.2 ping statistics --- 00:09:57.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.439 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:09:57.439 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:57.439 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:57.439 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:09:57.439 00:09:57.439 --- 10.0.0.1 ping statistics --- 00:09:57.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.439 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:09:57.439 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:57.439 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:09:57.439 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:57.439 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:57.439 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:57.439 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:57.439 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:57.439 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:57.439 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:57.439 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:57.439 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:57.439 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:57.439 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:57.439 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3732055 00:09:57.439 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:57.439 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 3732055 00:09:57.439 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 3732055 ']' 00:09:57.439 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:57.439 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:57.439 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:57.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:57.439 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:57.439 11:21:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:57.439 [2024-11-02 11:21:57.823521] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 
00:09:57.439 [2024-11-02 11:21:57.823619] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:57.697 [2024-11-02 11:21:57.902106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:57.697 [2024-11-02 11:21:57.952491] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:57.697 [2024-11-02 11:21:57.952556] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:57.697 [2024-11-02 11:21:57.952582] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:57.697 [2024-11-02 11:21:57.952611] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:57.697 [2024-11-02 11:21:57.952630] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:57.697 [2024-11-02 11:21:57.954361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:57.697 [2024-11-02 11:21:57.954432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:57.697 [2024-11-02 11:21:57.954531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:57.697 [2024-11-02 11:21:57.954535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.697 11:21:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:57.697 11:21:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:09:57.697 11:21:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:57.697 11:21:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:57.697 11:21:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:57.697 11:21:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:57.697 11:21:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:57.697 11:21:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.697 11:21:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:57.955 [2024-11-02 11:21:58.102157] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:57.955 11:21:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.955 11:21:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:57.955 11:21:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.955 11:21:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:57.955 Malloc0 00:09:57.955 11:21:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.955 11:21:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:57.955 11:21:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.955 11:21:58 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:09:57.955 11:21:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.955 11:21:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:57.955 11:21:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.955 11:21:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:57.955 11:21:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.955 11:21:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:57.955 11:21:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.955 11:21:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:57.955 [2024-11-02 11:21:58.172188] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:57.955 11:21:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.955 11:21:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:57.955 test case1: single bdev can't be used in multiple subsystems 00:09:57.955 11:21:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:57.955 11:21:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.955 11:21:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:57.955 11:21:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.955 11:21:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:57.955 11:21:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.955 11:21:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:57.955 11:21:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.955 11:21:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:57.955 11:21:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:57.955 11:21:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.955 11:21:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:57.955 [2024-11-02 11:21:58.196011] bdev.c:8192:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:57.955 [2024-11-02 11:21:58.196042] subsystem.c:2151:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:57.955 [2024-11-02 11:21:58.196065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.955 request: 00:09:57.955 { 00:09:57.955 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:57.955 "namespace": { 00:09:57.955 "bdev_name": "Malloc0", 00:09:57.955 "no_auto_visible": false 
00:09:57.955 }, 00:09:57.955 "method": "nvmf_subsystem_add_ns", 00:09:57.955 "req_id": 1 00:09:57.955 } 00:09:57.955 Got JSON-RPC error response 00:09:57.955 response: 00:09:57.955 { 00:09:57.955 "code": -32602, 00:09:57.955 "message": "Invalid parameters" 00:09:57.955 } 00:09:57.955 11:21:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:57.955 11:21:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:57.955 11:21:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:57.955 11:21:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:57.955 Adding namespace failed - expected result. 00:09:57.955 11:21:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:57.955 test case2: host connect to nvmf target in multiple paths 00:09:57.955 11:21:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:57.955 11:21:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.955 11:21:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:57.955 [2024-11-02 11:21:58.204134] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:57.956 11:21:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.956 11:21:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:58.521 11:21:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:59.465 11:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:59.465 11:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:09:59.465 11:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:09:59.465 11:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:09:59.465 11:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 00:10:01.362 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:10:01.362 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:10:01.362 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:10:01.362 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:10:01.362 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:10:01.362 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:10:01.362 11:22:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:01.362 [global] 00:10:01.362 thread=1 00:10:01.362 invalidate=1 00:10:01.362 rw=write 00:10:01.362 time_based=1 00:10:01.362 runtime=1 00:10:01.362 ioengine=libaio 00:10:01.362 direct=1 00:10:01.362 bs=4096 00:10:01.362 iodepth=1 00:10:01.362 norandommap=0 00:10:01.362 numjobs=1 00:10:01.362 00:10:01.362 verify_dump=1 00:10:01.362 verify_backlog=512 00:10:01.362 verify_state_save=0 00:10:01.362 do_verify=1 00:10:01.362 verify=crc32c-intel 00:10:01.362 [job0] 00:10:01.362 filename=/dev/nvme0n1 00:10:01.362 Could not set queue depth (nvme0n1) 00:10:01.362 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:01.362 fio-3.35 00:10:01.362 Starting 1 thread 00:10:02.735 00:10:02.735 job0: (groupid=0, jobs=1): err= 0: pid=3732575: Sat Nov 2 11:22:02 2024 00:10:02.735 read: IOPS=21, BW=85.6KiB/s (87.7kB/s)(88.0KiB/1028msec) 00:10:02.735 slat (nsec): min=8209, max=33956, avg=23914.73, stdev=9289.27 00:10:02.735 clat (usec): min=40909, max=41357, avg=40983.14, stdev=88.38 00:10:02.735 lat (usec): min=40942, max=41365, avg=41007.05, stdev=83.75 00:10:02.735 clat percentiles (usec): 00:10:02.735 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:10:02.735 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:02.735 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:02.735 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:02.735 | 99.99th=[41157] 00:10:02.735 write: IOPS=498, BW=1992KiB/s (2040kB/s)(2048KiB/1028msec); 0 zone resets 00:10:02.735 slat (usec): min=7, max=28825, avg=66.01, stdev=1273.50 00:10:02.735 clat (usec): min=156, max=302, avg=175.84, stdev=14.40 00:10:02.735 lat (usec): min=164, max=29118, avg=241.84, stdev=1278.75 00:10:02.735 clat percentiles (usec): 00:10:02.735 | 1.00th=[ 159], 5.00th=[ 161], 10.00th=[ 161], 20.00th=[ 165], 00:10:02.735 | 30.00th=[ 167], 40.00th=[ 172], 50.00th=[ 176], 60.00th=[ 178], 00:10:02.735 | 70.00th=[ 182], 80.00th=[ 184], 90.00th=[ 188], 95.00th=[ 194], 00:10:02.735 | 99.00th=[ 223], 99.50th=[ 293], 99.90th=[ 302], 99.95th=[ 302], 00:10:02.735 | 99.99th=[ 302] 00:10:02.735 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:10:02.735 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:02.735 lat (usec) : 250=95.32%, 500=0.56% 00:10:02.735 lat (msec) : 50=4.12% 00:10:02.735 cpu : usr=0.19%, sys=0.58%, ctx=536, majf=0, minf=1 00:10:02.735 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:02.735 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.735 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.735 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:02.735 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:02.735 00:10:02.735 Run status group 0 (all jobs): 00:10:02.735 READ: bw=85.6KiB/s (87.7kB/s), 85.6KiB/s-85.6KiB/s (87.7kB/s-87.7kB/s), io=88.0KiB (90.1kB), run=1028-1028msec 00:10:02.735 WRITE: bw=1992KiB/s (2040kB/s), 1992KiB/s-1992KiB/s (2040kB/s-2040kB/s), io=2048KiB (2097kB), run=1028-1028msec 00:10:02.735 00:10:02.735 Disk stats (read/write): 00:10:02.735 nvme0n1: ios=44/512, merge=0/0, ticks=1724/83, in_queue=1807, util=98.60% 00:10:02.735 11:22:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:02.735 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:02.735 11:22:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:02.735 11:22:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:10:02.735 11:22:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:10:02.735 11:22:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:02.735 11:22:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:10:02.735 11:22:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:02.735 11:22:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:10:02.735 11:22:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:02.735 11:22:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:02.735 11:22:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:02.735 11:22:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:02.735 11:22:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:02.735 11:22:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:02.735 11:22:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:02.735 11:22:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:02.735 rmmod nvme_tcp 00:10:02.735 rmmod nvme_fabrics 00:10:02.735 rmmod nvme_keyring 00:10:02.735 11:22:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:02.735 11:22:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:02.735 11:22:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:02.735 11:22:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3732055 ']' 00:10:02.735 11:22:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3732055 00:10:02.735 11:22:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 3732055 ']' 00:10:02.735 11:22:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 3732055 00:10:02.735 11:22:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:10:02.735 11:22:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:02.735 11:22:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3732055 00:10:02.735 11:22:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:02.735 11:22:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:02.735 11:22:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3732055' 00:10:02.735 killing process with pid 3732055 00:10:02.735 11:22:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@971 -- # kill 3732055 00:10:02.735 11:22:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@976 -- # wait 3732055 00:10:02.994 11:22:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:02.994 11:22:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:02.994 11:22:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:02.994 11:22:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:02.994 11:22:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:10:02.994 11:22:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:02.994 11:22:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:10:02.994 11:22:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:02.994 11:22:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:02.994 11:22:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:02.994 11:22:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:02.994 11:22:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:05.529 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:05.529 00:10:05.529 real 0m9.876s 00:10:05.529 user 0m22.517s 00:10:05.529 sys 0m2.286s 00:10:05.529 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:05.529 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:05.529 ************************************ 00:10:05.529 END TEST nvmf_nmic 00:10:05.529 ************************************ 00:10:05.529 11:22:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:05.529 11:22:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:05.529 11:22:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:05.529 11:22:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:05.529 ************************************ 00:10:05.529 START TEST nvmf_fio_target 00:10:05.529 ************************************ 00:10:05.529 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:05.529 * Looking for test storage... 
00:10:05.529 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:05.529 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:05.529 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:10:05.529 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:05.529 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:05.529 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:05.529 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:05.529 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:05.529 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:05.529 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:05.529 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:05.529 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:05.529 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:05.529 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:05.529 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:05.529 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:05.529 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:05.529 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:05.529 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:05.529 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:05.529 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:05.529 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:05.529 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:05.529 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:05.529 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:05.529 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:05.529 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:05.529 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:05.529 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:05.529 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:05.529 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:05.529 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:05.529 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:05.529 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:05.529 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:05.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.529 --rc genhtml_branch_coverage=1 00:10:05.529 --rc genhtml_function_coverage=1 00:10:05.529 --rc genhtml_legend=1 00:10:05.529 --rc geninfo_all_blocks=1 00:10:05.529 --rc geninfo_unexecuted_blocks=1 00:10:05.529 00:10:05.529 ' 00:10:05.529 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:05.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.529 --rc genhtml_branch_coverage=1 00:10:05.529 --rc genhtml_function_coverage=1 00:10:05.529 --rc genhtml_legend=1 00:10:05.529 --rc geninfo_all_blocks=1 00:10:05.529 --rc geninfo_unexecuted_blocks=1 00:10:05.529 00:10:05.529 ' 00:10:05.529 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:05.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.529 --rc genhtml_branch_coverage=1 00:10:05.529 --rc genhtml_function_coverage=1 00:10:05.529 --rc genhtml_legend=1 00:10:05.529 --rc geninfo_all_blocks=1 00:10:05.529 --rc geninfo_unexecuted_blocks=1 00:10:05.529 00:10:05.529 ' 00:10:05.530 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:05.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.530 --rc genhtml_branch_coverage=1 00:10:05.530 --rc genhtml_function_coverage=1 00:10:05.530 --rc genhtml_legend=1 00:10:05.530 --rc geninfo_all_blocks=1 00:10:05.530 --rc geninfo_unexecuted_blocks=1 00:10:05.530 00:10:05.530 ' 00:10:05.530 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:05.530 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:10:05.530 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:05.530 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:05.530 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:05.530 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:05.530 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:05.530 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:05.530 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:05.530 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:05.530 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:05.530 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:05.530 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:05.530 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:05.530 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:05.530 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:05.530 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:05.530 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:05.530 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:05.530 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:05.530 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:05.530 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:05.530 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:05.530 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.530 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.530 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.530 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:05.530 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.530 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:05.530 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:05.530 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:05.530 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:05.530 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:05.530 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:05.530 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:05.530 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:05.530 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:05.530 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:05.530 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:05.530 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:05.530 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:05.530 11:22:05 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:05.530 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:05.530 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:05.530 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:05.530 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:05.530 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:05.530 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:05.530 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:05.530 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:05.530 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:05.530 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:05.530 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:05.530 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:10:05.530 11:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:07.433 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:07.433 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:07.433 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:07.433 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:07.433 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:07.433 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:07.433 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:07.433 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:07.433 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:07.433 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:07.433 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:07.433 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:07.433 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:07.433 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:07.433 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:07.433 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:07.433 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:07.433 11:22:07 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:07.433 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:07.433 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:07.433 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:07.433 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:07.433 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:07.433 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:07.433 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:07.433 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:07.433 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:07.433 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:07.433 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:07.433 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:07.433 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:07.433 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:07.433 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:07.433 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:07.433 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:07.434 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:07.434 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:07.434 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:07.434 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:07.434 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:07.434 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:07.434 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:07.434 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:07.434 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:07.434 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:07.434 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:07.434 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:07.434 11:22:07 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:07.434 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:07.434 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:07.434 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:07.434 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:07.434 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:07.434 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:07.434 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:07.434 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:07.434 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:07.434 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:07.434 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:07.434 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:07.434 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:07.434 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:07.434 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:07.434 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:07.434 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:07.434 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:07.434 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:07.434 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:07.434 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:07.434 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:07.434 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:07.434 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:07.434 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:07.434 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:10:07.434 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:07.434 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:07.434 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:07.434 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:07.434 11:22:07 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:07.434 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:07.434 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:07.434 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:07.434 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:07.434 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:07.434 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:07.434 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:07.434 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:07.434 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:07.434 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:07.434 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:07.434 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:07.434 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:07.434 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:07.434 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:07.434 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:07.434 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:07.434 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:07.434 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:07.434 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:07.434 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:07.434 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:07.434 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms 00:10:07.434 00:10:07.434 --- 10.0.0.2 ping statistics --- 00:10:07.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:07.434 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:10:07.434 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:07.434 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:07.434 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.072 ms 00:10:07.434 00:10:07.434 --- 10.0.0.1 ping statistics --- 00:10:07.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:07.434 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:10:07.434 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:07.434 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:10:07.434 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:07.434 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:07.434 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:07.434 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:07.434 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:07.434 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:07.434 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:07.434 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:07.434 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:07.434 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:07.434 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:07.434 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3734725 00:10:07.434 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:07.434 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3734725 00:10:07.434 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 3734725 ']' 00:10:07.434 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:07.434 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:07.434 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:07.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:07.434 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:07.434 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:07.434 [2024-11-02 11:22:07.691447] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 
00:10:07.434 [2024-11-02 11:22:07.691522] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:07.434 [2024-11-02 11:22:07.763040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:07.434 [2024-11-02 11:22:07.808687] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:07.434 [2024-11-02 11:22:07.808741] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:07.434 [2024-11-02 11:22:07.808763] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:07.434 [2024-11-02 11:22:07.808794] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:07.434 [2024-11-02 11:22:07.808808] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:07.434 [2024-11-02 11:22:07.810429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:07.434 [2024-11-02 11:22:07.810478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:07.434 [2024-11-02 11:22:07.810539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:07.434 [2024-11-02 11:22:07.810542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.693 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:07.693 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:10:07.693 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:07.693 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:07.693 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:07.693 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:07.693 11:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:07.950 [2024-11-02 11:22:08.198438] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:07.950 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:08.208 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:08.208 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:08.810 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:08.811 11:22:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:09.106 11:22:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:09.106 11:22:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:09.106 11:22:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:09.106 11:22:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:09.364 11:22:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:09.930 11:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:09.930 11:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:10.188 11:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:10.188 11:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:10.446 11:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:10.446 11:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:10.703 11:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:10.960 11:22:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:10.960 11:22:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:11.218 11:22:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:11.218 11:22:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:11.476 11:22:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:11.733 [2024-11-02 11:22:11.978905] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:11.733 11:22:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:11.991 11:22:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:12.248 11:22:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:13.183 11:22:13 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:13.183 11:22:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:10:13.183 11:22:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:10:13.183 11:22:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:10:13.183 11:22:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:10:13.183 11:22:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:10:15.081 11:22:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:10:15.081 11:22:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:10:15.081 11:22:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:10:15.081 11:22:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:10:15.081 11:22:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:10:15.081 11:22:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # return 0 00:10:15.081 11:22:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:15.081 [global] 00:10:15.081 thread=1 00:10:15.081 invalidate=1 00:10:15.081 rw=write 00:10:15.081 time_based=1 00:10:15.081 runtime=1 00:10:15.081 ioengine=libaio 00:10:15.081 direct=1 00:10:15.081 bs=4096 00:10:15.081 iodepth=1 00:10:15.081 norandommap=0 00:10:15.081 numjobs=1 00:10:15.081 00:10:15.081 verify_dump=1 00:10:15.081 verify_backlog=512 00:10:15.081 verify_state_save=0 00:10:15.081 do_verify=1 00:10:15.081 verify=crc32c-intel 00:10:15.081 [job0] 00:10:15.081 filename=/dev/nvme0n1 00:10:15.081 [job1] 00:10:15.081 filename=/dev/nvme0n2 00:10:15.081 [job2] 00:10:15.081 filename=/dev/nvme0n3 00:10:15.081 [job3] 00:10:15.081 filename=/dev/nvme0n4 00:10:15.081 Could not set queue depth (nvme0n1) 00:10:15.081 Could not set queue depth (nvme0n2) 00:10:15.081 Could not set queue depth (nvme0n3) 00:10:15.081 Could not set queue depth (nvme0n4) 00:10:15.338 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:15.338 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:15.338 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:15.338 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:15.338 fio-3.35 00:10:15.338 Starting 4 threads 00:10:16.711 00:10:16.711 job0: (groupid=0, jobs=1): err= 0: pid=3735818: Sat Nov 2 11:22:16 2024 00:10:16.711 read: IOPS=1240, BW=4963KiB/s (5082kB/s)(4968KiB/1001msec) 00:10:16.711 slat (nsec): min=4967, max=72273, avg=20039.03, stdev=10936.71 00:10:16.711 clat (usec): min=342, max=713, avg=431.80, stdev=60.22 00:10:16.711 lat (usec): min=355, max=729, avg=451.84, stdev=63.68 00:10:16.711 clat percentiles (usec): 00:10:16.711 | 1.00th=[ 351], 5.00th=[ 359], 10.00th=[ 363], 20.00th=[ 371], 
00:10:16.711 | 30.00th=[ 383], 40.00th=[ 404], 50.00th=[ 429], 60.00th=[ 445], 00:10:16.711 | 70.00th=[ 469], 80.00th=[ 490], 90.00th=[ 515], 95.00th=[ 537], 00:10:16.711 | 99.00th=[ 578], 99.50th=[ 611], 99.90th=[ 652], 99.95th=[ 717], 00:10:16.711 | 99.99th=[ 717] 00:10:16.711 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:10:16.711 slat (nsec): min=6741, max=84714, avg=17813.16, stdev=8538.36 00:10:16.712 clat (usec): min=191, max=572, avg=258.92, stdev=53.99 00:10:16.712 lat (usec): min=203, max=593, avg=276.73, stdev=56.82 00:10:16.712 clat percentiles (usec): 00:10:16.712 | 1.00th=[ 200], 5.00th=[ 206], 10.00th=[ 210], 20.00th=[ 219], 00:10:16.712 | 30.00th=[ 227], 40.00th=[ 235], 50.00th=[ 241], 60.00th=[ 251], 00:10:16.712 | 70.00th=[ 265], 80.00th=[ 302], 90.00th=[ 326], 95.00th=[ 383], 00:10:16.712 | 99.00th=[ 441], 99.50th=[ 453], 99.90th=[ 490], 99.95th=[ 570], 00:10:16.712 | 99.99th=[ 570] 00:10:16.712 bw ( KiB/s): min= 6784, max= 6784, per=37.14%, avg=6784.00, stdev= 0.00, samples=1 00:10:16.712 iops : min= 1696, max= 1696, avg=1696.00, stdev= 0.00, samples=1 00:10:16.712 lat (usec) : 250=32.40%, 500=60.66%, 750=6.95% 00:10:16.712 cpu : usr=2.30%, sys=5.70%, ctx=2780, majf=0, minf=1 00:10:16.712 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:16.712 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.712 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.712 issued rwts: total=1242,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:16.712 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:16.712 job1: (groupid=0, jobs=1): err= 0: pid=3735839: Sat Nov 2 11:22:16 2024 00:10:16.712 read: IOPS=21, BW=87.3KiB/s (89.4kB/s)(88.0KiB/1008msec) 00:10:16.712 slat (nsec): min=6383, max=35383, avg=19896.36, stdev=8691.75 00:10:16.712 clat (usec): min=413, max=42000, avg=39380.60, stdev=8713.90 00:10:16.712 lat (usec): min=434, max=42019, avg=39400.50, stdev=8713.63 00:10:16.712 clat percentiles (usec): 00:10:16.712 | 1.00th=[ 412], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:10:16.712 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:16.712 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:10:16.712 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:16.712 | 99.99th=[42206] 00:10:16.712 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:10:16.712 slat (nsec): min=7017, max=77650, avg=16382.83, stdev=7658.72 00:10:16.712 clat (usec): min=195, max=782, avg=254.08, stdev=60.13 00:10:16.712 lat (usec): min=204, max=808, avg=270.46, stdev=60.72 00:10:16.712 clat percentiles (usec): 00:10:16.712 | 1.00th=[ 200], 5.00th=[ 212], 10.00th=[ 217], 20.00th=[ 223], 00:10:16.712 | 30.00th=[ 231], 40.00th=[ 239], 50.00th=[ 245], 60.00th=[ 251], 00:10:16.712 | 70.00th=[ 258], 80.00th=[ 265], 90.00th=[ 285], 95.00th=[ 322], 00:10:16.712 | 99.00th=[ 486], 99.50th=[ 783], 99.90th=[ 783], 99.95th=[ 783], 00:10:16.712 | 99.99th=[ 783] 00:10:16.712 bw ( KiB/s): min= 4096, max= 4096, per=22.42%, avg=4096.00, stdev= 0.00, samples=1 00:10:16.712 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:16.712 lat (usec) : 250=55.81%, 500=39.33%, 750=0.37%, 1000=0.56% 00:10:16.712 lat (msec) : 50=3.93% 00:10:16.712 cpu : usr=0.40%, sys=0.79%, ctx=535, majf=0, minf=1 00:10:16.712 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:16.712 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.712 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.712 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:16.712 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:16.712 job2: (groupid=0, jobs=1): err= 0: pid=3735863: Sat Nov 2 11:22:16 2024 00:10:16.712 read: IOPS=1401, BW=5606KiB/s (5741kB/s)(5612KiB/1001msec) 00:10:16.712 slat (nsec): min=4809, max=71898, avg=17290.84, stdev=9795.46 00:10:16.712 clat (usec): min=257, max=41519, avg=423.94, stdev=1102.27 00:10:16.712 lat (usec): min=262, max=41533, avg=441.24, stdev=1102.56 00:10:16.712 clat percentiles (usec): 00:10:16.712 | 1.00th=[ 273], 5.00th=[ 281], 10.00th=[ 289], 20.00th=[ 302], 00:10:16.712 | 30.00th=[ 318], 40.00th=[ 334], 50.00th=[ 371], 60.00th=[ 424], 00:10:16.712 | 70.00th=[ 465], 80.00th=[ 494], 90.00th=[ 519], 95.00th=[ 545], 00:10:16.712 | 99.00th=[ 594], 99.50th=[ 676], 99.90th=[ 1106], 99.95th=[41681], 00:10:16.712 | 99.99th=[41681] 00:10:16.712 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:10:16.712 slat (nsec): min=6791, max=57892, avg=16097.19, stdev=7946.96 00:10:16.712 clat (usec): min=166, max=1329, avg=223.39, stdev=47.14 00:10:16.712 lat (usec): min=175, max=1347, avg=239.49, stdev=50.54 00:10:16.712 clat percentiles (usec): 00:10:16.712 | 1.00th=[ 174], 5.00th=[ 182], 10.00th=[ 188], 20.00th=[ 194], 00:10:16.712 | 30.00th=[ 200], 40.00th=[ 208], 50.00th=[ 217], 60.00th=[ 229], 00:10:16.712 | 70.00th=[ 237], 80.00th=[ 247], 90.00th=[ 260], 95.00th=[ 277], 00:10:16.712 | 99.00th=[ 343], 99.50th=[ 379], 99.90th=[ 742], 99.95th=[ 1336], 00:10:16.712 | 99.99th=[ 1336] 00:10:16.712 bw ( KiB/s): min= 6944, max= 6944, per=38.01%, avg=6944.00, stdev= 0.00, samples=1 00:10:16.712 iops : min= 1736, max= 1736, avg=1736.00, stdev= 0.00, samples=1 00:10:16.712 lat (usec) : 250=43.11%, 500=49.27%, 750=7.42%, 1000=0.10% 00:10:16.712 lat (msec) : 2=0.07%, 50=0.03% 00:10:16.712 cpu : usr=3.20%, sys=5.40%, ctx=2942, majf=0, minf=1 00:10:16.712 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:16.712 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.712 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.712 issued rwts: total=1403,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:16.712 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:16.712 job3: (groupid=0, jobs=1): err= 0: pid=3735864: Sat Nov 2 11:22:16 2024 00:10:16.712 read: IOPS=1014, BW=4059KiB/s (4157kB/s)(4096KiB/1009msec) 00:10:16.712 slat (nsec): min=5624, max=68730, avg=20769.67, stdev=10875.79 00:10:16.712 clat (usec): min=299, max=40979, avg=638.97, stdev=2824.19 00:10:16.712 lat (usec): min=306, max=40998, avg=659.74, stdev=2823.99 00:10:16.712 clat percentiles (usec): 00:10:16.712 | 1.00th=[ 314], 5.00th=[ 334], 10.00th=[ 351], 20.00th=[ 367], 00:10:16.712 | 30.00th=[ 400], 40.00th=[ 429], 50.00th=[ 445], 60.00th=[ 465], 00:10:16.712 | 70.00th=[ 482], 80.00th=[ 502], 90.00th=[ 529], 95.00th=[ 553], 00:10:16.712 | 99.00th=[ 644], 99.50th=[ 766], 99.90th=[41157], 99.95th=[41157], 00:10:16.712 | 99.99th=[41157] 00:10:16.712 write: IOPS=1014, BW=4059KiB/s (4157kB/s)(4096KiB/1009msec); 0 zone resets 00:10:16.712 slat (usec): min=6, max=37137, avg=58.77, stdev=1162.00 00:10:16.712 clat (usec): min=171, max=513, avg=254.97, stdev=59.80 00:10:16.712 lat (usec): min=188, max=37521, avg=313.74, 
stdev=1167.81 00:10:16.712 clat percentiles (usec): 00:10:16.712 | 1.00th=[ 182], 5.00th=[ 190], 10.00th=[ 196], 20.00th=[ 212], 00:10:16.712 | 30.00th=[ 221], 40.00th=[ 231], 50.00th=[ 241], 60.00th=[ 251], 00:10:16.712 | 70.00th=[ 265], 80.00th=[ 285], 90.00th=[ 347], 95.00th=[ 396], 00:10:16.712 | 99.00th=[ 461], 99.50th=[ 474], 99.90th=[ 502], 99.95th=[ 515], 00:10:16.712 | 99.99th=[ 515] 00:10:16.712 bw ( KiB/s): min= 2536, max= 5656, per=22.42%, avg=4096.00, stdev=2206.17, samples=2 00:10:16.712 iops : min= 634, max= 1414, avg=1024.00, stdev=551.54, samples=2 00:10:16.712 lat (usec) : 250=30.03%, 500=59.18%, 750=10.50%, 1000=0.05% 00:10:16.712 lat (msec) : 50=0.24% 00:10:16.712 cpu : usr=1.98%, sys=4.46%, ctx=2052, majf=0, minf=1 00:10:16.712 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:16.712 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.712 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.712 issued rwts: total=1024,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:16.712 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:16.712 00:10:16.712 Run status group 0 (all jobs): 00:10:16.712 READ: bw=14.3MiB/s (15.0MB/s), 87.3KiB/s-5606KiB/s (89.4kB/s-5741kB/s), io=14.4MiB (15.1MB), run=1001-1009msec 00:10:16.712 WRITE: bw=17.8MiB/s (18.7MB/s), 2032KiB/s-6138KiB/s (2081kB/s-6285kB/s), io=18.0MiB (18.9MB), run=1001-1009msec 00:10:16.712 00:10:16.712 Disk stats (read/write): 00:10:16.712 nvme0n1: ios=1079/1281, merge=0/0, ticks=514/312, in_queue=826, util=86.57% 00:10:16.712 nvme0n2: ios=38/512, merge=0/0, ticks=1565/127, in_queue=1692, util=89.41% 00:10:16.712 nvme0n3: ios=1081/1421, merge=0/0, ticks=806/303, in_queue=1109, util=93.72% 00:10:16.712 nvme0n4: ios=1081/1024, merge=0/0, ticks=667/255, in_queue=922, util=95.46% 00:10:16.712 11:22:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:16.712 [global] 00:10:16.712 thread=1 00:10:16.712 invalidate=1 00:10:16.712 rw=randwrite 00:10:16.712 time_based=1 00:10:16.712 runtime=1 00:10:16.712 ioengine=libaio 00:10:16.712 direct=1 00:10:16.712 bs=4096 00:10:16.712 iodepth=1 00:10:16.712 norandommap=0 00:10:16.712 numjobs=1 00:10:16.712 00:10:16.712 verify_dump=1 00:10:16.712 verify_backlog=512 00:10:16.712 verify_state_save=0 00:10:16.712 do_verify=1 00:10:16.712 verify=crc32c-intel 00:10:16.712 [job0] 00:10:16.712 filename=/dev/nvme0n1 00:10:16.712 [job1] 00:10:16.712 filename=/dev/nvme0n2 00:10:16.712 [job2] 00:10:16.712 filename=/dev/nvme0n3 00:10:16.712 [job3] 00:10:16.712 filename=/dev/nvme0n4 00:10:16.712 Could not set queue depth (nvme0n1) 00:10:16.712 Could not set queue depth (nvme0n2) 00:10:16.712 Could not set queue depth (nvme0n3) 00:10:16.712 Could not set queue depth (nvme0n4) 00:10:16.712 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:16.712 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:16.712 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:16.712 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:16.712 fio-3.35 00:10:16.712 Starting 4 threads 00:10:18.085 00:10:18.085 job0: (groupid=0, jobs=1): err= 0: pid=3736092: Sat Nov 
2 11:22:18 2024 00:10:18.085 read: IOPS=17, BW=71.9KiB/s (73.7kB/s)(72.0KiB/1001msec) 00:10:18.085 slat (nsec): min=15014, max=37167, avg=26017.33, stdev=9946.69 00:10:18.085 clat (usec): min=40757, max=41051, avg=40952.17, stdev=75.93 00:10:18.085 lat (usec): min=40792, max=41067, avg=40978.19, stdev=72.24 00:10:18.085 clat percentiles (usec): 00:10:18.085 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:10:18.085 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:18.085 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:18.085 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:18.085 | 99.99th=[41157] 00:10:18.085 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:10:18.085 slat (nsec): min=10693, max=73522, avg=30555.04, stdev=11809.23 00:10:18.085 clat (usec): min=214, max=772, avg=473.35, stdev=100.83 00:10:18.085 lat (usec): min=239, max=803, avg=503.90, stdev=99.78 00:10:18.085 clat percentiles (usec): 00:10:18.085 | 1.00th=[ 239], 5.00th=[ 289], 10.00th=[ 347], 20.00th=[ 400], 00:10:18.085 | 30.00th=[ 424], 40.00th=[ 445], 50.00th=[ 469], 60.00th=[ 498], 00:10:18.085 | 70.00th=[ 537], 80.00th=[ 562], 90.00th=[ 603], 95.00th=[ 627], 00:10:18.085 | 99.00th=[ 709], 99.50th=[ 750], 99.90th=[ 775], 99.95th=[ 775], 00:10:18.085 | 99.99th=[ 775] 00:10:18.085 bw ( KiB/s): min= 4096, max= 4096, per=33.60%, avg=4096.00, stdev= 0.00, samples=1 00:10:18.085 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:18.085 lat (usec) : 250=2.26%, 500=56.42%, 750=37.55%, 1000=0.38% 00:10:18.085 lat (msec) : 50=3.40% 00:10:18.085 cpu : usr=1.40%, sys=1.70%, ctx=530, majf=0, minf=1 00:10:18.085 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:18.085 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:18.085 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:18.085 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:18.085 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:18.085 job1: (groupid=0, jobs=1): err= 0: pid=3736093: Sat Nov 2 11:22:18 2024 00:10:18.085 read: IOPS=674, BW=2699KiB/s (2763kB/s)(2704KiB/1002msec) 00:10:18.085 slat (nsec): min=6699, max=43513, avg=16295.41, stdev=6599.07 00:10:18.086 clat (usec): min=250, max=42437, avg=1054.30, stdev=5180.92 00:10:18.086 lat (usec): min=257, max=42444, avg=1070.59, stdev=5181.08 00:10:18.086 clat percentiles (usec): 00:10:18.086 | 1.00th=[ 265], 5.00th=[ 289], 10.00th=[ 297], 20.00th=[ 318], 00:10:18.086 | 30.00th=[ 326], 40.00th=[ 338], 50.00th=[ 367], 60.00th=[ 396], 00:10:18.086 | 70.00th=[ 433], 80.00th=[ 474], 90.00th=[ 523], 95.00th=[ 586], 00:10:18.086 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:10:18.086 | 99.99th=[42206] 00:10:18.086 write: IOPS=1021, BW=4088KiB/s (4186kB/s)(4096KiB/1002msec); 0 zone resets 00:10:18.086 slat (nsec): min=7151, max=66941, avg=18959.40, stdev=8479.09 00:10:18.086 clat (usec): min=173, max=437, avg=243.97, stdev=49.75 00:10:18.086 lat (usec): min=185, max=474, avg=262.93, stdev=52.99 00:10:18.086 clat percentiles (usec): 00:10:18.086 | 1.00th=[ 182], 5.00th=[ 196], 10.00th=[ 200], 20.00th=[ 206], 00:10:18.086 | 30.00th=[ 210], 40.00th=[ 219], 50.00th=[ 227], 60.00th=[ 239], 00:10:18.086 | 70.00th=[ 255], 80.00th=[ 281], 90.00th=[ 318], 95.00th=[ 355], 00:10:18.086 | 99.00th=[ 404], 99.50th=[ 408], 99.90th=[ 437], 99.95th=[ 437], 
00:10:18.086 | 99.99th=[ 437] 00:10:18.086 bw ( KiB/s): min= 4096, max= 4096, per=33.60%, avg=4096.00, stdev= 0.00, samples=2 00:10:18.086 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:10:18.086 lat (usec) : 250=40.35%, 500=53.65%, 750=5.35% 00:10:18.086 lat (msec) : 50=0.65% 00:10:18.086 cpu : usr=2.40%, sys=4.00%, ctx=1701, majf=0, minf=1 00:10:18.086 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:18.086 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:18.086 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:18.086 issued rwts: total=676,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:18.086 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:18.086 job2: (groupid=0, jobs=1): err= 0: pid=3736094: Sat Nov 2 11:22:18 2024 00:10:18.086 read: IOPS=859, BW=3437KiB/s (3519kB/s)(3440KiB/1001msec) 00:10:18.086 slat (nsec): min=5930, max=69643, avg=23815.95, stdev=12051.17 00:10:18.086 clat (usec): min=247, max=42375, avg=854.15, stdev=4380.82 00:10:18.086 lat (usec): min=253, max=42409, avg=877.97, stdev=4381.45 00:10:18.086 clat percentiles (usec): 00:10:18.086 | 1.00th=[ 255], 5.00th=[ 273], 10.00th=[ 285], 20.00th=[ 302], 00:10:18.086 | 30.00th=[ 318], 40.00th=[ 343], 50.00th=[ 375], 60.00th=[ 392], 00:10:18.086 | 70.00th=[ 420], 80.00th=[ 445], 90.00th=[ 510], 95.00th=[ 570], 00:10:18.086 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:10:18.086 | 99.99th=[42206] 00:10:18.086 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:10:18.086 slat (nsec): min=6992, max=44450, avg=11882.17, stdev=5374.64 00:10:18.086 clat (usec): min=170, max=405, avg=217.08, stdev=36.88 00:10:18.086 lat (usec): min=178, max=423, avg=228.96, stdev=39.66 00:10:18.086 clat percentiles (usec): 00:10:18.086 | 1.00th=[ 176], 5.00th=[ 180], 10.00th=[ 182], 20.00th=[ 186], 00:10:18.086 | 30.00th=[ 192], 40.00th=[ 204], 50.00th=[ 215], 60.00th=[ 221], 00:10:18.086 | 70.00th=[ 227], 80.00th=[ 235], 90.00th=[ 253], 95.00th=[ 289], 00:10:18.086 | 99.00th=[ 359], 99.50th=[ 371], 99.90th=[ 404], 99.95th=[ 404], 00:10:18.086 | 99.99th=[ 404] 00:10:18.086 bw ( KiB/s): min= 4096, max= 4096, per=33.60%, avg=4096.00, stdev= 0.00, samples=1 00:10:18.086 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:18.086 lat (usec) : 250=48.57%, 500=46.60%, 750=4.25%, 1000=0.05% 00:10:18.086 lat (msec) : 50=0.53% 00:10:18.086 cpu : usr=1.20%, sys=3.80%, ctx=1886, majf=0, minf=1 00:10:18.086 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:18.086 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:18.086 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:18.086 issued rwts: total=860,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:18.086 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:18.086 job3: (groupid=0, jobs=1): err= 0: pid=3736095: Sat Nov 2 11:22:18 2024 00:10:18.086 read: IOPS=458, BW=1833KiB/s (1877kB/s)(1848KiB/1008msec) 00:10:18.086 slat (nsec): min=5704, max=22818, avg=12739.46, stdev=3848.97 00:10:18.086 clat (usec): min=310, max=41988, avg=1681.18, stdev=6974.97 00:10:18.086 lat (usec): min=325, max=42004, avg=1693.92, stdev=6975.26 00:10:18.086 clat percentiles (usec): 00:10:18.086 | 1.00th=[ 334], 5.00th=[ 355], 10.00th=[ 379], 20.00th=[ 396], 00:10:18.086 | 30.00th=[ 404], 40.00th=[ 420], 50.00th=[ 441], 60.00th=[ 457], 00:10:18.086 | 
70.00th=[ 482], 80.00th=[ 519], 90.00th=[ 570], 95.00th=[ 619], 00:10:18.086 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:10:18.086 | 99.99th=[42206] 00:10:18.086 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:10:18.086 slat (nsec): min=7660, max=37080, avg=11580.04, stdev=4131.73 00:10:18.086 clat (usec): min=240, max=646, avg=420.37, stdev=106.45 00:10:18.086 lat (usec): min=249, max=683, avg=431.95, stdev=105.05 00:10:18.086 clat percentiles (usec): 00:10:18.086 | 1.00th=[ 253], 5.00th=[ 269], 10.00th=[ 277], 20.00th=[ 297], 00:10:18.086 | 30.00th=[ 351], 40.00th=[ 379], 50.00th=[ 420], 60.00th=[ 469], 00:10:18.086 | 70.00th=[ 498], 80.00th=[ 537], 90.00th=[ 553], 95.00th=[ 586], 00:10:18.086 | 99.00th=[ 603], 99.50th=[ 619], 99.90th=[ 644], 99.95th=[ 644], 00:10:18.086 | 99.99th=[ 644] 00:10:18.086 bw ( KiB/s): min= 4096, max= 4096, per=33.60%, avg=4096.00, stdev= 0.00, samples=1 00:10:18.086 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:18.086 lat (usec) : 250=0.31%, 500=72.90%, 750=25.36% 00:10:18.086 lat (msec) : 50=1.44% 00:10:18.086 cpu : usr=0.60%, sys=1.19%, ctx=974, majf=0, minf=1 00:10:18.086 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:18.086 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:18.086 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:18.086 issued rwts: total=462,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:18.086 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:18.086 00:10:18.086 Run status group 0 (all jobs): 00:10:18.086 READ: bw=8000KiB/s (8192kB/s), 71.9KiB/s-3437KiB/s (73.7kB/s-3519kB/s), io=8064KiB (8258kB), run=1001-1008msec 00:10:18.086 WRITE: bw=11.9MiB/s (12.5MB/s), 2032KiB/s-4092KiB/s (2081kB/s-4190kB/s), io=12.0MiB (12.6MB), run=1001-1008msec 00:10:18.086 00:10:18.086 Disk stats (read/write): 00:10:18.086 nvme0n1: ios=64/512, merge=0/0, ticks=606/218, in_queue=824, util=87.17% 00:10:18.086 nvme0n2: ios=562/807, merge=0/0, ticks=675/194, in_queue=869, util=91.16% 00:10:18.086 nvme0n3: ios=787/1024, merge=0/0, ticks=1497/213, in_queue=1710, util=100.00% 00:10:18.086 nvme0n4: ios=515/512, merge=0/0, ticks=700/197, in_queue=897, util=95.46% 00:10:18.086 11:22:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:18.086 [global] 00:10:18.086 thread=1 00:10:18.086 invalidate=1 00:10:18.086 rw=write 00:10:18.087 time_based=1 00:10:18.087 runtime=1 00:10:18.087 ioengine=libaio 00:10:18.087 direct=1 00:10:18.087 bs=4096 00:10:18.087 iodepth=128 00:10:18.087 norandommap=0 00:10:18.087 numjobs=1 00:10:18.087 00:10:18.087 verify_dump=1 00:10:18.087 verify_backlog=512 00:10:18.087 verify_state_save=0 00:10:18.087 do_verify=1 00:10:18.087 verify=crc32c-intel 00:10:18.087 [job0] 00:10:18.087 filename=/dev/nvme0n1 00:10:18.087 [job1] 00:10:18.087 filename=/dev/nvme0n2 00:10:18.087 [job2] 00:10:18.087 filename=/dev/nvme0n3 00:10:18.087 [job3] 00:10:18.087 filename=/dev/nvme0n4 00:10:18.087 Could not set queue depth (nvme0n1) 00:10:18.087 Could not set queue depth (nvme0n2) 00:10:18.087 Could not set queue depth (nvme0n3) 00:10:18.087 Could not set queue depth (nvme0n4) 00:10:18.087 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:18.087 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:18.087 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:18.087 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:18.087 fio-3.35 00:10:18.087 Starting 4 threads 00:10:19.460 00:10:19.460 job0: (groupid=0, jobs=1): err= 0: pid=3736324: Sat Nov 2 11:22:19 2024 00:10:19.460 read: IOPS=4055, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1010msec) 00:10:19.460 slat (usec): min=2, max=13252, avg=102.69, stdev=750.56 00:10:19.460 clat (usec): min=3924, max=37474, avg=13740.60, stdev=4167.00 00:10:19.460 lat (usec): min=3928, max=37488, avg=13843.29, stdev=4209.17 00:10:19.460 clat percentiles (usec): 00:10:19.460 | 1.00th=[ 7701], 5.00th=[ 7963], 10.00th=[ 9110], 20.00th=[10814], 00:10:19.460 | 30.00th=[11076], 40.00th=[12518], 50.00th=[12911], 60.00th=[13566], 00:10:19.460 | 70.00th=[14353], 80.00th=[15926], 90.00th=[20317], 95.00th=[21365], 00:10:19.460 | 99.00th=[25822], 99.50th=[25822], 99.90th=[27919], 99.95th=[27919], 00:10:19.460 | 99.99th=[37487] 00:10:19.460 write: IOPS=4394, BW=17.2MiB/s (18.0MB/s)(17.3MiB/1010msec); 0 zone resets 00:10:19.460 slat (usec): min=3, max=44976, avg=111.09, stdev=918.73 00:10:19.460 clat (usec): min=988, max=92387, avg=14281.34, stdev=7560.38 00:10:19.460 lat (usec): min=1019, max=92400, avg=14392.44, stdev=7684.84 00:10:19.460 clat percentiles (usec): 00:10:19.460 | 1.00th=[ 3982], 5.00th=[ 6128], 10.00th=[ 7570], 20.00th=[10028], 00:10:19.460 | 30.00th=[10945], 40.00th=[11338], 50.00th=[11600], 60.00th=[12387], 00:10:19.460 | 70.00th=[15926], 80.00th=[19530], 90.00th=[22414], 95.00th=[28181], 00:10:19.460 | 99.00th=[34341], 99.50th=[35914], 99.90th=[92799], 99.95th=[92799], 00:10:19.460 | 99.99th=[92799] 00:10:19.460 bw ( KiB/s): min=16384, max=18104, per=29.12%, avg=17244.00, stdev=1216.22, samples=2 00:10:19.460 iops : min= 4096, max= 4526, avg=4311.00, stdev=304.06, samples=2 00:10:19.460 lat (usec) : 1000=0.01% 00:10:19.460 lat (msec) : 2=0.07%, 4=0.53%, 10=17.53%, 20=68.41%, 50=13.26% 00:10:19.460 lat (msec) : 100=0.19% 00:10:19.460 cpu : usr=3.07%, sys=8.03%, ctx=328, majf=0, minf=1 00:10:19.460 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:19.460 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.460 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:19.460 issued rwts: total=4096,4438,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:19.460 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:19.460 job1: (groupid=0, jobs=1): err= 0: pid=3736325: Sat Nov 2 11:22:19 2024 00:10:19.460 read: IOPS=3585, BW=14.0MiB/s (14.7MB/s)(14.1MiB/1006msec) 00:10:19.460 slat (nsec): min=1950, max=14325k, avg=139473.62, stdev=839507.96 00:10:19.460 clat (usec): min=1100, max=46388, avg=16540.44, stdev=4645.14 00:10:19.460 lat (usec): min=6366, max=46396, avg=16679.92, stdev=4722.48 00:10:19.461 clat percentiles (usec): 00:10:19.461 | 1.00th=[ 8848], 5.00th=[11207], 10.00th=[11469], 20.00th=[12649], 00:10:19.461 | 30.00th=[13698], 40.00th=[15008], 50.00th=[15664], 60.00th=[16712], 00:10:19.461 | 70.00th=[18220], 80.00th=[19792], 90.00th=[23725], 95.00th=[24773], 00:10:19.461 | 99.00th=[28181], 99.50th=[31851], 99.90th=[41681], 99.95th=[41681], 00:10:19.461 | 99.99th=[46400] 00:10:19.461 write: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec); 0 zone resets 00:10:19.461 slat (usec): min=2, 
max=8018, avg=113.58, stdev=664.05 00:10:19.461 clat (usec): min=6539, max=54682, avg=16506.81, stdev=7950.07 00:10:19.461 lat (usec): min=6545, max=54696, avg=16620.40, stdev=8000.00 00:10:19.461 clat percentiles (usec): 00:10:19.461 | 1.00th=[ 7111], 5.00th=[10552], 10.00th=[11338], 20.00th=[11863], 00:10:19.461 | 30.00th=[12780], 40.00th=[13042], 50.00th=[13566], 60.00th=[14484], 00:10:19.461 | 70.00th=[15795], 80.00th=[18482], 90.00th=[27395], 95.00th=[35390], 00:10:19.461 | 99.00th=[49546], 99.50th=[52167], 99.90th=[54264], 99.95th=[54264], 00:10:19.461 | 99.99th=[54789] 00:10:19.461 bw ( KiB/s): min=15872, max=16056, per=26.96%, avg=15964.00, stdev=130.11, samples=2 00:10:19.461 iops : min= 3968, max= 4014, avg=3991.00, stdev=32.53, samples=2 00:10:19.461 lat (msec) : 2=0.01%, 10=3.17%, 20=77.97%, 50=18.36%, 100=0.49% 00:10:19.461 cpu : usr=4.88%, sys=6.27%, ctx=308, majf=0, minf=1 00:10:19.461 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:19.461 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.461 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:19.461 issued rwts: total=3607,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:19.461 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:19.461 job2: (groupid=0, jobs=1): err= 0: pid=3736326: Sat Nov 2 11:22:19 2024 00:10:19.461 read: IOPS=2542, BW=9.93MiB/s (10.4MB/s)(10.0MiB/1007msec) 00:10:19.461 slat (usec): min=2, max=16388, avg=217.74, stdev=1155.20 00:10:19.461 clat (usec): min=2576, max=59866, avg=26926.02, stdev=12371.57 00:10:19.461 lat (usec): min=2584, max=59870, avg=27143.75, stdev=12447.40 00:10:19.461 clat percentiles (usec): 00:10:19.461 | 1.00th=[ 6521], 5.00th=[ 7635], 10.00th=[12125], 20.00th=[15401], 00:10:19.461 | 30.00th=[19792], 40.00th=[22938], 50.00th=[25560], 60.00th=[29492], 00:10:19.461 | 70.00th=[32900], 80.00th=[35914], 90.00th=[45351], 95.00th=[52167], 00:10:19.461 | 99.00th=[58459], 99.50th=[58459], 99.90th=[60031], 99.95th=[60031], 00:10:19.461 | 99.99th=[60031] 00:10:19.461 write: IOPS=2941, BW=11.5MiB/s (12.0MB/s)(11.6MiB/1007msec); 0 zone resets 00:10:19.461 slat (usec): min=3, max=14448, avg=143.58, stdev=915.57 00:10:19.461 clat (usec): min=3079, max=50687, avg=19682.89, stdev=7794.46 00:10:19.461 lat (usec): min=5928, max=50701, avg=19826.47, stdev=7822.18 00:10:19.461 clat percentiles (usec): 00:10:19.461 | 1.00th=[ 6652], 5.00th=[10028], 10.00th=[11731], 20.00th=[13042], 00:10:19.461 | 30.00th=[13435], 40.00th=[17433], 50.00th=[18220], 60.00th=[20055], 00:10:19.461 | 70.00th=[23200], 80.00th=[25035], 90.00th=[28705], 95.00th=[35914], 00:10:19.461 | 99.00th=[47973], 99.50th=[50594], 99.90th=[50594], 99.95th=[50594], 00:10:19.461 | 99.99th=[50594] 00:10:19.461 bw ( KiB/s): min=10384, max=12288, per=19.14%, avg=11336.00, stdev=1346.33, samples=2 00:10:19.461 iops : min= 2596, max= 3072, avg=2834.00, stdev=336.58, samples=2 00:10:19.461 lat (msec) : 4=0.45%, 10=5.99%, 20=39.97%, 50=50.25%, 100=3.33% 00:10:19.461 cpu : usr=2.78%, sys=3.28%, ctx=244, majf=0, minf=1 00:10:19.461 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:10:19.461 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.461 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:19.461 issued rwts: total=2560,2962,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:19.461 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:19.461 job3: (groupid=0, 
jobs=1): err= 0: pid=3736327: Sat Nov 2 11:22:19 2024 00:10:19.461 read: IOPS=3050, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1007msec) 00:10:19.461 slat (usec): min=3, max=12169, avg=152.68, stdev=866.59 00:10:19.461 clat (usec): min=7291, max=39365, avg=20122.45, stdev=6482.64 00:10:19.461 lat (usec): min=7298, max=39384, avg=20275.13, stdev=6502.78 00:10:19.461 clat percentiles (usec): 00:10:19.461 | 1.00th=[10421], 5.00th=[11600], 10.00th=[13173], 20.00th=[14615], 00:10:19.461 | 30.00th=[15533], 40.00th=[17171], 50.00th=[19268], 60.00th=[20317], 00:10:19.461 | 70.00th=[22414], 80.00th=[25560], 90.00th=[29754], 95.00th=[33162], 00:10:19.461 | 99.00th=[39060], 99.50th=[39060], 99.90th=[39584], 99.95th=[39584], 00:10:19.461 | 99.99th=[39584] 00:10:19.461 write: IOPS=3432, BW=13.4MiB/s (14.1MB/s)(13.5MiB/1007msec); 0 zone resets 00:10:19.461 slat (usec): min=4, max=22312, avg=143.43, stdev=890.18 00:10:19.461 clat (usec): min=598, max=43793, avg=18247.74, stdev=7313.36 00:10:19.461 lat (usec): min=771, max=43806, avg=18391.17, stdev=7366.45 00:10:19.461 clat percentiles (usec): 00:10:19.461 | 1.00th=[ 7111], 5.00th=[10814], 10.00th=[12780], 20.00th=[13698], 00:10:19.461 | 30.00th=[14353], 40.00th=[14615], 50.00th=[15533], 60.00th=[16581], 00:10:19.461 | 70.00th=[20055], 80.00th=[22676], 90.00th=[28705], 95.00th=[34341], 00:10:19.461 | 99.00th=[42730], 99.50th=[43254], 99.90th=[43779], 99.95th=[43779], 00:10:19.461 | 99.99th=[43779] 00:10:19.461 bw ( KiB/s): min=11848, max=14784, per=22.49%, avg=13316.00, stdev=2076.07, samples=2 00:10:19.461 iops : min= 2962, max= 3696, avg=3329.00, stdev=519.02, samples=2 00:10:19.461 lat (usec) : 750=0.03%, 1000=0.02% 00:10:19.461 lat (msec) : 2=0.12%, 10=2.31%, 20=62.00%, 50=35.52% 00:10:19.461 cpu : usr=4.77%, sys=7.85%, ctx=307, majf=0, minf=1 00:10:19.461 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:10:19.461 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.461 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:19.461 issued rwts: total=3072,3457,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:19.461 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:19.461 00:10:19.461 Run status group 0 (all jobs): 00:10:19.461 READ: bw=51.6MiB/s (54.1MB/s), 9.93MiB/s-15.8MiB/s (10.4MB/s-16.6MB/s), io=52.1MiB (54.6MB), run=1006-1010msec 00:10:19.461 WRITE: bw=57.8MiB/s (60.6MB/s), 11.5MiB/s-17.2MiB/s (12.0MB/s-18.0MB/s), io=58.4MiB (61.2MB), run=1006-1010msec 00:10:19.461 00:10:19.461 Disk stats (read/write): 00:10:19.461 nvme0n1: ios=3613/3607, merge=0/0, ticks=30548/29138, in_queue=59686, util=97.49% 00:10:19.461 nvme0n2: ios=3121/3142, merge=0/0, ticks=26387/23702, in_queue=50089, util=91.88% 00:10:19.461 nvme0n3: ios=2101/2407, merge=0/0, ticks=18992/14409, in_queue=33401, util=97.50% 00:10:19.461 nvme0n4: ios=2708/3072, merge=0/0, ticks=27210/29632, in_queue=56842, util=98.32% 00:10:19.461 11:22:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:19.461 [global] 00:10:19.461 thread=1 00:10:19.461 invalidate=1 00:10:19.461 rw=randwrite 00:10:19.461 time_based=1 00:10:19.461 runtime=1 00:10:19.461 ioengine=libaio 00:10:19.461 direct=1 00:10:19.461 bs=4096 00:10:19.461 iodepth=128 00:10:19.461 norandommap=0 00:10:19.461 numjobs=1 00:10:19.461 00:10:19.461 verify_dump=1 00:10:19.461 verify_backlog=512 00:10:19.461 verify_state_save=0 
00:10:19.461 do_verify=1 00:10:19.461 verify=crc32c-intel 00:10:19.461 [job0] 00:10:19.461 filename=/dev/nvme0n1 00:10:19.461 [job1] 00:10:19.461 filename=/dev/nvme0n2 00:10:19.461 [job2] 00:10:19.461 filename=/dev/nvme0n3 00:10:19.461 [job3] 00:10:19.461 filename=/dev/nvme0n4 00:10:19.461 Could not set queue depth (nvme0n1) 00:10:19.461 Could not set queue depth (nvme0n2) 00:10:19.461 Could not set queue depth (nvme0n3) 00:10:19.461 Could not set queue depth (nvme0n4) 00:10:19.718 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:19.719 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:19.719 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:19.719 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:19.719 fio-3.35 00:10:19.719 Starting 4 threads 00:10:21.091 00:10:21.091 job0: (groupid=0, jobs=1): err= 0: pid=3736559: Sat Nov 2 11:22:21 2024 00:10:21.091 read: IOPS=5059, BW=19.8MiB/s (20.7MB/s)(20.0MiB/1012msec) 00:10:21.091 slat (usec): min=2, max=18812, avg=91.25, stdev=756.65 00:10:21.091 clat (usec): min=1200, max=59388, avg=13600.67, stdev=6264.56 00:10:21.091 lat (usec): min=1211, max=59396, avg=13691.92, stdev=6290.73 00:10:21.091 clat percentiles (usec): 00:10:21.091 | 1.00th=[ 5276], 5.00th=[ 5932], 10.00th=[ 9241], 20.00th=[10028], 00:10:21.091 | 30.00th=[10683], 40.00th=[11076], 50.00th=[11469], 60.00th=[12125], 00:10:21.091 | 70.00th=[13435], 80.00th=[16909], 90.00th=[21890], 95.00th=[28443], 00:10:21.091 | 99.00th=[36963], 99.50th=[36963], 99.90th=[45351], 99.95th=[45351], 00:10:21.091 | 99.99th=[59507] 00:10:21.091 write: IOPS=5097, BW=19.9MiB/s (20.9MB/s)(20.2MiB/1012msec); 0 zone resets 00:10:21.091 slat (usec): min=3, max=10823, avg=75.25, stdev=591.61 00:10:21.091 clat (usec): min=874, max=27031, avg=11391.36, stdev=3283.11 00:10:21.091 lat (usec): min=883, max=27042, avg=11466.61, stdev=3323.72 00:10:21.091 clat percentiles (usec): 00:10:21.091 | 1.00th=[ 5014], 5.00th=[ 6521], 10.00th=[ 7177], 20.00th=[ 8979], 00:10:21.091 | 30.00th=[10290], 40.00th=[10683], 50.00th=[11076], 60.00th=[11600], 00:10:21.091 | 70.00th=[12125], 80.00th=[12518], 90.00th=[15270], 95.00th=[17695], 00:10:21.091 | 99.00th=[20841], 99.50th=[22414], 99.90th=[22414], 99.95th=[23200], 00:10:21.091 | 99.99th=[27132] 00:10:21.091 bw ( KiB/s): min=20480, max=20480, per=32.82%, avg=20480.00, stdev= 0.00, samples=2 00:10:21.091 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:10:21.091 lat (usec) : 1000=0.02% 00:10:21.091 lat (msec) : 2=0.05%, 4=0.30%, 10=22.19%, 20=68.17%, 50=9.26% 00:10:21.091 lat (msec) : 100=0.01% 00:10:21.091 cpu : usr=4.85%, sys=9.20%, ctx=315, majf=0, minf=1 00:10:21.091 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:21.091 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:21.091 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:21.091 issued rwts: total=5120,5159,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:21.091 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:21.091 job1: (groupid=0, jobs=1): err= 0: pid=3736560: Sat Nov 2 11:22:21 2024 00:10:21.091 read: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec) 00:10:21.091 slat (usec): min=2, max=11711, avg=112.73, stdev=774.61 00:10:21.091 clat (usec): 
min=6880, max=45155, avg=15448.99, stdev=6589.41 00:10:21.091 lat (usec): min=6884, max=45169, avg=15561.73, stdev=6660.11 00:10:21.091 clat percentiles (usec): 00:10:21.091 | 1.00th=[ 8848], 5.00th=[11469], 10.00th=[11731], 20.00th=[11994], 00:10:21.091 | 30.00th=[12125], 40.00th=[12387], 50.00th=[12649], 60.00th=[12911], 00:10:21.091 | 70.00th=[13960], 80.00th=[16581], 90.00th=[28705], 95.00th=[33424], 00:10:21.091 | 99.00th=[36439], 99.50th=[39060], 99.90th=[40109], 99.95th=[40633], 00:10:21.091 | 99.99th=[45351] 00:10:21.091 write: IOPS=4464, BW=17.4MiB/s (18.3MB/s)(17.5MiB/1002msec); 0 zone resets 00:10:21.091 slat (usec): min=3, max=10762, avg=106.10, stdev=705.25 00:10:21.091 clat (usec): min=667, max=33670, avg=14139.78, stdev=4650.99 00:10:21.091 lat (usec): min=944, max=33684, avg=14245.88, stdev=4681.40 00:10:21.091 clat percentiles (usec): 00:10:21.091 | 1.00th=[ 4490], 5.00th=[ 8455], 10.00th=[10945], 20.00th=[11731], 00:10:21.091 | 30.00th=[11994], 40.00th=[12256], 50.00th=[12387], 60.00th=[12649], 00:10:21.091 | 70.00th=[13698], 80.00th=[17171], 90.00th=[21365], 95.00th=[24773], 00:10:21.091 | 99.00th=[27919], 99.50th=[28181], 99.90th=[31065], 99.95th=[33817], 00:10:21.091 | 99.99th=[33817] 00:10:21.091 bw ( KiB/s): min=16384, max=16384, per=26.25%, avg=16384.00, stdev= 0.00, samples=1 00:10:21.091 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:10:21.091 lat (usec) : 750=0.01%, 1000=0.11% 00:10:21.091 lat (msec) : 4=0.16%, 10=5.16%, 20=79.67%, 50=14.89% 00:10:21.091 cpu : usr=4.50%, sys=7.19%, ctx=253, majf=0, minf=1 00:10:21.091 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:21.091 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:21.091 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:21.091 issued rwts: total=4096,4473,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:21.091 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:21.091 job2: (groupid=0, jobs=1): err= 0: pid=3736561: Sat Nov 2 11:22:21 2024 00:10:21.091 read: IOPS=2572, BW=10.0MiB/s (10.5MB/s)(10.5MiB/1044msec) 00:10:21.091 slat (usec): min=3, max=16669, avg=173.27, stdev=1042.65 00:10:21.091 clat (usec): min=10041, max=79529, avg=24182.76, stdev=12385.62 00:10:21.091 lat (usec): min=10050, max=84664, avg=24356.02, stdev=12468.18 00:10:21.091 clat percentiles (usec): 00:10:21.091 | 1.00th=[13173], 5.00th=[13435], 10.00th=[13566], 20.00th=[15926], 00:10:21.091 | 30.00th=[18744], 40.00th=[19530], 50.00th=[20055], 60.00th=[21103], 00:10:21.091 | 70.00th=[23725], 80.00th=[27657], 90.00th=[42730], 95.00th=[56886], 00:10:21.091 | 99.00th=[67634], 99.50th=[69731], 99.90th=[79168], 99.95th=[79168], 00:10:21.091 | 99.99th=[79168] 00:10:21.091 write: IOPS=2942, BW=11.5MiB/s (12.1MB/s)(12.0MiB/1044msec); 0 zone resets 00:10:21.091 slat (usec): min=4, max=21606, avg=164.50, stdev=996.99 00:10:21.091 clat (usec): min=10010, max=75407, avg=21679.15, stdev=10297.63 00:10:21.091 lat (usec): min=10019, max=75428, avg=21843.65, stdev=10379.27 00:10:21.091 clat percentiles (usec): 00:10:21.091 | 1.00th=[10421], 5.00th=[12649], 10.00th=[13173], 20.00th=[14222], 00:10:21.091 | 30.00th=[15008], 40.00th=[17433], 50.00th=[19530], 60.00th=[21103], 00:10:21.091 | 70.00th=[22938], 80.00th=[27395], 90.00th=[33162], 95.00th=[35390], 00:10:21.091 | 99.00th=[68682], 99.50th=[70779], 99.90th=[74974], 99.95th=[74974], 00:10:21.091 | 99.99th=[74974] 00:10:21.091 bw ( KiB/s): min=12272, max=12288, per=19.68%, avg=12280.00, 
stdev=11.31, samples=2 00:10:21.091 iops : min= 3068, max= 3072, avg=3070.00, stdev= 2.83, samples=2 00:10:21.091 lat (msec) : 20=49.50%, 50=45.66%, 100=4.85% 00:10:21.091 cpu : usr=3.84%, sys=6.23%, ctx=299, majf=0, minf=1 00:10:21.091 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:10:21.091 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:21.091 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:21.091 issued rwts: total=2686,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:21.091 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:21.091 job3: (groupid=0, jobs=1): err= 0: pid=3736562: Sat Nov 2 11:22:21 2024 00:10:21.091 read: IOPS=3341, BW=13.1MiB/s (13.7MB/s)(13.2MiB/1012msec) 00:10:21.091 slat (usec): min=2, max=14591, avg=136.64, stdev=887.80 00:10:21.092 clat (usec): min=5898, max=33531, avg=17586.36, stdev=4252.22 00:10:21.092 lat (usec): min=9579, max=37425, avg=17723.00, stdev=4304.17 00:10:21.092 clat percentiles (usec): 00:10:21.092 | 1.00th=[11338], 5.00th=[12911], 10.00th=[13304], 20.00th=[14484], 00:10:21.092 | 30.00th=[15008], 40.00th=[15401], 50.00th=[16057], 60.00th=[16909], 00:10:21.092 | 70.00th=[18744], 80.00th=[21103], 90.00th=[24511], 95.00th=[26084], 00:10:21.092 | 99.00th=[29754], 99.50th=[29754], 99.90th=[29754], 99.95th=[32900], 00:10:21.092 | 99.99th=[33424] 00:10:21.092 write: IOPS=3541, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1012msec); 0 zone resets 00:10:21.092 slat (usec): min=3, max=15761, avg=135.72, stdev=764.92 00:10:21.092 clat (usec): min=642, max=84773, avg=19099.98, stdev=11808.29 00:10:21.092 lat (usec): min=656, max=84779, avg=19235.70, stdev=11890.80 00:10:21.092 clat percentiles (usec): 00:10:21.092 | 1.00th=[ 1516], 5.00th=[ 6456], 10.00th=[11731], 20.00th=[15008], 00:10:21.092 | 30.00th=[15401], 40.00th=[15795], 50.00th=[16188], 60.00th=[17171], 00:10:21.092 | 70.00th=[18744], 80.00th=[20317], 90.00th=[25297], 95.00th=[46924], 00:10:21.092 | 99.00th=[76022], 99.50th=[83362], 99.90th=[84411], 99.95th=[84411], 00:10:21.092 | 99.99th=[84411] 00:10:21.092 bw ( KiB/s): min=12400, max=16272, per=22.97%, avg=14336.00, stdev=2737.92, samples=2 00:10:21.092 iops : min= 3100, max= 4068, avg=3584.00, stdev=684.48, samples=2 00:10:21.092 lat (usec) : 750=0.04%, 1000=0.20% 00:10:21.092 lat (msec) : 2=0.30%, 4=0.55%, 10=4.12%, 20=69.77%, 50=22.78% 00:10:21.092 lat (msec) : 100=2.24% 00:10:21.092 cpu : usr=3.76%, sys=5.44%, ctx=349, majf=0, minf=1 00:10:21.092 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:10:21.092 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:21.092 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:21.092 issued rwts: total=3382,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:21.092 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:21.092 00:10:21.092 Run status group 0 (all jobs): 00:10:21.092 READ: bw=57.2MiB/s (60.0MB/s), 10.0MiB/s-19.8MiB/s (10.5MB/s-20.7MB/s), io=59.7MiB (62.6MB), run=1002-1044msec 00:10:21.092 WRITE: bw=60.9MiB/s (63.9MB/s), 11.5MiB/s-19.9MiB/s (12.1MB/s-20.9MB/s), io=63.6MiB (66.7MB), run=1002-1044msec 00:10:21.092 00:10:21.092 Disk stats (read/write): 00:10:21.092 nvme0n1: ios=4364/4608, merge=0/0, ticks=47394/44312, in_queue=91706, util=93.39% 00:10:21.092 nvme0n2: ios=3423/3584, merge=0/0, ticks=25094/25745, in_queue=50839, util=89.01% 00:10:21.092 nvme0n3: ios=2531/2560, merge=0/0, ticks=26875/26063, in_queue=52938, 
util=98.95% 00:10:21.092 nvme0n4: ios=2611/2999, merge=0/0, ticks=16536/22304, in_queue=38840, util=99.37% 00:10:21.092 11:22:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:21.092 11:22:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3736753 00:10:21.092 11:22:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:21.092 11:22:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:21.092 [global] 00:10:21.092 thread=1 00:10:21.092 invalidate=1 00:10:21.092 rw=read 00:10:21.092 time_based=1 00:10:21.092 runtime=10 00:10:21.092 ioengine=libaio 00:10:21.092 direct=1 00:10:21.092 bs=4096 00:10:21.092 iodepth=1 00:10:21.092 norandommap=1 00:10:21.092 numjobs=1 00:10:21.092 00:10:21.092 [job0] 00:10:21.092 filename=/dev/nvme0n1 00:10:21.092 [job1] 00:10:21.092 filename=/dev/nvme0n2 00:10:21.092 [job2] 00:10:21.092 filename=/dev/nvme0n3 00:10:21.092 [job3] 00:10:21.092 filename=/dev/nvme0n4 00:10:21.092 Could not set queue depth (nvme0n1) 00:10:21.092 Could not set queue depth (nvme0n2) 00:10:21.092 Could not set queue depth (nvme0n3) 00:10:21.092 Could not set queue depth (nvme0n4) 00:10:21.092 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:21.092 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:21.092 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:21.092 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:21.092 fio-3.35 00:10:21.092 Starting 4 threads 00:10:24.370 11:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:24.370 11:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:24.370 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=311296, buflen=4096 00:10:24.370 fio: pid=3736911, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:24.370 11:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:24.370 11:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:24.370 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=15876096, buflen=4096 00:10:24.370 fio: pid=3736910, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:24.935 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:24.935 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:24.935 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=438272, buflen=4096 00:10:24.935 fio: pid=3736908, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:25.193 fio: io_u error on file 
/dev/nvme0n2: Operation not supported: read offset=4059136, buflen=4096 00:10:25.193 fio: pid=3736909, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:25.193 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:25.193 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:25.193 00:10:25.193 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3736908: Sat Nov 2 11:22:25 2024 00:10:25.193 read: IOPS=31, BW=123KiB/s (126kB/s)(428KiB/3468msec) 00:10:25.193 slat (usec): min=8, max=12402, avg=192.60, stdev=1311.90 00:10:25.193 clat (usec): min=394, max=43035, avg=31960.39, stdev=16972.95 00:10:25.193 lat (usec): min=428, max=53557, avg=32154.62, stdev=17112.53 00:10:25.193 clat percentiles (usec): 00:10:25.193 | 1.00th=[ 404], 5.00th=[ 515], 10.00th=[ 537], 20.00th=[ 627], 00:10:25.193 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:25.193 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:25.193 | 99.00th=[41681], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:10:25.193 | 99.99th=[43254] 00:10:25.193 bw ( KiB/s): min= 96, max= 184, per=2.39%, avg=128.00, stdev=33.56, samples=6 00:10:25.193 iops : min= 24, max= 46, avg=32.00, stdev= 8.39, samples=6 00:10:25.193 lat (usec) : 500=3.70%, 750=18.52% 00:10:25.193 lat (msec) : 50=76.85% 00:10:25.193 cpu : usr=0.00%, sys=0.14%, ctx=112, majf=0, minf=2 00:10:25.193 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:25.193 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.193 complete : 0=0.9%, 4=99.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.193 issued rwts: total=108,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:25.193 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:25.193 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3736909: Sat Nov 2 11:22:25 2024 00:10:25.193 read: IOPS=263, BW=1053KiB/s (1078kB/s)(3964KiB/3766msec) 00:10:25.193 slat (usec): min=7, max=25823, avg=47.64, stdev=866.04 00:10:25.193 clat (usec): min=255, max=41996, avg=3738.89, stdev=11340.96 00:10:25.193 lat (usec): min=263, max=66989, avg=3786.57, stdev=11490.48 00:10:25.193 clat percentiles (usec): 00:10:25.193 | 1.00th=[ 258], 5.00th=[ 265], 10.00th=[ 269], 20.00th=[ 273], 00:10:25.193 | 30.00th=[ 281], 40.00th=[ 285], 50.00th=[ 293], 60.00th=[ 297], 00:10:25.193 | 70.00th=[ 302], 80.00th=[ 310], 90.00th=[ 326], 95.00th=[41157], 00:10:25.193 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:10:25.193 | 99.99th=[42206] 00:10:25.193 bw ( KiB/s): min= 93, max= 4352, per=20.97%, avg=1125.29, stdev=1798.70, samples=7 00:10:25.193 iops : min= 23, max= 1088, avg=281.29, stdev=449.70, samples=7 00:10:25.193 lat (usec) : 500=91.33%, 750=0.10% 00:10:25.193 lat (msec) : 50=8.47% 00:10:25.193 cpu : usr=0.13%, sys=0.58%, ctx=995, majf=0, minf=2 00:10:25.193 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:25.193 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.193 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.193 issued rwts: total=992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:10:25.193 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:25.194 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3736910: Sat Nov 2 11:22:25 2024 00:10:25.194 read: IOPS=1215, BW=4862KiB/s (4978kB/s)(15.1MiB/3189msec) 00:10:25.194 slat (usec): min=4, max=5859, avg=13.54, stdev=94.16 00:10:25.194 clat (usec): min=241, max=41115, avg=799.18, stdev=4255.52 00:10:25.194 lat (usec): min=247, max=46975, avg=812.72, stdev=4272.04 00:10:25.194 clat percentiles (usec): 00:10:25.194 | 1.00th=[ 255], 5.00th=[ 265], 10.00th=[ 273], 20.00th=[ 285], 00:10:25.194 | 30.00th=[ 310], 40.00th=[ 318], 50.00th=[ 334], 60.00th=[ 347], 00:10:25.194 | 70.00th=[ 359], 80.00th=[ 383], 90.00th=[ 486], 95.00th=[ 529], 00:10:25.194 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:25.194 | 99.99th=[41157] 00:10:25.194 bw ( KiB/s): min= 96, max=11552, per=96.24%, avg=5162.67, stdev=5597.68, samples=6 00:10:25.194 iops : min= 24, max= 2888, avg=1290.67, stdev=1399.42, samples=6 00:10:25.194 lat (usec) : 250=0.23%, 500=91.00%, 750=7.56%, 1000=0.03% 00:10:25.194 lat (msec) : 2=0.05%, 50=1.11% 00:10:25.194 cpu : usr=0.91%, sys=1.98%, ctx=3878, majf=0, minf=2 00:10:25.194 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:25.194 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.194 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.194 issued rwts: total=3877,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:25.194 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:25.194 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3736911: Sat Nov 2 11:22:25 2024 00:10:25.194 read: IOPS=26, BW=105KiB/s (107kB/s)(304KiB/2900msec) 00:10:25.194 slat (nsec): min=9951, max=55776, avg=22209.70, stdev=10500.98 00:10:25.194 clat (usec): min=505, max=41357, avg=37812.88, stdev=10962.33 00:10:25.194 lat (usec): min=537, max=41390, avg=37834.89, stdev=10959.08 00:10:25.194 clat percentiles (usec): 00:10:25.194 | 1.00th=[ 506], 5.00th=[ 586], 10.00th=[40633], 20.00th=[41157], 00:10:25.194 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:25.194 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:25.194 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:25.194 | 99.99th=[41157] 00:10:25.194 bw ( KiB/s): min= 96, max= 128, per=1.96%, avg=105.60, stdev=13.15, samples=5 00:10:25.194 iops : min= 24, max= 32, avg=26.40, stdev= 3.29, samples=5 00:10:25.194 lat (usec) : 750=6.49%, 1000=1.30% 00:10:25.194 lat (msec) : 50=90.91% 00:10:25.194 cpu : usr=0.00%, sys=0.10%, ctx=77, majf=0, minf=1 00:10:25.194 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:25.194 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.194 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.194 issued rwts: total=77,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:25.194 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:25.194 00:10:25.194 Run status group 0 (all jobs): 00:10:25.194 READ: bw=5364KiB/s (5493kB/s), 105KiB/s-4862KiB/s (107kB/s-4978kB/s), io=19.7MiB (20.7MB), run=2900-3766msec 00:10:25.194 00:10:25.194 Disk stats (read/write): 00:10:25.194 nvme0n1: ios=145/0, merge=0/0, ticks=4403/0, in_queue=4403, util=99.43% 00:10:25.194 nvme0n2: ios=987/0, 
merge=0/0, ticks=3535/0, in_queue=3535, util=95.69% 00:10:25.194 nvme0n3: ios=3874/0, merge=0/0, ticks=2991/0, in_queue=2991, util=96.63% 00:10:25.194 nvme0n4: ios=75/0, merge=0/0, ticks=2834/0, in_queue=2834, util=96.75% 00:10:25.452 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:25.452 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:25.710 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:25.710 11:22:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:25.968 11:22:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:25.968 11:22:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:26.226 11:22:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:26.226 11:22:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:26.484 11:22:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:26.484 11:22:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 3736753 00:10:26.484 11:22:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:26.484 11:22:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:26.484 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:26.484 11:22:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:26.484 11:22:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:10:26.484 11:22:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:10:26.484 11:22:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:26.742 11:22:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:10:26.742 11:22:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:26.742 11:22:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:10:26.742 11:22:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:26.742 11:22:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:26.742 nvmf hotplug test: fio failed as expected 00:10:26.742 11:22:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:27.000 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:27.000 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:27.000 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:27.000 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:27.000 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:27.000 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:27.000 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:27.000 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:27.000 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:27.000 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:27.000 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:27.000 rmmod nvme_tcp 00:10:27.000 rmmod nvme_fabrics 00:10:27.000 rmmod nvme_keyring 00:10:27.000 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:27.000 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:27.000 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:27.000 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3734725 ']' 00:10:27.000 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3734725 00:10:27.000 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 3734725 ']' 00:10:27.000 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 3734725 00:10:27.000 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:10:27.000 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:27.000 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3734725 00:10:27.000 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:27.000 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:27.000 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3734725' 00:10:27.000 killing process with pid 3734725 00:10:27.000 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 3734725 00:10:27.000 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 3734725 00:10:27.259 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:27.259 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:27.259 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:27.259 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:27.259 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@791 -- # iptables-save 00:10:27.259 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:27.259 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:10:27.259 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:27.259 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:27.259 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:27.259 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:27.259 11:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:29.162 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:29.162 00:10:29.162 real 0m24.066s 00:10:29.162 user 1m25.033s 00:10:29.162 sys 0m6.463s 00:10:29.162 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:29.162 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:29.162 ************************************ 00:10:29.162 END TEST nvmf_fio_target 00:10:29.162 ************************************ 00:10:29.162 11:22:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:29.162 11:22:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:29.162 11:22:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:29.162 11:22:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:29.420 ************************************ 00:10:29.420 START TEST nvmf_bdevio 00:10:29.420 ************************************ 00:10:29.420 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:29.420 * Looking for test storage... 
00:10:29.420 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:29.420 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:29.420 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:10:29.420 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:29.420 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:29.420 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:29.420 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:29.420 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:29.420 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:29.420 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:29.420 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:29.420 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:29.420 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:29.420 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:29.420 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:29.420 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:29.420 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:29.420 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:29.420 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:29.421 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:29.421 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:29.421 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:29.421 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:29.421 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:29.421 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:29.421 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:29.421 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:29.421 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:29.421 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:29.421 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:29.421 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:29.421 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:29.421 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:29.421 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:29.421 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:29.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.421 --rc genhtml_branch_coverage=1 00:10:29.421 --rc genhtml_function_coverage=1 00:10:29.421 --rc genhtml_legend=1 00:10:29.421 --rc geninfo_all_blocks=1 00:10:29.421 --rc geninfo_unexecuted_blocks=1 00:10:29.421 00:10:29.421 ' 00:10:29.421 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:29.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.421 --rc genhtml_branch_coverage=1 00:10:29.421 --rc genhtml_function_coverage=1 00:10:29.421 --rc genhtml_legend=1 00:10:29.421 --rc geninfo_all_blocks=1 00:10:29.421 --rc geninfo_unexecuted_blocks=1 00:10:29.421 00:10:29.421 ' 00:10:29.421 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:29.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.421 --rc genhtml_branch_coverage=1 00:10:29.421 --rc genhtml_function_coverage=1 00:10:29.421 --rc genhtml_legend=1 00:10:29.421 --rc geninfo_all_blocks=1 00:10:29.421 --rc geninfo_unexecuted_blocks=1 00:10:29.421 00:10:29.421 ' 00:10:29.421 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:29.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.421 --rc genhtml_branch_coverage=1 00:10:29.421 --rc genhtml_function_coverage=1 00:10:29.421 --rc genhtml_legend=1 00:10:29.421 --rc geninfo_all_blocks=1 00:10:29.421 --rc geninfo_unexecuted_blocks=1 00:10:29.421 00:10:29.421 ' 00:10:29.421 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:29.421 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:29.421 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:29.421 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:29.421 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:29.421 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:29.421 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:29.421 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:29.421 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:29.421 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:29.421 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:29.421 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:29.421 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:29.421 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:29.421 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:29.421 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:29.421 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:29.421 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:29.421 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:29.421 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:29.421 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:29.421 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:29.421 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:29.421 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.421 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.421 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.421 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:29.421 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.421 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:29.421 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:29.421 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:29.421 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:29.421 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:29.421 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:29.421 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:29.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:29.421 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:29.421 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:29.421 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:29.421 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:29.421 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:29.421 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:10:29.421 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:29.421 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:29.421 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:29.421 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:29.421 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:29.421 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:29.421 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:29.421 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:29.421 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:29.421 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:29.421 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:10:29.421 11:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:31.953 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:31.953 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:10:31.953 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:31.953 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:31.953 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:31.953 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:31.953 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:31.953 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:10:31.953 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:31.953 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:10:31.953 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:10:31.953 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:10:31.953 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:10:31.953 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:10:31.953 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:10:31.953 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:31.953 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:31.953 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:31.953 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:31.953 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:31.953 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:31.953 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:31.953 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:31.953 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:31.953 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:31.953 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:31.953 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:31.953 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:31.953 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:31.953 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:31.953 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:31.953 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:31.953 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:31.953 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:31.953 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:31.953 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:31.953 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:31.953 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:31.953 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:31.953 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:31.953 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:31.953 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:31.953 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:31.953 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:31.953 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:31.953 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:31.953 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:31.953 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:31.953 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:31.953 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:31.953 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:31.953 11:22:31 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:31.953 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:31.953 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:31.953 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:31.953 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:31.953 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:31.953 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:31.954 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:31.954 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:31.954 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:31.954 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:31.954 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:31.954 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:31.954 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:31.954 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:31.954 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:31.954 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:31.954 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:31.954 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:31.954 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:31.954 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:31.954 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:31.954 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:10:31.954 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:31.954 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:31.954 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:31.954 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:31.954 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:31.954 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:31.954 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:31.954 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:31.954 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:31.954 
11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:31.954 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:31.954 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:31.954 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:31.954 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:31.954 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:31.954 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:31.954 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:31.954 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:31.954 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:31.954 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:31.954 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:31.954 11:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:31.954 11:22:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:31.954 11:22:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:31.954 11:22:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:31.954 11:22:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:31.954 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:31.954 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.321 ms 00:10:31.954 00:10:31.954 --- 10.0.0.2 ping statistics --- 00:10:31.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:31.954 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:10:31.954 11:22:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:31.954 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:31.954 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:10:31.954 00:10:31.954 --- 10.0.0.1 ping statistics --- 00:10:31.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:31.954 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:10:31.954 11:22:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:31.954 11:22:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:10:31.954 11:22:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:31.954 11:22:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:31.954 11:22:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:31.954 11:22:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:31.954 11:22:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:31.954 11:22:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:31.954 11:22:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:31.954 11:22:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:31.954 11:22:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:31.954 11:22:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:31.954 11:22:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:31.954 11:22:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=3739561 00:10:31.954 11:22:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:31.954 11:22:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3739561 00:10:31.954 11:22:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 3739561 ']' 00:10:31.954 11:22:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:31.954 11:22:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:31.954 11:22:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:31.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:31.954 11:22:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:31.954 11:22:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:31.954 [2024-11-02 11:22:32.108197] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 
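[Annotation] Condensed from the xtrace above: the harness flushes both E810 ports, moves cvl_0_0 into a dedicated network namespace for the target while cvl_0_1 stays on the host as the initiator side, opens the NVMe/TCP port, checks the path with a ping in each direction, and only then launches nvmf_tgt inside the namespace. A rough standalone sketch, using the interface names, addresses, and paths from this run (not an official recipe):

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP on the host
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP in the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # the rule is tagged SPDK_NVMF so teardown can strip only these rules
    # via: iptables-save | grep -v SPDK_NVMF | iptables-restore
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                                   # host -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> host
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78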
00:10:31.954 [2024-11-02 11:22:32.108293] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:31.954 [2024-11-02 11:22:32.186520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:31.954 [2024-11-02 11:22:32.236268] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:31.954 [2024-11-02 11:22:32.236336] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:31.954 [2024-11-02 11:22:32.236353] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:31.954 [2024-11-02 11:22:32.236366] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:31.954 [2024-11-02 11:22:32.236377] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:31.954 [2024-11-02 11:22:32.238089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:31.954 [2024-11-02 11:22:32.238173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:31.954 [2024-11-02 11:22:32.238296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:31.954 [2024-11-02 11:22:32.238301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:32.213 11:22:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:32.213 11:22:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:10:32.213 11:22:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:32.213 11:22:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:32.213 11:22:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:32.213 11:22:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:32.213 11:22:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:32.213 11:22:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.213 11:22:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:32.213 [2024-11-02 11:22:32.384992] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:32.213 11:22:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.213 11:22:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:32.213 11:22:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.213 11:22:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:32.213 Malloc0 00:10:32.213 11:22:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.213 11:22:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:32.213 11:22:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.213 11:22:32 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:32.213 11:22:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.213 11:22:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:32.213 11:22:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.213 11:22:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:32.213 11:22:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.213 11:22:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:32.213 11:22:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.213 11:22:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:32.213 [2024-11-02 11:22:32.447168] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:32.213 11:22:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.213 11:22:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:32.213 11:22:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:32.213 11:22:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:10:32.213 11:22:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:10:32.213 11:22:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:32.213 11:22:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:32.213 { 00:10:32.213 "params": { 00:10:32.213 "name": "Nvme$subsystem", 00:10:32.213 "trtype": "$TEST_TRANSPORT", 00:10:32.213 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:32.213 "adrfam": "ipv4", 00:10:32.213 "trsvcid": "$NVMF_PORT", 00:10:32.213 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:32.213 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:32.213 "hdgst": ${hdgst:-false}, 00:10:32.213 "ddgst": ${ddgst:-false} 00:10:32.213 }, 00:10:32.213 "method": "bdev_nvme_attach_controller" 00:10:32.213 } 00:10:32.213 EOF 00:10:32.213 )") 00:10:32.213 11:22:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:10:32.213 11:22:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:10:32.213 11:22:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:10:32.213 11:22:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:32.213 "params": { 00:10:32.213 "name": "Nvme1", 00:10:32.213 "trtype": "tcp", 00:10:32.213 "traddr": "10.0.0.2", 00:10:32.213 "adrfam": "ipv4", 00:10:32.213 "trsvcid": "4420", 00:10:32.213 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:32.213 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:32.213 "hdgst": false, 00:10:32.213 "ddgst": false 00:10:32.213 }, 00:10:32.213 "method": "bdev_nvme_attach_controller" 00:10:32.213 }' 00:10:32.213 [2024-11-02 11:22:32.496808] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 
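[Annotation] Before the unit tests start, the target-side setup above amounts to five RPCs plus one generated initiator config. They are issued here through the rpc_cmd test wrapper against the nvmf_tgt running in cvl_0_0_ns_spdk; the equivalent direct scripts/rpc.py calls would look roughly like this (sizes and names as used in this run):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0                          # 64 MiB bdev, 512-byte blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevio itself never touches the RPC socket: it reads the JSON fed to --json /dev/fd/62, whose single entry is the bdev_nvme_attach_controller call printed by gen_nvmf_target_json above (name Nvme1, TCP to 10.0.0.2:4420, subnqn nqn.2016-06.io.spdk:cnode1, hostnqn nqn.2016-06.io.spdk:host1, header/data digests off). The suite below therefore exercises that single Nvme1n1 bdev, i.e. the 64 MiB Malloc0 exported over NVMe/TCP.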
00:10:32.213 [2024-11-02 11:22:32.496872] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3739585 ] 00:10:32.213 [2024-11-02 11:22:32.567392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:32.471 [2024-11-02 11:22:32.619116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:32.471 [2024-11-02 11:22:32.619168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:32.471 [2024-11-02 11:22:32.619171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.729 I/O targets: 00:10:32.729 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:32.729 00:10:32.729 00:10:32.729 CUnit - A unit testing framework for C - Version 2.1-3 00:10:32.729 http://cunit.sourceforge.net/ 00:10:32.729 00:10:32.729 00:10:32.729 Suite: bdevio tests on: Nvme1n1 00:10:32.729 Test: blockdev write read block ...passed 00:10:32.729 Test: blockdev write zeroes read block ...passed 00:10:32.729 Test: blockdev write zeroes read no split ...passed 00:10:32.729 Test: blockdev write zeroes read split ...passed 00:10:32.986 Test: blockdev write zeroes read split partial ...passed 00:10:32.986 Test: blockdev reset ...[2024-11-02 11:22:33.161240] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:10:32.986 [2024-11-02 11:22:33.161381] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b26ac0 (9): Bad file descriptor 00:10:32.986 [2024-11-02 11:22:33.179499] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:10:32.986 passed 00:10:32.986 Test: blockdev write read 8 blocks ...passed 00:10:32.986 Test: blockdev write read size > 128k ...passed 00:10:32.986 Test: blockdev write read invalid size ...passed 00:10:32.986 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:32.986 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:32.986 Test: blockdev write read max offset ...passed 00:10:32.986 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:32.986 Test: blockdev writev readv 8 blocks ...passed 00:10:32.986 Test: blockdev writev readv 30 x 1block ...passed 00:10:33.244 Test: blockdev writev readv block ...passed 00:10:33.244 Test: blockdev writev readv size > 128k ...passed 00:10:33.244 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:33.244 Test: blockdev comparev and writev ...[2024-11-02 11:22:33.396082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:33.244 [2024-11-02 11:22:33.396119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:33.244 [2024-11-02 11:22:33.396145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:33.244 [2024-11-02 11:22:33.396163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:33.244 [2024-11-02 11:22:33.396545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:33.244 [2024-11-02 11:22:33.396570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:33.244 [2024-11-02 11:22:33.396592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:33.244 [2024-11-02 11:22:33.396617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:33.244 [2024-11-02 11:22:33.396974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:33.244 [2024-11-02 11:22:33.396998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:33.244 [2024-11-02 11:22:33.397020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:33.244 [2024-11-02 11:22:33.397036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:33.244 [2024-11-02 11:22:33.397411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:33.244 [2024-11-02 11:22:33.397436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:33.244 [2024-11-02 11:22:33.397458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:33.244 [2024-11-02 11:22:33.397473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:33.244 passed 00:10:33.244 Test: blockdev nvme passthru rw ...passed 00:10:33.244 Test: blockdev nvme passthru vendor specific ...[2024-11-02 11:22:33.481570] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:33.244 [2024-11-02 11:22:33.481596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:33.244 [2024-11-02 11:22:33.481771] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:33.244 [2024-11-02 11:22:33.481795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:33.244 [2024-11-02 11:22:33.481967] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:33.244 [2024-11-02 11:22:33.481990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:33.244 [2024-11-02 11:22:33.482155] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:33.244 [2024-11-02 11:22:33.482179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:33.244 passed 00:10:33.244 Test: blockdev nvme admin passthru ...passed 00:10:33.244 Test: blockdev copy ...passed 00:10:33.244 00:10:33.244 Run Summary: Type Total Ran Passed Failed Inactive 00:10:33.244 suites 1 1 n/a 0 0 00:10:33.244 tests 23 23 23 0 0 00:10:33.244 asserts 152 152 152 0 n/a 00:10:33.244 00:10:33.244 Elapsed time = 1.143 seconds 00:10:33.503 11:22:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:33.503 11:22:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.503 11:22:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:33.503 11:22:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.503 11:22:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:33.503 11:22:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:33.503 11:22:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:33.503 11:22:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:33.503 11:22:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:33.503 11:22:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:33.503 11:22:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:33.503 11:22:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:33.503 rmmod nvme_tcp 00:10:33.503 rmmod nvme_fabrics 00:10:33.503 rmmod nvme_keyring 00:10:33.503 11:22:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:33.503 11:22:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:33.503 11:22:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
00:10:33.503 11:22:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 3739561 ']' 00:10:33.503 11:22:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3739561 00:10:33.503 11:22:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' -z 3739561 ']' 00:10:33.503 11:22:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 3739561 00:10:33.503 11:22:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:10:33.503 11:22:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:33.503 11:22:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3739561 00:10:33.503 11:22:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:10:33.503 11:22:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:10:33.503 11:22:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3739561' 00:10:33.503 killing process with pid 3739561 00:10:33.503 11:22:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 3739561 00:10:33.503 11:22:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 3739561 00:10:33.761 11:22:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:33.761 11:22:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:33.761 11:22:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:33.761 11:22:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:33.761 11:22:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:10:33.761 11:22:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:33.761 11:22:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:10:33.761 11:22:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:33.761 11:22:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:33.761 11:22:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:33.761 11:22:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:33.761 11:22:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:36.294 11:22:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:36.294 00:10:36.294 real 0m6.510s 00:10:36.294 user 0m10.519s 00:10:36.294 sys 0m2.166s 00:10:36.294 11:22:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:36.294 11:22:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:36.294 ************************************ 00:10:36.294 END TEST nvmf_bdevio 00:10:36.294 ************************************ 00:10:36.294 11:22:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:36.294 00:10:36.294 real 3m55.310s 00:10:36.294 user 10m12.131s 00:10:36.294 sys 1m8.198s 
00:10:36.294 11:22:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:36.294 11:22:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:36.294 ************************************ 00:10:36.294 END TEST nvmf_target_core 00:10:36.294 ************************************ 00:10:36.294 11:22:36 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:36.294 11:22:36 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:36.294 11:22:36 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:36.294 11:22:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:36.294 ************************************ 00:10:36.294 START TEST nvmf_target_extra 00:10:36.294 ************************************ 00:10:36.294 11:22:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:36.294 * Looking for test storage... 00:10:36.294 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:36.294 11:22:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:36.294 11:22:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lcov --version 00:10:36.294 11:22:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:36.294 11:22:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:36.294 11:22:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:36.294 11:22:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:36.294 11:22:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:36.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.295 --rc genhtml_branch_coverage=1 00:10:36.295 --rc genhtml_function_coverage=1 00:10:36.295 --rc genhtml_legend=1 00:10:36.295 --rc geninfo_all_blocks=1 00:10:36.295 --rc geninfo_unexecuted_blocks=1 00:10:36.295 00:10:36.295 ' 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:36.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.295 --rc genhtml_branch_coverage=1 00:10:36.295 --rc genhtml_function_coverage=1 00:10:36.295 --rc genhtml_legend=1 00:10:36.295 --rc geninfo_all_blocks=1 00:10:36.295 --rc geninfo_unexecuted_blocks=1 00:10:36.295 00:10:36.295 ' 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:36.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.295 --rc genhtml_branch_coverage=1 00:10:36.295 --rc genhtml_function_coverage=1 00:10:36.295 --rc genhtml_legend=1 00:10:36.295 --rc geninfo_all_blocks=1 00:10:36.295 --rc geninfo_unexecuted_blocks=1 00:10:36.295 00:10:36.295 ' 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:36.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.295 --rc genhtml_branch_coverage=1 00:10:36.295 --rc genhtml_function_coverage=1 00:10:36.295 --rc genhtml_legend=1 00:10:36.295 --rc geninfo_all_blocks=1 00:10:36.295 --rc geninfo_unexecuted_blocks=1 00:10:36.295 00:10:36.295 ' 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:36.295 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:36.295 ************************************ 00:10:36.295 START TEST nvmf_example 00:10:36.295 ************************************ 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:36.295 * Looking for test storage... 
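Editorial aside (not part of the log): before the detailed trace of this test, here is the nvmf_example flow condensed from the log entries that follow — start the example target in the test namespace, configure it over RPC (TCP transport, 64 MiB malloc bdev, one subsystem with a listener on 10.0.0.2:4420), then load it with spdk_nvme_perf. "rpc.py" stands in for the test's rpc_cmd wrapper; the paths, NQNs and arguments are copied from the trace, so treat this as an illustrative summary rather than a substitute for nvmf_example.sh.

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # 1. Start the example target inside the test namespace, as the log does.
    ip netns exec cvl_0_0_ns_spdk $SPDK/build/examples/nvmf -i 0 -g 10000 -m 0xF &

    # 2. Configure it over RPC: TCP transport, 64 MiB malloc bdev, subsystem + listener.
    $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    $SPDK/scripts/rpc.py bdev_malloc_create 64 512        # the log shows this coming back as Malloc0
    $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # 3. Drive it from the root namespace (4 KiB random I/O, -M 30 mix, 10 s), as in the log.
    $SPDK/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'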
00:10:36.295 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lcov --version 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:36.295 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:36.296 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:36.296 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:36.296 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:36.296 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:36.296 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:36.296 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:36.296 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:36.296 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:36.296 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:36.296 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:36.296 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:36.296 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:36.296 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:36.296 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:36.296 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:36.296 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:36.296 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:10:36.296 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:36.296 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:36.296 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:36.296 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:36.296 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:36.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.296 --rc genhtml_branch_coverage=1 00:10:36.296 --rc genhtml_function_coverage=1 00:10:36.296 --rc genhtml_legend=1 00:10:36.296 --rc geninfo_all_blocks=1 00:10:36.296 --rc geninfo_unexecuted_blocks=1 00:10:36.296 00:10:36.296 ' 00:10:36.296 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:36.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.296 --rc genhtml_branch_coverage=1 00:10:36.296 --rc genhtml_function_coverage=1 00:10:36.296 --rc genhtml_legend=1 00:10:36.296 --rc geninfo_all_blocks=1 00:10:36.296 --rc geninfo_unexecuted_blocks=1 00:10:36.296 00:10:36.296 ' 00:10:36.296 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:36.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.296 --rc genhtml_branch_coverage=1 00:10:36.296 --rc genhtml_function_coverage=1 00:10:36.296 --rc genhtml_legend=1 00:10:36.296 --rc geninfo_all_blocks=1 00:10:36.296 --rc geninfo_unexecuted_blocks=1 00:10:36.296 00:10:36.296 ' 00:10:36.296 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:36.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.296 --rc genhtml_branch_coverage=1 00:10:36.296 --rc genhtml_function_coverage=1 00:10:36.296 --rc genhtml_legend=1 00:10:36.296 --rc geninfo_all_blocks=1 00:10:36.296 --rc geninfo_unexecuted_blocks=1 00:10:36.296 00:10:36.296 ' 00:10:36.296 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:36.296 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:36.296 11:22:36 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:36.296 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:36.296 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:36.296 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:36.296 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:36.296 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:36.296 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:36.296 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:36.296 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:36.296 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:36.296 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:36.296 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:36.296 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:36.296 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:36.296 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:36.296 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:36.296 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:36.296 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:36.296 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:36.296 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:36.296 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:36.296 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.296 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.296 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.296 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:36.296 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.296 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:36.296 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:36.296 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:36.296 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:36.296 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:36.296 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:36.296 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:36.296 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:36.296 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:36.296 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:36.296 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:36.296 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:36.296 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:36.296 11:22:36 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:36.296 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:36.296 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:36.296 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:36.296 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:36.296 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:36.296 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:36.296 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:36.296 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:36.296 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:36.296 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:36.296 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:36.296 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:36.296 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:36.297 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:36.297 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:36.297 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:36.297 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:36.297 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:36.297 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:36.297 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:38.239 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:38.239 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:10:38.239 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:38.239 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:38.239 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:38.239 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:38.239 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:38.239 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:10:38.239 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:38.239 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:10:38.239 11:22:38 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:10:38.239 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:10:38.239 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:10:38.239 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:10:38.239 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:10:38.239 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:38.239 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:38.239 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:38.239 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:38.239 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:38.239 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:38.239 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:38.239 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:38.239 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:38.239 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:38.239 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:38.240 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:38.240 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:38.240 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:38.240 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:38.240 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:38.240 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:38.240 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:38.240 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:38.240 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:38.240 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:38.240 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:38.240 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:38.240 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:38.240 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:38.240 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:38.240 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:38.240 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:38.240 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:38.240 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:38.240 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:38.240 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:38.240 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:38.240 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:38.240 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:38.240 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:38.240 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:38.240 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:38.240 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:38.240 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:38.240 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:38.240 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:38.240 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:38.240 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:38.240 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:38.240 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:38.240 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:38.240 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:38.240 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:38.240 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:38.240 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:38.240 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:38.240 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:38.240 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:38.240 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:38.240 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:38.240 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:38.240 11:22:38 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:38.240 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:10:38.240 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:38.240 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:38.240 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:38.240 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:38.240 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:38.240 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:38.240 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:38.240 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:38.240 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:38.240 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:38.240 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:38.240 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:38.240 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:38.240 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:38.240 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:38.240 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:38.240 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:38.240 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:38.240 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:38.240 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:38.240 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:38.240 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:38.527 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:38.527 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:38.527 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:38.527 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:38.527 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
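Editorial aside (not part of the log): the nvmf/common.sh trace just above is the nvmftestinit network bring-up for a physical TCP run. Condensed into plain commands it amounts to the sequence below; the interface names cvl_0_0/cvl_0_1 are this host's Intel E810 ports and would differ elsewhere, and the SPDK_NVMF comment tag on the iptables rule is omitted for brevity.

    ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                           # target side lives in its own netns
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT      # open the NVMe/TCP port
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # verify both directions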
00:10:38.527 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.364 ms 00:10:38.527 00:10:38.527 --- 10.0.0.2 ping statistics --- 00:10:38.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:38.527 rtt min/avg/max/mdev = 0.364/0.364/0.364/0.000 ms 00:10:38.527 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:38.527 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:38.527 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:10:38.527 00:10:38.527 --- 10.0.0.1 ping statistics --- 00:10:38.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:38.527 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:10:38.527 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:38.527 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:10:38.527 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:38.527 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:38.527 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:38.527 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:38.527 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:38.527 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:38.527 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:38.527 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:38.527 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:38.527 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:38.527 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:38.527 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:38.527 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:38.527 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3741841 00:10:38.527 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:38.527 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:38.527 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3741841 00:10:38.527 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@833 -- # '[' -z 3741841 ']' 00:10:38.527 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:38.527 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:38.527 11:22:38 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:38.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:38.527 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:38.527 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:39.461 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:39.461 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@866 -- # return 0 00:10:39.461 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:39.461 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:39.461 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:39.461 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:39.461 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.461 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:39.461 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.461 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:39.461 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.461 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:39.461 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.461 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:39.461 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:39.461 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.461 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:39.461 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.461 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:39.461 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:39.461 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.461 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:39.461 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.461 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:39.461 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:39.461 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:39.461 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.461 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:39.461 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:51.659 Initializing NVMe Controllers 00:10:51.659 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:51.659 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:51.659 Initialization complete. Launching workers. 00:10:51.659 ======================================================== 00:10:51.660 Latency(us) 00:10:51.660 Device Information : IOPS MiB/s Average min max 00:10:51.660 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14831.30 57.93 4314.79 772.06 16206.21 00:10:51.660 ======================================================== 00:10:51.660 Total : 14831.30 57.93 4314.79 772.06 16206.21 00:10:51.660 00:10:51.660 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:51.660 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:51.660 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:51.660 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:10:51.660 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:51.660 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:10:51.660 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:51.660 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:51.660 rmmod nvme_tcp 00:10:51.660 rmmod nvme_fabrics 00:10:51.660 rmmod nvme_keyring 00:10:51.660 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:51.660 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:10:51.660 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:10:51.660 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 3741841 ']' 00:10:51.660 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 3741841 00:10:51.660 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@952 -- # '[' -z 3741841 ']' 00:10:51.660 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # kill -0 3741841 00:10:51.660 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # uname 00:10:51.660 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:51.660 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3741841 00:10:51.660 11:22:50 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # process_name=nvmf 00:10:51.660 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@962 -- # '[' nvmf = sudo ']' 00:10:51.660 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3741841' 00:10:51.660 killing process with pid 3741841 00:10:51.660 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@971 -- # kill 3741841 00:10:51.660 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@976 -- # wait 3741841 00:10:51.660 nvmf threads initialize successfully 00:10:51.660 bdev subsystem init successfully 00:10:51.660 created a nvmf target service 00:10:51.660 create targets's poll groups done 00:10:51.660 all subsystems of target started 00:10:51.660 nvmf target is running 00:10:51.660 all subsystems of target stopped 00:10:51.660 destroy targets's poll groups done 00:10:51.660 destroyed the nvmf target service 00:10:51.660 bdev subsystem finish successfully 00:10:51.660 nvmf threads destroy successfully 00:10:51.660 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:51.660 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:51.660 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:51.660 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:10:51.660 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:10:51.660 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:51.660 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:10:51.660 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:51.660 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:51.660 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:51.660 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:51.660 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:52.229 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:52.229 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:52.229 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:52.229 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:52.229 00:10:52.229 real 0m16.067s 00:10:52.229 user 0m45.437s 00:10:52.229 sys 0m3.265s 00:10:52.229 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:52.229 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:52.229 ************************************ 00:10:52.229 END TEST nvmf_example 00:10:52.229 ************************************ 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:52.230 ************************************ 00:10:52.230 START TEST nvmf_filesystem 00:10:52.230 ************************************ 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:52.230 * Looking for test storage... 00:10:52.230 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:52.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.230 --rc genhtml_branch_coverage=1 00:10:52.230 --rc genhtml_function_coverage=1 00:10:52.230 --rc genhtml_legend=1 00:10:52.230 --rc geninfo_all_blocks=1 00:10:52.230 --rc geninfo_unexecuted_blocks=1 00:10:52.230 00:10:52.230 ' 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:52.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.230 --rc genhtml_branch_coverage=1 00:10:52.230 --rc genhtml_function_coverage=1 00:10:52.230 --rc genhtml_legend=1 00:10:52.230 --rc geninfo_all_blocks=1 00:10:52.230 --rc geninfo_unexecuted_blocks=1 00:10:52.230 00:10:52.230 ' 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:52.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.230 --rc genhtml_branch_coverage=1 00:10:52.230 --rc genhtml_function_coverage=1 00:10:52.230 --rc genhtml_legend=1 00:10:52.230 --rc geninfo_all_blocks=1 00:10:52.230 --rc geninfo_unexecuted_blocks=1 00:10:52.230 00:10:52.230 ' 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:52.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.230 --rc genhtml_branch_coverage=1 00:10:52.230 --rc genhtml_function_coverage=1 00:10:52.230 --rc genhtml_legend=1 00:10:52.230 --rc geninfo_all_blocks=1 00:10:52.230 --rc geninfo_unexecuted_blocks=1 00:10:52.230 00:10:52.230 ' 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:10:52.230 11:22:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:52.230 
11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:10:52.230 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:10:52.231 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:10:52.231 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:10:52.231 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:10:52.231 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:10:52.231 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:10:52.231 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:10:52.231 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:10:52.231 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:10:52.231 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:52.231 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:10:52.231 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:10:52.231 11:22:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:10:52.231 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:10:52.231 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:52.231 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:10:52.231 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:52.231 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:10:52.231 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:10:52.231 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:10:52.231 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:10:52.231 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:10:52.231 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:10:52.231 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:10:52.231 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:10:52.231 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:10:52.231 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:52.231 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:10:52.231 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:10:52.231 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:10:52.231 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:10:52.231 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:10:52.231 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:10:52.231 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:10:52.231 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:10:52.231 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:10:52.231 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:10:52.231 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:52.231 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:10:52.231 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:10:52.231 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:10:52.231 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:10:52.231 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:10:52.231 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:10:52.231 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:10:52.231 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:10:52.231 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:10:52.231 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:10:52.231 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:10:52.231 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:52.231 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:10:52.231 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:10:52.231 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:10:52.231 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:52.231 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:52.231 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:52.231 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:52.231 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:52.231 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:52.231 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:52.231 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:52.231 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:52.231 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:52.231 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:52.231 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:52.231 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:52.231 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 
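[editor's sketch] The common/applications.sh trace above derives the SPDK root from the sourced file's own location and then defines the application launch commands as arrays relative to it. A minimal sketch of that pattern, with the directory layout assumed rather than copied from the workspace paths in the log:

#!/usr/bin/env bash
# Resolve this file's directory, then strip the trailing test/common to get the repo root,
# mirroring the readlink -f / _root steps visible in the trace above.
_this_dir=$(readlink -f "$(dirname "${BASH_SOURCE[0]}")")
_root=${_this_dir%/test/common}        # assumption: this file lives under <root>/test/common
_app_dir=$_root/build/bin
_test_app_dir=$_root/test/app
_examples_dir=$_root/build/examples
# Launch commands are kept as arrays so callers can append flags later.
NVMF_APP=("$_app_dir/nvmf_tgt")
SPDK_APP=("$_app_dir/spdk_tgt")
VHOST_APP=("$_app_dir/vhost")

Keeping the commands as arrays (rather than single strings) is what lets the harness later run "${NVMF_APP[@]}" with extra arguments without word-splitting surprises.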
00:10:52.231 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:52.231 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:52.231 #define SPDK_CONFIG_H 00:10:52.231 #define SPDK_CONFIG_AIO_FSDEV 1 00:10:52.231 #define SPDK_CONFIG_APPS 1 00:10:52.231 #define SPDK_CONFIG_ARCH native 00:10:52.231 #undef SPDK_CONFIG_ASAN 00:10:52.231 #undef SPDK_CONFIG_AVAHI 00:10:52.231 #undef SPDK_CONFIG_CET 00:10:52.231 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:10:52.231 #define SPDK_CONFIG_COVERAGE 1 00:10:52.231 #define SPDK_CONFIG_CROSS_PREFIX 00:10:52.231 #undef SPDK_CONFIG_CRYPTO 00:10:52.231 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:52.231 #undef SPDK_CONFIG_CUSTOMOCF 00:10:52.231 #undef SPDK_CONFIG_DAOS 00:10:52.231 #define SPDK_CONFIG_DAOS_DIR 00:10:52.231 #define SPDK_CONFIG_DEBUG 1 00:10:52.231 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:52.231 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:10:52.231 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:10:52.231 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:10:52.231 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:52.231 #undef SPDK_CONFIG_DPDK_UADK 00:10:52.231 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:52.231 #define SPDK_CONFIG_EXAMPLES 1 00:10:52.231 #undef SPDK_CONFIG_FC 00:10:52.231 #define SPDK_CONFIG_FC_PATH 00:10:52.231 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:52.231 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:52.231 #define SPDK_CONFIG_FSDEV 1 00:10:52.231 #undef SPDK_CONFIG_FUSE 00:10:52.231 #undef SPDK_CONFIG_FUZZER 00:10:52.231 #define SPDK_CONFIG_FUZZER_LIB 00:10:52.231 #undef SPDK_CONFIG_GOLANG 00:10:52.231 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:52.231 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:52.231 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:52.231 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:52.231 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:52.231 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:52.231 #undef SPDK_CONFIG_HAVE_LZ4 00:10:52.231 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:10:52.231 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:10:52.231 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:52.231 #define SPDK_CONFIG_IDXD 1 00:10:52.231 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:52.231 #undef SPDK_CONFIG_IPSEC_MB 00:10:52.231 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:52.231 #define SPDK_CONFIG_ISAL 1 00:10:52.231 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:52.231 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:52.231 #define SPDK_CONFIG_LIBDIR 00:10:52.231 #undef SPDK_CONFIG_LTO 00:10:52.231 #define SPDK_CONFIG_MAX_LCORES 128 00:10:52.231 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:10:52.231 #define SPDK_CONFIG_NVME_CUSE 1 00:10:52.231 #undef SPDK_CONFIG_OCF 00:10:52.231 #define SPDK_CONFIG_OCF_PATH 00:10:52.231 #define SPDK_CONFIG_OPENSSL_PATH 00:10:52.231 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:52.231 #define SPDK_CONFIG_PGO_DIR 00:10:52.231 #undef SPDK_CONFIG_PGO_USE 00:10:52.231 #define SPDK_CONFIG_PREFIX /usr/local 00:10:52.231 #undef SPDK_CONFIG_RAID5F 00:10:52.231 #undef SPDK_CONFIG_RBD 00:10:52.231 #define SPDK_CONFIG_RDMA 1 00:10:52.231 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:52.231 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:52.231 #define 
SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:52.231 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:52.231 #define SPDK_CONFIG_SHARED 1 00:10:52.231 #undef SPDK_CONFIG_SMA 00:10:52.231 #define SPDK_CONFIG_TESTS 1 00:10:52.231 #undef SPDK_CONFIG_TSAN 00:10:52.231 #define SPDK_CONFIG_UBLK 1 00:10:52.231 #define SPDK_CONFIG_UBSAN 1 00:10:52.232 #undef SPDK_CONFIG_UNIT_TESTS 00:10:52.232 #undef SPDK_CONFIG_URING 00:10:52.232 #define SPDK_CONFIG_URING_PATH 00:10:52.232 #undef SPDK_CONFIG_URING_ZNS 00:10:52.232 #undef SPDK_CONFIG_USDT 00:10:52.232 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:52.232 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:52.232 #define SPDK_CONFIG_VFIO_USER 1 00:10:52.232 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:52.232 #define SPDK_CONFIG_VHOST 1 00:10:52.232 #define SPDK_CONFIG_VIRTIO 1 00:10:52.232 #undef SPDK_CONFIG_VTUNE 00:10:52.232 #define SPDK_CONFIG_VTUNE_DIR 00:10:52.232 #define SPDK_CONFIG_WERROR 1 00:10:52.232 #define SPDK_CONFIG_WPDK_DIR 00:10:52.232 #undef SPDK_CONFIG_XNVME 00:10:52.232 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:52.232 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:52.232 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:52.232 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:52.232 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:52.232 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:52.232 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:52.494 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.494 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.494 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.494 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:52.494 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.494 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:52.494 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:52.494 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:52.494 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:52.494 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:52.494 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:52.494 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:52.494 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:52.494 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:52.494 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:52.494 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:52.494 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:52.494 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:52.494 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:52.494 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:52.494 11:22:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:52.494 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:52.494 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:52.494 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:52.494 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:52.494 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:52.494 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:52.494 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:52.494 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:10:52.494 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:52.494 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:52.494 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:10:52.494 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:10:52.494 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:52.494 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:52.494 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:52.494 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:52.494 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:52.494 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:52.494 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:52.494 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:52.494 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:52.494 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:52.494 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:52.494 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:52.494 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:52.494 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:52.494 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:52.494 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:52.494 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
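[editor's sketch] The applications.sh@23 step traced a few records earlier reads the generated include/spdk/config.h and glob-matches its contents against "#define SPDK_CONFIG_DEBUG" before debug-app behaviour is even considered. A hedged sketch of that file-content test (the header path below is an assumption, not the workspace path):

#!/usr/bin/env bash
config_h=./include/spdk/config.h          # assumed location relative to the repo root
if [[ -e $config_h ]] && [[ $(<"$config_h") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
    # Only a debug build may opt in to the slower debug-app settings.
    (( ${SPDK_AUTOTEST_DEBUG_APPS:-0} )) && echo "debug apps enabled"
else
    echo "release build: debug-app tweaks skipped"
fi

The run of backslashes in the traced line (*\#\d\e\f\i\n\e\ ...*) is just how xtrace prints the unquoted glob pattern; quoting the literal part, as above, expresses the same test.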
00:10:52.494 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:52.494 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:52.494 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:52.494 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:52.494 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:52.494 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:52.494 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:52.494 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:52.494 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:52.494 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:52.494 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:52.494 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:52.494 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:52.495 11:22:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 
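[editor's sketch] The long run of paired ": <value>" and "export SPDK_TEST_*/SPDK_RUN_*" records above and below is bash's default-assignment idiom: ":" with ${VAR=default} assigns only when the variable is still unset, so values injected by the CI job win over the script's defaults. A minimal sketch with a few representative flags (defaults taken from this trace, not a complete list):

#!/usr/bin/env bash
# Assign a default only if the caller has not already set the variable, then export it.
: "${RUN_NIGHTLY=1}";                 export RUN_NIGHTLY
: "${SPDK_TEST_NVMF=1}";              export SPDK_TEST_NVMF
: "${SPDK_TEST_NVMF_TRANSPORT=tcp}";  export SPDK_TEST_NVMF_TRANSPORT
: "${SPDK_RUN_UBSAN=1}";              export SPDK_RUN_UBSAN
echo "nvmf=$SPDK_TEST_NVMF transport=$SPDK_TEST_NVMF_TRANSPORT ubsan=$SPDK_RUN_UBSAN"

Running the sketch with, say, SPDK_TEST_NVMF=0 already in the environment leaves it at 0, which is how a job can switch individual test groups on or off without editing the script.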
00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : v23.11 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:52.495 11:22:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:52.495 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:52.496 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:52.496 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:52.496 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:52.496 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:52.496 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:52.496 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:52.496 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:10:52.496 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:52.496 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:52.496 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:52.496 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:52.496 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:52.496 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:10:52.496 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:10:52.496 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:10:52.496 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:52.496 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:52.496 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:52.496 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:52.496 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # 
'[' -z /var/spdk/dependencies ']' 00:10:52.496 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:10:52.496 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:52.496 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:52.496 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:52.496 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:52.496 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:52.496 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:52.496 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:52.496 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:52.496 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:52.496 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:52.496 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:52.496 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:52.496 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:10:52.496 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:10:52.496 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:10:52.496 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:10:52.496 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:10:52.496 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:52.496 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:10:52.496 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:10:52.496 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:10:52.496 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:10:52.496 11:22:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:10:52.496 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:10:52.496 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:10:52.496 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:10:52.496 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:10:52.496 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:10:52.496 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:10:52.496 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j48 00:10:52.496 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:10:52.496 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:10:52.496 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:10:52.496 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:10:52.496 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:10:52.496 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:10:52.496 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:10:52.496 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 3743546 ]] 00:10:52.496 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 3743546 00:10:52.496 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:10:52.496 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:10:52.496 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:10:52.496 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:10:52.496 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:10:52.496 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:10:52.496 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:10:52.496 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:10:52.496 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.eIYiFq 00:10:52.496 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:52.496 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:10:52.496 11:22:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:10:52.496 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.eIYiFq/tests/target /tmp/spdk.eIYiFq 00:10:52.496 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:10:52.496 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:52.496 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:10:52.496 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:10:52.496 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:10:52.496 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:10:52.496 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:10:52.496 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:10:52.496 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:10:52.496 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:52.496 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=4096 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=5284425728 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=52526002176 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=61988528128 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=9462525952 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # 
avails["$mount"]=30982897664 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=30994264064 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=11366400 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=12375269376 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=12397707264 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=22437888 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=30992683008 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=30994264064 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=1581056 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=6198837248 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=6198849536 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:10:52.497 * Looking for test storage... 
00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=52526002176 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=11677118464 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:52.497 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:52.497 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:52.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.497 --rc genhtml_branch_coverage=1 00:10:52.497 --rc genhtml_function_coverage=1 00:10:52.498 --rc genhtml_legend=1 00:10:52.498 --rc geninfo_all_blocks=1 00:10:52.498 --rc geninfo_unexecuted_blocks=1 00:10:52.498 00:10:52.498 ' 00:10:52.498 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:52.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.498 --rc genhtml_branch_coverage=1 00:10:52.498 --rc genhtml_function_coverage=1 00:10:52.498 --rc genhtml_legend=1 00:10:52.498 --rc geninfo_all_blocks=1 00:10:52.498 --rc geninfo_unexecuted_blocks=1 00:10:52.498 00:10:52.498 ' 00:10:52.498 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:52.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.498 --rc genhtml_branch_coverage=1 00:10:52.498 --rc genhtml_function_coverage=1 00:10:52.498 --rc genhtml_legend=1 00:10:52.498 --rc geninfo_all_blocks=1 00:10:52.498 --rc geninfo_unexecuted_blocks=1 00:10:52.498 00:10:52.498 ' 00:10:52.498 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:52.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.498 --rc genhtml_branch_coverage=1 00:10:52.498 --rc genhtml_function_coverage=1 00:10:52.498 --rc genhtml_legend=1 00:10:52.498 --rc geninfo_all_blocks=1 00:10:52.498 --rc geninfo_unexecuted_blocks=1 00:10:52.498 00:10:52.498 ' 00:10:52.498 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:52.498 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@7 -- # uname -s 00:10:52.498 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:52.498 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:52.498 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:52.498 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:52.498 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:52.498 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:52.498 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:52.498 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:52.498 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:52.498 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:52.498 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:52.498 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:52.498 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:52.498 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:52.498 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:52.498 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:52.498 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:52.498 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:52.498 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:52.498 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:52.498 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:52.498 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.498 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.498 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.498 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:52.498 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.498 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:10:52.498 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:52.498 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:52.498 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:52.498 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:52.498 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:52.498 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:52.498 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:52.498 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:52.498 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:52.498 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:52.498 11:22:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:10:52.498 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:52.498 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:52.498 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:52.498 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:52.498 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:52.498 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:52.498 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:52.498 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:52.498 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:52.498 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:52.498 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:52.498 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:52.498 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:10:52.498 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:55.030 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:55.030 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:10:55.030 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:55.030 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:55.030 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:55.030 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:55.030 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:55.030 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:10:55.030 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:55.030 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:10:55.031 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:10:55.031 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:10:55.031 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:10:55.031 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:10:55.031 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:10:55.031 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:55.031 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:55.031 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:55.031 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:55.031 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:55.031 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:55.031 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:55.031 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:55.031 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:55.031 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:55.031 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:55.031 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:55.031 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:55.031 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:55.031 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:55.031 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:55.031 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:55.031 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:55.031 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:55.031 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:55.031 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:55.031 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:55.031 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:55.031 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:55.031 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:55.031 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:55.031 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:55.031 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:55.031 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:55.031 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:55.031 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:55.031 11:22:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:55.031 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:55.031 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:55.031 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:55.031 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:55.031 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:55.031 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:55.031 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:55.031 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:55.031 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:55.031 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:55.031 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:55.031 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:55.031 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:55.031 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:55.031 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:55.031 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:55.031 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:55.031 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:55.031 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:55.031 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:55.031 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:55.031 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:55.031 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:55.031 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:55.031 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:55.031 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:55.031 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:10:55.031 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:55.031 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:55.031 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:55.031 11:22:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:55.031 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:55.031 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:55.031 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:55.031 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:55.031 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:55.031 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:55.031 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:55.031 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:55.031 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:55.031 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:55.031 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:55.031 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:55.031 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:55.031 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:55.031 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:55.031 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:55.031 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:55.031 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:55.031 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:55.031 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:55.031 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:55.031 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:55.031 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:55.031 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:10:55.031 00:10:55.031 --- 10.0.0.2 ping statistics --- 00:10:55.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:55.031 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:10:55.031 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:55.031 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:55.031 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:10:55.031 00:10:55.031 --- 10.0.0.1 ping statistics --- 00:10:55.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:55.031 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:10:55.031 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:55.031 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:10:55.031 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:55.031 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:55.031 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:55.031 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:55.031 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:55.031 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:55.031 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:55.032 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:55.032 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:55.032 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:55.032 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:55.032 ************************************ 00:10:55.032 START TEST nvmf_filesystem_no_in_capsule 00:10:55.032 ************************************ 00:10:55.032 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1127 -- # nvmf_filesystem_part 0 00:10:55.032 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:55.032 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:55.032 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:55.032 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:55.032 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:55.032 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3745196 00:10:55.032 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:55.032 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3745196 00:10:55.032 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # '[' -z 3745196 ']' 00:10:55.032 
11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:55.032 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:55.032 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:55.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:55.032 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:55.032 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:55.032 [2024-11-02 11:22:55.256084] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:10:55.032 [2024-11-02 11:22:55.256183] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:55.032 [2024-11-02 11:22:55.339110] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:55.032 [2024-11-02 11:22:55.391350] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:55.032 [2024-11-02 11:22:55.391411] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:55.032 [2024-11-02 11:22:55.391427] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:55.032 [2024-11-02 11:22:55.391440] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:55.032 [2024-11-02 11:22:55.391451] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
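At this point nvmfappstart has launched the target inside the cvl_0_0_ns_spdk namespace created earlier and is waiting for its JSON-RPC socket. A minimal sketch of that start-and-wait step (the polling loop stands in for the real waitforlisten helper, which this trace does not show, and is therefore an assumption):

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# Start the NVMe-oF target in the target network namespace, as in the trace:
# shm id 0, tracepoint mask 0xFFFF, core mask 0xF (four reactors).
ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!                                      # 3745196 in this run

# Assumed readiness probe: poll the default RPC socket until the app answers.
for _ in $(seq 1 120); do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
    sleep 0.5
done

The -m 0xF core mask accounts for the four "Reactor started" notices on cores 0-3 that follow, and -e 0xFFFF is the tracepoint group mask echoed by app_setup_trace above.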
00:10:55.032 [2024-11-02 11:22:55.394283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:55.032 [2024-11-02 11:22:55.394316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:55.032 [2024-11-02 11:22:55.394431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:55.032 [2024-11-02 11:22:55.394434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.290 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:55.290 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@866 -- # return 0 00:10:55.290 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:55.290 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:55.290 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:55.290 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:55.290 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:55.291 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:55.291 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.291 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:55.291 [2024-11-02 11:22:55.545343] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:55.291 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.291 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:55.291 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.291 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:55.549 Malloc1 00:10:55.549 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.549 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:55.549 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.549 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:55.549 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.549 11:22:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:55.549 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.549 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:55.549 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.549 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:55.549 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.549 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:55.549 [2024-11-02 11:22:55.732289] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:55.549 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.549 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:55.549 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_name=Malloc1 00:10:55.549 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local bdev_info 00:10:55.549 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bs 00:10:55.549 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local nb 00:10:55.549 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:55.549 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.549 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:55.549 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.549 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:10:55.549 { 00:10:55.549 "name": "Malloc1", 00:10:55.549 "aliases": [ 00:10:55.549 "52047d06-a332-4ed7-b8b3-8080524612ce" 00:10:55.549 ], 00:10:55.549 "product_name": "Malloc disk", 00:10:55.549 "block_size": 512, 00:10:55.549 "num_blocks": 1048576, 00:10:55.549 "uuid": "52047d06-a332-4ed7-b8b3-8080524612ce", 00:10:55.549 "assigned_rate_limits": { 00:10:55.549 "rw_ios_per_sec": 0, 00:10:55.549 "rw_mbytes_per_sec": 0, 00:10:55.549 "r_mbytes_per_sec": 0, 00:10:55.549 "w_mbytes_per_sec": 0 00:10:55.549 }, 00:10:55.549 "claimed": true, 00:10:55.549 "claim_type": "exclusive_write", 00:10:55.549 "zoned": false, 00:10:55.549 "supported_io_types": { 00:10:55.549 "read": 
true, 00:10:55.549 "write": true, 00:10:55.549 "unmap": true, 00:10:55.549 "flush": true, 00:10:55.549 "reset": true, 00:10:55.549 "nvme_admin": false, 00:10:55.549 "nvme_io": false, 00:10:55.550 "nvme_io_md": false, 00:10:55.550 "write_zeroes": true, 00:10:55.550 "zcopy": true, 00:10:55.550 "get_zone_info": false, 00:10:55.550 "zone_management": false, 00:10:55.550 "zone_append": false, 00:10:55.550 "compare": false, 00:10:55.550 "compare_and_write": false, 00:10:55.550 "abort": true, 00:10:55.550 "seek_hole": false, 00:10:55.550 "seek_data": false, 00:10:55.550 "copy": true, 00:10:55.550 "nvme_iov_md": false 00:10:55.550 }, 00:10:55.550 "memory_domains": [ 00:10:55.550 { 00:10:55.550 "dma_device_id": "system", 00:10:55.550 "dma_device_type": 1 00:10:55.550 }, 00:10:55.550 { 00:10:55.550 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.550 "dma_device_type": 2 00:10:55.550 } 00:10:55.550 ], 00:10:55.550 "driver_specific": {} 00:10:55.550 } 00:10:55.550 ]' 00:10:55.550 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:10:55.550 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # bs=512 00:10:55.550 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:10:55.550 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # nb=1048576 00:10:55.550 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1389 -- # bdev_size=512 00:10:55.550 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1390 -- # echo 512 00:10:55.550 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:55.550 11:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:56.483 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:56.483 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # local i=0 00:10:56.483 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:10:56.483 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:10:56.483 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # sleep 2 00:10:58.380 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:10:58.380 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:10:58.380 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # grep -c 
SPDKISFASTANDAWESOME 00:10:58.380 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:10:58.380 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:10:58.380 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # return 0 00:10:58.380 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:58.380 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:58.380 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:58.380 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:58.380 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:58.380 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:58.380 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:58.380 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:58.380 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:58.381 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:58.381 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:58.381 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:59.314 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:00.246 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:00.246 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:00.246 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:00.246 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:00.246 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:00.246 ************************************ 00:11:00.246 START TEST filesystem_ext4 00:11:00.246 ************************************ 00:11:00.246 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create ext4 nvme0n1 
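Condensed, the provisioning that the preceding trace walks through is: create the TCP transport with zero in-capsule data (this is the no_in_capsule variant), back it with a malloc bdev, expose that bdev through a subsystem and listener, then attach from the host and partition the resulting namespace. A sketch using the same RPC and nvme-cli invocations seen above (the rpc() wrapper is an assumption standing in for the test suite's rpc_cmd helper):

# Assumed wrapper for the rpc_cmd helper used in the trace.
rpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock "$@"; }

# Target side: transport, backing bdev, subsystem, namespace, listener.
rpc nvmf_create_transport -t tcp -o -u 8192 -c 0          # -c 0: no in-capsule data
rpc bdev_malloc_create 512 512 -b Malloc1                 # 512 MiB, 512-byte blocks
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Host side: connect over TCP, then carve a single GPT partition for the tests.
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --hostid=5b23e107-7094-e311-b1cb-001e67a97d55
parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe

The block_size (512) and num_blocks (1048576) in the bdev_get_bdevs output above multiply out to the 536870912 bytes that the test matches against the connected nvme0n1 namespace before partitioning.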
00:11:00.246 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:00.246 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:00.246 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:00.246 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local fstype=ext4 00:11:00.246 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:11:00.246 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local i=0 00:11:00.246 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local force 00:11:00.246 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # '[' ext4 = ext4 ']' 00:11:00.246 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@934 -- # force=-F 00:11:00.246 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@939 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:00.246 mke2fs 1.47.0 (5-Feb-2023) 00:11:00.246 Discarding device blocks: 0/522240 done 00:11:00.246 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:00.246 Filesystem UUID: 6cd8d2e7-968c-44be-ad6e-329c471aa470 00:11:00.246 Superblock backups stored on blocks: 00:11:00.246 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:00.246 00:11:00.246 Allocating group tables: 0/64 done 00:11:00.246 Writing inode tables: 0/64 done 00:11:01.618 Creating journal (8192 blocks): done 00:11:03.631 Writing superblocks and filesystem accounting information: 0/64 1/64 done 00:11:03.631 00:11:03.631 11:23:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@947 -- # return 0 00:11:03.631 11:23:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:08.892 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:08.892 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:08.892 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:08.892 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:08.892 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:08.892 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:09.151 
11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3745196 00:11:09.151 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:09.151 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:09.151 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:09.151 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:09.151 00:11:09.151 real 0m8.870s 00:11:09.151 user 0m0.021s 00:11:09.151 sys 0m0.068s 00:11:09.151 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:09.151 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:09.151 ************************************ 00:11:09.151 END TEST filesystem_ext4 00:11:09.151 ************************************ 00:11:09.151 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:09.151 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:09.151 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:09.151 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:09.151 ************************************ 00:11:09.151 START TEST filesystem_btrfs 00:11:09.151 ************************************ 00:11:09.151 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:09.151 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:09.151 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:09.151 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:09.151 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local fstype=btrfs 00:11:09.151 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:11:09.151 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local i=0 00:11:09.151 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local force 00:11:09.151 11:23:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # '[' btrfs = ext4 ']' 00:11:09.151 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@936 -- # force=-f 00:11:09.151 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@939 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:09.418 btrfs-progs v6.8.1 00:11:09.419 See https://btrfs.readthedocs.io for more information. 00:11:09.419 00:11:09.419 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:09.419 NOTE: several default settings have changed in version 5.15, please make sure 00:11:09.419 this does not affect your deployments: 00:11:09.419 - DUP for metadata (-m dup) 00:11:09.419 - enabled no-holes (-O no-holes) 00:11:09.419 - enabled free-space-tree (-R free-space-tree) 00:11:09.419 00:11:09.419 Label: (null) 00:11:09.419 UUID: c6ec5603-388e-4877-bfe5-f7d8e63c9481 00:11:09.419 Node size: 16384 00:11:09.419 Sector size: 4096 (CPU page size: 4096) 00:11:09.419 Filesystem size: 510.00MiB 00:11:09.419 Block group profiles: 00:11:09.419 Data: single 8.00MiB 00:11:09.419 Metadata: DUP 32.00MiB 00:11:09.419 System: DUP 8.00MiB 00:11:09.419 SSD detected: yes 00:11:09.419 Zoned device: no 00:11:09.419 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:09.419 Checksum: crc32c 00:11:09.419 Number of devices: 1 00:11:09.419 Devices: 00:11:09.419 ID SIZE PATH 00:11:09.419 1 510.00MiB /dev/nvme0n1p1 00:11:09.419 00:11:09.419 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@947 -- # return 0 00:11:09.419 11:23:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:10.354 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:10.354 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:10.354 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:10.354 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:10.354 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:10.354 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:10.354 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3745196 00:11:10.355 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:10.355 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:10.355 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:10.355 
11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:10.355 00:11:10.355 real 0m1.139s 00:11:10.355 user 0m0.035s 00:11:10.355 sys 0m0.087s 00:11:10.355 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:10.355 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:10.355 ************************************ 00:11:10.355 END TEST filesystem_btrfs 00:11:10.355 ************************************ 00:11:10.355 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:10.355 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:10.355 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:10.355 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:10.355 ************************************ 00:11:10.355 START TEST filesystem_xfs 00:11:10.355 ************************************ 00:11:10.355 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create xfs nvme0n1 00:11:10.355 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:10.355 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:10.355 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:10.355 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local fstype=xfs 00:11:10.355 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:11:10.355 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local i=0 00:11:10.355 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local force 00:11:10.355 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # '[' xfs = ext4 ']' 00:11:10.355 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@936 -- # force=-f 00:11:10.355 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@939 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:10.921 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:10.921 = sectsz=512 attr=2, projid32bit=1 00:11:10.921 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:10.921 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:10.921 data 
= bsize=4096 blocks=130560, imaxpct=25 00:11:10.921 = sunit=0 swidth=0 blks 00:11:10.921 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:10.921 log =internal log bsize=4096 blocks=16384, version=2 00:11:10.921 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:10.921 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:11.854 Discarding blocks...Done. 00:11:11.854 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@947 -- # return 0 00:11:11.854 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:13.753 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:13.753 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:13.753 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:13.753 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:13.753 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:13.753 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:13.753 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3745196 00:11:13.753 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:13.753 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:13.753 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:13.753 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:13.753 00:11:13.753 real 0m3.311s 00:11:13.753 user 0m0.018s 00:11:13.753 sys 0m0.066s 00:11:13.753 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:13.753 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:13.753 ************************************ 00:11:13.753 END TEST filesystem_xfs 00:11:13.753 ************************************ 00:11:13.753 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:13.753 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:13.753 11:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:13.753 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:13.753 11:23:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:13.753 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1221 -- # local i=0 00:11:13.753 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:11:13.753 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:13.753 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:11:13.753 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:13.753 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1233 -- # return 0 00:11:13.754 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:13.754 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.754 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:13.754 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.754 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:13.754 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3745196 00:11:13.754 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # '[' -z 3745196 ']' 00:11:13.754 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # kill -0 3745196 00:11:13.754 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # uname 00:11:13.754 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:13.754 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3745196 00:11:13.754 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:13.754 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:13.754 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3745196' 00:11:13.754 killing process with pid 3745196 00:11:13.754 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@971 -- # kill 3745196 00:11:13.754 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@976 -- # wait 3745196 00:11:14.321 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:14.321 00:11:14.321 real 0m19.310s 00:11:14.321 user 1m14.845s 00:11:14.321 sys 0m2.342s 00:11:14.321 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:14.321 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:14.321 ************************************ 00:11:14.321 END TEST nvmf_filesystem_no_in_capsule 00:11:14.321 ************************************ 00:11:14.321 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:14.321 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:14.321 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:14.321 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:14.321 ************************************ 00:11:14.321 START TEST nvmf_filesystem_in_capsule 00:11:14.321 ************************************ 00:11:14.321 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1127 -- # nvmf_filesystem_part 4096 00:11:14.321 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:14.321 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:14.321 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:14.321 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:14.321 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:14.321 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3747698 00:11:14.321 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:14.321 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3747698 00:11:14.321 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # '[' -z 3747698 ']' 00:11:14.321 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:14.321 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:14.321 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:14.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
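The in-capsule pass starts a fresh nvmf_tgt inside the cvl_0_0_ns_spdk network namespace and provisions it over /var/tmp/spdk.sock. The entries that follow boil down to roughly the sequence below; rpc_cmd in the trace effectively forwards to scripts/rpc.py, the `-c 4096` transport option is what distinguishes this pass from the no-in-capsule one, and all values are copied from the log (target binary path shortened):

    # Target-side setup for the in-capsule pass (sketch, not the literal script).
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!                                             # 3747698 in this run
    # ... wait until /var/tmp/spdk.sock answers RPCs (waitforlisten), then:
    rpc=scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192 -c 4096   # 4 KiB of in-capsule data
    $rpc bdev_malloc_create 512 512 -b Malloc1             # 512 MiB volume, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The bdev_get_bdevs/jq steps a little further down only confirm that the malloc bdev really is 1048576 blocks of 512 bytes, i.e. the 536870912-byte size the host is later checked against.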
00:11:14.321 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:14.321 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:14.321 [2024-11-02 11:23:14.618730] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:11:14.321 [2024-11-02 11:23:14.618815] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:14.321 [2024-11-02 11:23:14.692548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:14.580 [2024-11-02 11:23:14.743376] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:14.580 [2024-11-02 11:23:14.743429] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:14.580 [2024-11-02 11:23:14.743457] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:14.580 [2024-11-02 11:23:14.743468] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:14.580 [2024-11-02 11:23:14.743478] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:14.580 [2024-11-02 11:23:14.744959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:14.580 [2024-11-02 11:23:14.745024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:14.580 [2024-11-02 11:23:14.745089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:14.580 [2024-11-02 11:23:14.745092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.580 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:14.580 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@866 -- # return 0 00:11:14.580 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:14.580 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:14.580 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:14.580 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:14.580 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:14.580 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:14.580 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.580 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:14.580 [2024-11-02 11:23:14.889020] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:14.580 11:23:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.580 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:14.580 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.580 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:14.839 Malloc1 00:11:14.839 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.839 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:14.839 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.839 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:14.839 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.839 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:14.839 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.839 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:14.839 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.839 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:14.839 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.839 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:14.839 [2024-11-02 11:23:15.077321] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:14.839 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.839 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:14.839 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_name=Malloc1 00:11:14.839 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local bdev_info 00:11:14.839 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bs 00:11:14.839 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local nb 00:11:14.839 11:23:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:14.839 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.839 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:14.839 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.839 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:11:14.839 { 00:11:14.839 "name": "Malloc1", 00:11:14.839 "aliases": [ 00:11:14.839 "adba55ae-e26b-491b-aa11-caed83b287d8" 00:11:14.839 ], 00:11:14.839 "product_name": "Malloc disk", 00:11:14.839 "block_size": 512, 00:11:14.839 "num_blocks": 1048576, 00:11:14.839 "uuid": "adba55ae-e26b-491b-aa11-caed83b287d8", 00:11:14.839 "assigned_rate_limits": { 00:11:14.839 "rw_ios_per_sec": 0, 00:11:14.839 "rw_mbytes_per_sec": 0, 00:11:14.839 "r_mbytes_per_sec": 0, 00:11:14.839 "w_mbytes_per_sec": 0 00:11:14.839 }, 00:11:14.839 "claimed": true, 00:11:14.839 "claim_type": "exclusive_write", 00:11:14.839 "zoned": false, 00:11:14.839 "supported_io_types": { 00:11:14.839 "read": true, 00:11:14.839 "write": true, 00:11:14.839 "unmap": true, 00:11:14.839 "flush": true, 00:11:14.839 "reset": true, 00:11:14.839 "nvme_admin": false, 00:11:14.839 "nvme_io": false, 00:11:14.839 "nvme_io_md": false, 00:11:14.839 "write_zeroes": true, 00:11:14.839 "zcopy": true, 00:11:14.839 "get_zone_info": false, 00:11:14.839 "zone_management": false, 00:11:14.839 "zone_append": false, 00:11:14.839 "compare": false, 00:11:14.839 "compare_and_write": false, 00:11:14.839 "abort": true, 00:11:14.839 "seek_hole": false, 00:11:14.839 "seek_data": false, 00:11:14.839 "copy": true, 00:11:14.839 "nvme_iov_md": false 00:11:14.839 }, 00:11:14.839 "memory_domains": [ 00:11:14.839 { 00:11:14.839 "dma_device_id": "system", 00:11:14.839 "dma_device_type": 1 00:11:14.839 }, 00:11:14.839 { 00:11:14.839 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.839 "dma_device_type": 2 00:11:14.839 } 00:11:14.839 ], 00:11:14.839 "driver_specific": {} 00:11:14.839 } 00:11:14.839 ]' 00:11:14.839 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:11:14.839 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # bs=512 00:11:14.839 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:11:14.839 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # nb=1048576 00:11:14.839 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1389 -- # bdev_size=512 00:11:14.839 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1390 -- # echo 512 00:11:14.839 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:14.839 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:15.773 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:15.773 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # local i=0 00:11:15.773 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:11:15.773 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:11:15.773 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # sleep 2 00:11:17.671 11:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:11:17.671 11:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:11:17.671 11:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:11:17.671 11:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:11:17.671 11:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:11:17.671 11:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # return 0 00:11:17.671 11:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:17.671 11:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:17.671 11:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:17.671 11:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:17.671 11:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:17.671 11:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:17.671 11:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:17.671 11:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:17.671 11:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:17.671 11:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:17.671 11:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:17.929 11:23:18 
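On the host side, the entries above amount to connecting to the subsystem, waiting for its namespace to appear (recognised by the SPDKISFASTANDAWESOME serial), checking its size, and laying down a single GPT partition. Roughly as follows; NQNs, address and serial are copied from the trace, the waitforserial polling is simplified to a grep loop, and the /sys/block size line is an assumed equivalent of the sec_size_to_bytes helper:

    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --hostid=5b23e107-7094-e311-b1cb-001e67a97d55
    until lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 2; done
    nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
    echo $(( 512 * $(cat /sys/block/$nvme_name/size) ))   # 536870912, matches the malloc bdev
    mkdir -p /mnt/device
    parted -s /dev/$nvme_name mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe && sleep 1                                  # let the kernel see nvme0n1p1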
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:18.187 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:19.119 11:23:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:19.119 11:23:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:19.119 11:23:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:19.119 11:23:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:19.119 11:23:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:19.385 ************************************ 00:11:19.385 START TEST filesystem_in_capsule_ext4 00:11:19.385 ************************************ 00:11:19.385 11:23:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:19.385 11:23:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:19.385 11:23:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:19.385 11:23:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:19.385 11:23:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local fstype=ext4 00:11:19.385 11:23:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:11:19.385 11:23:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local i=0 00:11:19.385 11:23:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local force 00:11:19.385 11:23:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # '[' ext4 = ext4 ']' 00:11:19.385 11:23:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@934 -- # force=-F 00:11:19.385 11:23:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@939 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:19.385 mke2fs 1.47.0 (5-Feb-2023) 00:11:19.385 Discarding device blocks: 0/522240 done 00:11:19.385 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:19.385 Filesystem UUID: fabe71eb-e098-4b4b-9a43-322343235784 00:11:19.385 Superblock backups stored on blocks: 00:11:19.385 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:19.385 00:11:19.385 Allocating group tables: 0/64 done 00:11:19.385 Writing inode tables: 
0/64 done 00:11:19.385 Creating journal (8192 blocks): done 00:11:19.385 Writing superblocks and filesystem accounting information: 0/64 done 00:11:19.385 00:11:19.385 11:23:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@947 -- # return 0 00:11:19.385 11:23:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:25.948 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:25.948 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:25.948 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:25.948 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:25.948 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:25.948 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:25.948 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 3747698 00:11:25.948 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:25.948 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:25.948 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:25.948 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:25.948 00:11:25.948 real 0m6.308s 00:11:25.948 user 0m0.014s 00:11:25.948 sys 0m0.063s 00:11:25.948 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:25.948 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:25.948 ************************************ 00:11:25.948 END TEST filesystem_in_capsule_ext4 00:11:25.948 ************************************ 00:11:25.948 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:25.948 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:25.948 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:25.948 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:25.948 
************************************ 00:11:25.948 START TEST filesystem_in_capsule_btrfs 00:11:25.948 ************************************ 00:11:25.948 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:25.948 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:25.948 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:25.948 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:25.948 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local fstype=btrfs 00:11:25.948 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:11:25.948 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local i=0 00:11:25.948 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local force 00:11:25.948 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # '[' btrfs = ext4 ']' 00:11:25.948 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@936 -- # force=-f 00:11:25.948 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@939 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:25.948 btrfs-progs v6.8.1 00:11:25.948 See https://btrfs.readthedocs.io for more information. 00:11:25.948 00:11:25.948 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:25.948 NOTE: several default settings have changed in version 5.15, please make sure 00:11:25.948 this does not affect your deployments: 00:11:25.948 - DUP for metadata (-m dup) 00:11:25.948 - enabled no-holes (-O no-holes) 00:11:25.948 - enabled free-space-tree (-R free-space-tree) 00:11:25.948 00:11:25.948 Label: (null) 00:11:25.948 UUID: 9e554908-3111-4f79-bb6a-591348dc63b2 00:11:25.948 Node size: 16384 00:11:25.948 Sector size: 4096 (CPU page size: 4096) 00:11:25.948 Filesystem size: 510.00MiB 00:11:25.948 Block group profiles: 00:11:25.948 Data: single 8.00MiB 00:11:25.948 Metadata: DUP 32.00MiB 00:11:25.948 System: DUP 8.00MiB 00:11:25.948 SSD detected: yes 00:11:25.948 Zoned device: no 00:11:25.948 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:25.948 Checksum: crc32c 00:11:25.948 Number of devices: 1 00:11:25.948 Devices: 00:11:25.948 ID SIZE PATH 00:11:25.948 1 510.00MiB /dev/nvme0n1p1 00:11:25.948 00:11:25.948 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@947 -- # return 0 00:11:25.948 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:26.883 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:26.883 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:26.883 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:26.883 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:26.883 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:26.883 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:26.883 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3747698 00:11:26.883 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:26.883 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:26.883 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:26.883 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:26.883 00:11:26.883 real 0m1.113s 00:11:26.883 user 0m0.013s 00:11:26.883 sys 0m0.103s 00:11:26.883 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:26.883 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:11:26.883 ************************************ 00:11:26.883 END TEST filesystem_in_capsule_btrfs 00:11:26.883 ************************************ 00:11:26.883 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:26.883 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:26.883 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:26.883 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:26.883 ************************************ 00:11:26.883 START TEST filesystem_in_capsule_xfs 00:11:26.883 ************************************ 00:11:26.883 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create xfs nvme0n1 00:11:26.883 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:26.883 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:26.883 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:26.883 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local fstype=xfs 00:11:26.883 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:11:26.883 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local i=0 00:11:26.883 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local force 00:11:26.883 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # '[' xfs = ext4 ']' 00:11:26.883 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@936 -- # force=-f 00:11:26.883 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@939 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:26.883 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:26.883 = sectsz=512 attr=2, projid32bit=1 00:11:26.883 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:26.883 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:26.883 data = bsize=4096 blocks=130560, imaxpct=25 00:11:26.883 = sunit=0 swidth=0 blks 00:11:26.883 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:26.883 log =internal log bsize=4096 blocks=16384, version=2 00:11:26.883 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:26.883 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:27.448 Discarding blocks...Done. 
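The mkfs.xfs geometry printed above is consistent with the mkfs.ext4 and mkfs.btrfs runs earlier in this pass and with the 512 MiB malloc bdev behind the namespace; a quick arithmetic check:

    echo $(( 1048576 * 512 ))    # malloc bdev:  536870912 B = 512 MiB
    echo $(( 130560 * 4096 ))    # xfs:  130560 x 4 KiB blocks = 534773760 B = 510 MiB
    echo $(( 522240 * 1024 ))    # ext4: 522240 x 1 KiB blocks = 534773760 B = 510 MiB
    # btrfs reports the same 510.00MiB directly; the ~2 MiB gap to the bdev is
    # presumably GPT metadata plus partition alignment.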
00:11:27.448 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@947 -- # return 0 00:11:27.448 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:29.404 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:29.404 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:29.404 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:29.404 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:29.404 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:29.404 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:29.404 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3747698 00:11:29.404 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:29.404 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:29.404 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:29.404 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:29.404 00:11:29.404 real 0m2.660s 00:11:29.404 user 0m0.014s 00:11:29.404 sys 0m0.060s 00:11:29.404 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:29.404 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:29.404 ************************************ 00:11:29.404 END TEST filesystem_in_capsule_xfs 00:11:29.404 ************************************ 00:11:29.404 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:29.404 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:29.404 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:29.673 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:29.673 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:29.673 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1221 -- # local i=0 00:11:29.673 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:11:29.673 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:29.673 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:11:29.673 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:29.673 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1233 -- # return 0 00:11:29.673 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:29.673 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.673 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:29.673 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.673 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:29.673 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3747698 00:11:29.673 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # '[' -z 3747698 ']' 00:11:29.673 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # kill -0 3747698 00:11:29.673 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # uname 00:11:29.673 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:29.673 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3747698 00:11:29.673 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:29.673 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:29.673 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3747698' 00:11:29.673 killing process with pid 3747698 00:11:29.673 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@971 -- # kill 3747698 00:11:29.673 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@976 -- # wait 3747698 00:11:29.936 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:29.936 00:11:29.936 real 0m15.741s 00:11:29.936 user 1m0.955s 00:11:29.936 sys 0m1.981s 00:11:29.936 11:23:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:29.936 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:29.936 ************************************ 00:11:29.936 END TEST nvmf_filesystem_in_capsule 00:11:29.936 ************************************ 00:11:29.936 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:29.936 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:29.936 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:29.936 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:29.936 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:29.936 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:29.936 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:29.936 rmmod nvme_tcp 00:11:30.195 rmmod nvme_fabrics 00:11:30.195 rmmod nvme_keyring 00:11:30.195 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:30.195 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:30.195 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:30.195 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:30.195 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:30.195 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:30.195 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:30.195 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:11:30.195 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:11:30.195 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:11:30.195 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:30.195 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:30.195 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:30.195 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:30.195 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:30.195 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:32.097 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:32.097 00:11:32.097 real 0m39.974s 00:11:32.097 user 2m16.878s 00:11:32.097 sys 0m6.144s 00:11:32.097 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:32.097 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:32.097 
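Both passes tear down the same way, as traced above: drop the test partition, disconnect the host, delete the subsystem, stop the target, then let nvmftestfini unload the initiator modules and clean up the test network configuration. Condensed sketch (pid, NQN and interface names are this run's; the namespace removal line is an assumed reading of _remove_spdk_ns):

    flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1        # filesystem.sh@91
    sync
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill 3747698 && wait 3747698                          # killprocess on this pass's nvmf_tgt
    modprobe -v -r nvme-tcp                               # the rmmod lines above show nvme_tcp,
    modprobe -v -r nvme-fabrics                           # nvme_fabrics and nvme_keyring going away
    iptables-save | grep -v SPDK_NVMF | iptables-restore  # strip SPDK's NVMF rules
    ip netns delete cvl_0_0_ns_spdk                       # assumed effect of _remove_spdk_ns
    ip -4 addr flush cvl_0_1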
************************************ 00:11:32.097 END TEST nvmf_filesystem 00:11:32.097 ************************************ 00:11:32.097 11:23:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:32.097 11:23:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:32.097 11:23:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:32.097 11:23:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:32.097 ************************************ 00:11:32.097 START TEST nvmf_target_discovery 00:11:32.097 ************************************ 00:11:32.097 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:32.356 * Looking for test storage... 00:11:32.356 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:32.356 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:32.356 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:11:32.356 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:32.356 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:32.356 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:32.356 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:32.356 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:32.356 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:32.356 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:32.356 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:32.356 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:32.356 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:32.356 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:32.356 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:32.356 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:32.356 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:32.356 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:32.356 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:32.356 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:32.356 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:32.356 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:32.356 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:32.356 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:32.356 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:32.356 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:32.356 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:32.356 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:32.356 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:32.356 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:32.356 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:32.356 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:32.356 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:32.356 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:32.356 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:32.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.356 --rc genhtml_branch_coverage=1 00:11:32.356 --rc genhtml_function_coverage=1 00:11:32.356 --rc genhtml_legend=1 00:11:32.356 --rc geninfo_all_blocks=1 00:11:32.356 --rc geninfo_unexecuted_blocks=1 00:11:32.356 00:11:32.356 ' 00:11:32.356 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:32.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.356 --rc genhtml_branch_coverage=1 00:11:32.357 --rc genhtml_function_coverage=1 00:11:32.357 --rc genhtml_legend=1 00:11:32.357 --rc geninfo_all_blocks=1 00:11:32.357 --rc geninfo_unexecuted_blocks=1 00:11:32.357 00:11:32.357 ' 00:11:32.357 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:32.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.357 --rc genhtml_branch_coverage=1 00:11:32.357 --rc genhtml_function_coverage=1 00:11:32.357 --rc genhtml_legend=1 00:11:32.357 --rc geninfo_all_blocks=1 00:11:32.357 --rc geninfo_unexecuted_blocks=1 00:11:32.357 00:11:32.357 ' 00:11:32.357 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:32.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.357 --rc genhtml_branch_coverage=1 00:11:32.357 --rc genhtml_function_coverage=1 00:11:32.357 --rc genhtml_legend=1 00:11:32.357 --rc geninfo_all_blocks=1 00:11:32.357 --rc geninfo_unexecuted_blocks=1 00:11:32.357 00:11:32.357 ' 00:11:32.357 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:32.357 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:32.357 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:32.357 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:32.357 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:32.357 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:32.357 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:32.357 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:32.357 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:32.357 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:32.357 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:32.357 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:32.357 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:32.357 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:32.357 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:32.357 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:32.357 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:32.357 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:32.357 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:32.357 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:32.357 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:32.357 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:32.357 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:32.357 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.357 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.357 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.357 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:32.357 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.357 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:32.357 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:32.357 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:32.357 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:32.357 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:32.357 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:32.357 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:32.357 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:32.357 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:32.357 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:32.357 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:32.357 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:32.357 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:32.357 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:32.357 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:32.357 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:32.357 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:32.357 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:32.357 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:32.357 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:32.357 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:32.357 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:32.357 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:32.357 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:32.357 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:32.357 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:32.357 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:11:32.357 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.261 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:34.261 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:11:34.261 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:34.261 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:34.261 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:34.261 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:34.261 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:34.261 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:11:34.261 11:23:34 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:34.261 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:11:34.261 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:11:34.261 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:11:34.261 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:11:34.261 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:11:34.261 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:11:34.261 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:34.261 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:34.261 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:34.261 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:34.261 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:34.261 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:34.261 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:34.261 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:34.261 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:34.261 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:34.261 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:34.261 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:34.261 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:34.261 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:34.261 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:34.261 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:34.261 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:34.261 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:34.261 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:34.261 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:34.261 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:34.261 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:34.261 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:34.261 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:34.261 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:34.261 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:34.261 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:34.261 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:34.261 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:34.261 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:34.261 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:34.261 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:34.261 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:34.261 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:34.261 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:34.261 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:34.261 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:34.261 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:34.261 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:34.261 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:34.261 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:34.261 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:34.261 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:34.521 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:34.521 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:34.521 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:34.521 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:34.521 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:34.521 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:34.521 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:34.521 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:11:34.521 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:34.521 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:34.521 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:34.521 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:34.521 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:34.521 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:34.521 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:34.521 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:11:34.521 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:34.521 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:34.521 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:34.521 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:34.521 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:34.521 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:34.521 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:34.521 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:34.521 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:34.521 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:34.521 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:34.521 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:34.521 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:34.521 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:34.521 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:34.521 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:34.521 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:34.521 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:34.521 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:34.521 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:34.521 11:23:34 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:34.521 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:34.521 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:34.521 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:34.521 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:34.521 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:34.521 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:34.521 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:11:34.521 00:11:34.521 --- 10.0.0.2 ping statistics --- 00:11:34.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:34.521 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:11:34.521 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:34.521 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:34.521 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:11:34.521 00:11:34.521 --- 10.0.0.1 ping statistics --- 00:11:34.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:34.521 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:11:34.521 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:34.521 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:11:34.521 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:34.521 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:34.521 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:34.521 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:34.521 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:34.521 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:34.521 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:34.521 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:34.521 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:34.521 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:34.521 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.521 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=3751715 00:11:34.521 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:34.521 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 3751715 00:11:34.521 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@833 -- # '[' -z 3751715 ']' 00:11:34.521 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:34.521 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:34.521 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:34.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:34.521 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:34.521 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.521 [2024-11-02 11:23:34.897510] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:11:34.521 [2024-11-02 11:23:34.897613] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:34.780 [2024-11-02 11:23:34.979467] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:34.780 [2024-11-02 11:23:35.029962] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:34.780 [2024-11-02 11:23:35.030021] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:34.780 [2024-11-02 11:23:35.030043] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:34.780 [2024-11-02 11:23:35.030062] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:34.780 [2024-11-02 11:23:35.030076] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:34.780 [2024-11-02 11:23:35.031741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:34.780 [2024-11-02 11:23:35.031807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:34.780 [2024-11-02 11:23:35.031831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:34.780 [2024-11-02 11:23:35.031834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.780 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:34.780 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@866 -- # return 0 00:11:34.780 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:34.780 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:34.780 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.780 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:34.780 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:34.780 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.780 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.780 [2024-11-02 11:23:35.173693] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:34.780 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.038 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:35.038 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:35.038 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:35.038 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.038 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.038 Null1 00:11:35.038 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.038 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:35.038 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.038 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.038 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.038 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:35.038 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.038 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.038 11:23:35 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.038 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:35.038 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.038 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.038 [2024-11-02 11:23:35.214008] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:35.038 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.038 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:35.038 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:35.038 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.038 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.038 Null2 00:11:35.038 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.038 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:35.038 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.038 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.038 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.038 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:35.038 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.038 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.038 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.038 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:35.038 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.038 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.038 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.038 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:35.038 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:35.038 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.038 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:11:35.038 Null3 00:11:35.038 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.038 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:11:35.038 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.038 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.038 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.038 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:35.038 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.038 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.038 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.038 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:35.038 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.038 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.038 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.038 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:35.038 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:35.038 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.038 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.038 Null4 00:11:35.038 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.038 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:35.038 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.038 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.038 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.038 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:35.038 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.038 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.038 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.038 11:23:35 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:35.038 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.038 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.038 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.038 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:35.038 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.038 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.038 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.038 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:35.038 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.038 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.038 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.038 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:11:35.296 00:11:35.297 Discovery Log Number of Records 6, Generation counter 6 00:11:35.297 =====Discovery Log Entry 0====== 00:11:35.297 trtype: tcp 00:11:35.297 adrfam: ipv4 00:11:35.297 subtype: current discovery subsystem 00:11:35.297 treq: not required 00:11:35.297 portid: 0 00:11:35.297 trsvcid: 4420 00:11:35.297 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:35.297 traddr: 10.0.0.2 00:11:35.297 eflags: explicit discovery connections, duplicate discovery information 00:11:35.297 sectype: none 00:11:35.297 =====Discovery Log Entry 1====== 00:11:35.297 trtype: tcp 00:11:35.297 adrfam: ipv4 00:11:35.297 subtype: nvme subsystem 00:11:35.297 treq: not required 00:11:35.297 portid: 0 00:11:35.297 trsvcid: 4420 00:11:35.297 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:35.297 traddr: 10.0.0.2 00:11:35.297 eflags: none 00:11:35.297 sectype: none 00:11:35.297 =====Discovery Log Entry 2====== 00:11:35.297 trtype: tcp 00:11:35.297 adrfam: ipv4 00:11:35.297 subtype: nvme subsystem 00:11:35.297 treq: not required 00:11:35.297 portid: 0 00:11:35.297 trsvcid: 4420 00:11:35.297 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:35.297 traddr: 10.0.0.2 00:11:35.297 eflags: none 00:11:35.297 sectype: none 00:11:35.297 =====Discovery Log Entry 3====== 00:11:35.297 trtype: tcp 00:11:35.297 adrfam: ipv4 00:11:35.297 subtype: nvme subsystem 00:11:35.297 treq: not required 00:11:35.297 portid: 0 00:11:35.297 trsvcid: 4420 00:11:35.297 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:35.297 traddr: 10.0.0.2 00:11:35.297 eflags: none 00:11:35.297 sectype: none 00:11:35.297 =====Discovery Log Entry 4====== 00:11:35.297 trtype: tcp 00:11:35.297 adrfam: ipv4 00:11:35.297 subtype: nvme subsystem 
00:11:35.297 treq: not required 00:11:35.297 portid: 0 00:11:35.297 trsvcid: 4420 00:11:35.297 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:35.297 traddr: 10.0.0.2 00:11:35.297 eflags: none 00:11:35.297 sectype: none 00:11:35.297 =====Discovery Log Entry 5====== 00:11:35.297 trtype: tcp 00:11:35.297 adrfam: ipv4 00:11:35.297 subtype: discovery subsystem referral 00:11:35.297 treq: not required 00:11:35.297 portid: 0 00:11:35.297 trsvcid: 4430 00:11:35.297 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:35.297 traddr: 10.0.0.2 00:11:35.297 eflags: none 00:11:35.297 sectype: none 00:11:35.297 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:35.297 Perform nvmf subsystem discovery via RPC 00:11:35.297 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:35.297 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.297 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.297 [ 00:11:35.297 { 00:11:35.297 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:35.297 "subtype": "Discovery", 00:11:35.297 "listen_addresses": [ 00:11:35.297 { 00:11:35.297 "trtype": "TCP", 00:11:35.297 "adrfam": "IPv4", 00:11:35.297 "traddr": "10.0.0.2", 00:11:35.297 "trsvcid": "4420" 00:11:35.297 } 00:11:35.297 ], 00:11:35.297 "allow_any_host": true, 00:11:35.297 "hosts": [] 00:11:35.297 }, 00:11:35.297 { 00:11:35.297 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:35.297 "subtype": "NVMe", 00:11:35.297 "listen_addresses": [ 00:11:35.297 { 00:11:35.297 "trtype": "TCP", 00:11:35.297 "adrfam": "IPv4", 00:11:35.297 "traddr": "10.0.0.2", 00:11:35.297 "trsvcid": "4420" 00:11:35.297 } 00:11:35.297 ], 00:11:35.297 "allow_any_host": true, 00:11:35.297 "hosts": [], 00:11:35.297 "serial_number": "SPDK00000000000001", 00:11:35.297 "model_number": "SPDK bdev Controller", 00:11:35.297 "max_namespaces": 32, 00:11:35.297 "min_cntlid": 1, 00:11:35.297 "max_cntlid": 65519, 00:11:35.297 "namespaces": [ 00:11:35.297 { 00:11:35.297 "nsid": 1, 00:11:35.297 "bdev_name": "Null1", 00:11:35.297 "name": "Null1", 00:11:35.297 "nguid": "BECABDE150C745ECA281338160D09842", 00:11:35.297 "uuid": "becabde1-50c7-45ec-a281-338160d09842" 00:11:35.297 } 00:11:35.297 ] 00:11:35.297 }, 00:11:35.297 { 00:11:35.297 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:35.297 "subtype": "NVMe", 00:11:35.297 "listen_addresses": [ 00:11:35.297 { 00:11:35.297 "trtype": "TCP", 00:11:35.297 "adrfam": "IPv4", 00:11:35.297 "traddr": "10.0.0.2", 00:11:35.297 "trsvcid": "4420" 00:11:35.297 } 00:11:35.297 ], 00:11:35.297 "allow_any_host": true, 00:11:35.297 "hosts": [], 00:11:35.297 "serial_number": "SPDK00000000000002", 00:11:35.297 "model_number": "SPDK bdev Controller", 00:11:35.297 "max_namespaces": 32, 00:11:35.297 "min_cntlid": 1, 00:11:35.297 "max_cntlid": 65519, 00:11:35.297 "namespaces": [ 00:11:35.297 { 00:11:35.297 "nsid": 1, 00:11:35.297 "bdev_name": "Null2", 00:11:35.297 "name": "Null2", 00:11:35.297 "nguid": "1233B7E108FB4105BCD8B5DAD1602D03", 00:11:35.297 "uuid": "1233b7e1-08fb-4105-bcd8-b5dad1602d03" 00:11:35.297 } 00:11:35.297 ] 00:11:35.297 }, 00:11:35.297 { 00:11:35.297 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:35.297 "subtype": "NVMe", 00:11:35.297 "listen_addresses": [ 00:11:35.297 { 00:11:35.297 "trtype": "TCP", 00:11:35.297 "adrfam": "IPv4", 00:11:35.297 "traddr": "10.0.0.2", 
00:11:35.297 "trsvcid": "4420" 00:11:35.297 } 00:11:35.297 ], 00:11:35.297 "allow_any_host": true, 00:11:35.297 "hosts": [], 00:11:35.297 "serial_number": "SPDK00000000000003", 00:11:35.297 "model_number": "SPDK bdev Controller", 00:11:35.297 "max_namespaces": 32, 00:11:35.297 "min_cntlid": 1, 00:11:35.297 "max_cntlid": 65519, 00:11:35.297 "namespaces": [ 00:11:35.297 { 00:11:35.297 "nsid": 1, 00:11:35.297 "bdev_name": "Null3", 00:11:35.297 "name": "Null3", 00:11:35.297 "nguid": "D8C225EFBC8F45ACA771AB1C04D0C854", 00:11:35.297 "uuid": "d8c225ef-bc8f-45ac-a771-ab1c04d0c854" 00:11:35.297 } 00:11:35.297 ] 00:11:35.297 }, 00:11:35.297 { 00:11:35.297 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:35.297 "subtype": "NVMe", 00:11:35.297 "listen_addresses": [ 00:11:35.297 { 00:11:35.297 "trtype": "TCP", 00:11:35.297 "adrfam": "IPv4", 00:11:35.297 "traddr": "10.0.0.2", 00:11:35.297 "trsvcid": "4420" 00:11:35.297 } 00:11:35.297 ], 00:11:35.297 "allow_any_host": true, 00:11:35.297 "hosts": [], 00:11:35.297 "serial_number": "SPDK00000000000004", 00:11:35.297 "model_number": "SPDK bdev Controller", 00:11:35.297 "max_namespaces": 32, 00:11:35.297 "min_cntlid": 1, 00:11:35.297 "max_cntlid": 65519, 00:11:35.297 "namespaces": [ 00:11:35.297 { 00:11:35.297 "nsid": 1, 00:11:35.297 "bdev_name": "Null4", 00:11:35.297 "name": "Null4", 00:11:35.297 "nguid": "71BA0D1F1DA94963BC4774D941B525EB", 00:11:35.297 "uuid": "71ba0d1f-1da9-4963-bc47-74d941b525eb" 00:11:35.297 } 00:11:35.297 ] 00:11:35.297 } 00:11:35.297 ] 00:11:35.297 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.297 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:35.297 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:35.297 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:35.297 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.297 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.297 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.297 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:35.297 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.297 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.297 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.297 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:35.297 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:35.297 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.297 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.297 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.297 11:23:35 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:11:35.297 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.297 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.297 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.297 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:35.297 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:35.297 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.297 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.297 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.297 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:35.298 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.298 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.298 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.298 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:35.298 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:35.298 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.298 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.298 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.298 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:35.298 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.298 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.298 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.298 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:35.298 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.298 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.298 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.298 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:35.298 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:35.298 11:23:35 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.298 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.298 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.298 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:35.298 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:11:35.298 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:35.298 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:35.298 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:35.298 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:11:35.298 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:35.298 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:11:35.298 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:35.298 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:35.298 rmmod nvme_tcp 00:11:35.298 rmmod nvme_fabrics 00:11:35.298 rmmod nvme_keyring 00:11:35.298 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:35.298 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:11:35.298 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:11:35.298 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 3751715 ']' 00:11:35.298 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 3751715 00:11:35.298 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@952 -- # '[' -z 3751715 ']' 00:11:35.298 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # kill -0 3751715 00:11:35.298 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # uname 00:11:35.298 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:35.298 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3751715 00:11:35.557 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:35.557 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:35.557 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3751715' 00:11:35.557 killing process with pid 3751715 00:11:35.557 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@971 -- # kill 3751715 00:11:35.557 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@976 -- # wait 3751715 00:11:35.557 11:23:35 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:35.557 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:35.557 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:35.557 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:11:35.557 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:11:35.557 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:35.557 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:11:35.816 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:35.816 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:35.816 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:35.816 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:35.816 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:37.718 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:37.718 00:11:37.718 real 0m5.513s 00:11:37.718 user 0m4.626s 00:11:37.718 sys 0m1.866s 00:11:37.718 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:37.718 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:37.718 ************************************ 00:11:37.718 END TEST nvmf_target_discovery 00:11:37.718 ************************************ 00:11:37.718 11:23:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:37.718 11:23:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:37.718 11:23:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:37.718 11:23:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:37.718 ************************************ 00:11:37.718 START TEST nvmf_referrals 00:11:37.718 ************************************ 00:11:37.718 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:37.718 * Looking for test storage... 
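The teardown traced above drives SPDK's JSON-RPC interface through the harness's rpc_cmd wrapper: each of the four subsystems is deleted together with its null bdev, the discovery referral on port 4430 is removed, and bdev_get_bdevs is checked to confirm nothing is left. A condensed standalone sketch of that same cleanup follows; the scripts/rpc.py client path and the subsystem/bdev names are taken from this run and may differ in other setups.

    #!/usr/bin/env bash
    # Sketch of the discovery.sh teardown above, assuming an SPDK checkout as the
    # working directory and the stock scripts/rpc.py JSON-RPC client.
    set -euo pipefail
    rpc=./scripts/rpc.py

    for i in $(seq 1 4); do
        # Remove the subsystem first, then the null bdev it exposed as namespace 1.
        "$rpc" nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
        "$rpc" bdev_null_delete "Null${i}"
    done

    # Drop the discovery referral advertised during the test.
    "$rpc" nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430

    # Verify no bdevs remain; an empty list means the cleanup succeeded.
    remaining=$("$rpc" bdev_get_bdevs | jq -r '.[].name')
    [ -z "$remaining" ] || echo "unexpected bdevs remain: $remaining" >&2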
00:11:37.718 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:37.718 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:37.718 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lcov --version 00:11:37.718 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:37.977 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:37.977 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:37.977 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:37.977 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:37.977 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:11:37.977 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:11:37.977 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:11:37.977 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:11:37.977 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:11:37.977 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:11:37.977 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:11:37.977 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:37.977 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:11:37.977 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:11:37.977 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:37.977 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:37.977 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:11:37.977 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:11:37.977 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:37.977 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:11:37.977 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:11:37.977 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:11:37.977 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:11:37.977 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:37.977 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:11:37.977 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:11:37.977 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:37.977 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:37.977 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:11:37.977 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:37.977 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:37.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:37.977 --rc genhtml_branch_coverage=1 00:11:37.977 --rc genhtml_function_coverage=1 00:11:37.977 --rc genhtml_legend=1 00:11:37.977 --rc geninfo_all_blocks=1 00:11:37.977 --rc geninfo_unexecuted_blocks=1 00:11:37.977 00:11:37.977 ' 00:11:37.977 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:37.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:37.977 --rc genhtml_branch_coverage=1 00:11:37.977 --rc genhtml_function_coverage=1 00:11:37.977 --rc genhtml_legend=1 00:11:37.977 --rc geninfo_all_blocks=1 00:11:37.977 --rc geninfo_unexecuted_blocks=1 00:11:37.977 00:11:37.977 ' 00:11:37.977 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:37.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:37.977 --rc genhtml_branch_coverage=1 00:11:37.977 --rc genhtml_function_coverage=1 00:11:37.977 --rc genhtml_legend=1 00:11:37.977 --rc geninfo_all_blocks=1 00:11:37.977 --rc geninfo_unexecuted_blocks=1 00:11:37.977 00:11:37.977 ' 00:11:37.977 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:37.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:37.977 --rc genhtml_branch_coverage=1 00:11:37.977 --rc genhtml_function_coverage=1 00:11:37.977 --rc genhtml_legend=1 00:11:37.977 --rc geninfo_all_blocks=1 00:11:37.977 --rc geninfo_unexecuted_blocks=1 00:11:37.977 00:11:37.977 ' 00:11:37.977 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:37.977 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:11:37.977 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:37.977 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:37.977 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:37.977 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:37.977 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:37.977 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:37.977 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:37.977 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:37.977 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:37.977 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:37.977 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:37.977 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:37.977 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:37.977 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:37.977 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:37.977 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:37.977 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:37.977 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:11:37.977 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:37.977 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:37.977 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:37.977 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.977 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.977 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.977 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:37.977 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.977 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:11:37.977 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:37.977 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:37.978 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:37.978 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:37.978 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:37.978 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:37.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:37.978 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:37.978 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:37.978 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:37.978 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:37.978 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
00:11:37.978 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:37.978 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:37.978 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:37.978 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:37.978 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:37.978 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:37.978 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:37.978 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:37.978 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:37.978 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:37.978 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:37.978 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:37.978 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:37.978 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:37.978 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:37.978 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:11:37.978 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:39.881 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:39.881 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:11:39.881 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:39.881 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:39.881 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:39.881 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:39.881 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:39.881 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:11:39.881 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:39.881 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:11:39.881 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:11:39.881 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:11:39.881 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:11:39.881 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:11:39.881 11:23:40 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:11:39.881 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:39.881 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:39.881 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:39.881 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:39.881 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:39.881 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:39.881 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:39.881 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:39.881 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:39.881 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:39.881 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:39.881 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:39.881 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:39.881 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:39.881 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:39.881 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:39.881 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:39.881 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:39.881 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:39.881 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:39.881 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:39.881 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:39.881 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:39.881 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:39.881 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:39.881 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:39.881 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:39.881 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:39.881 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:39.881 
11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:39.881 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:39.881 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:39.881 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:39.881 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:39.881 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:39.881 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:39.881 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:39.881 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:39.881 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:39.881 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:39.881 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:39.881 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:39.881 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:39.881 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:39.881 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:39.881 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:39.881 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:39.881 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:39.881 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:39.881 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:39.881 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:39.881 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:39.881 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:39.881 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:39.881 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:39.881 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:39.881 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:39.881 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:39.881 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:11:39.881 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:39.881 11:23:40 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:39.881 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:39.881 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:39.881 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:39.881 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:39.881 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:39.881 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:39.881 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:39.881 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:39.881 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:39.881 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:39.881 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:39.881 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:39.881 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:39.882 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:39.882 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:39.882 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:39.882 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:39.882 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:39.882 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:39.882 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:39.882 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:39.882 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:39.882 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:39.882 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:39.882 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:39.882 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.279 ms 00:11:39.882 00:11:39.882 --- 10.0.0.2 ping statistics --- 00:11:39.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:39.882 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:11:39.882 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:39.882 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:39.882 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 00:11:39.882 00:11:39.882 --- 10.0.0.1 ping statistics --- 00:11:39.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:39.882 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:11:39.882 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:39.882 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:11:39.882 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:39.882 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:39.882 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:39.882 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:39.882 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:39.882 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:39.882 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:39.882 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:39.882 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:39.882 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:39.882 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:40.140 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=3753810 00:11:40.140 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:40.140 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 3753810 00:11:40.140 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@833 -- # '[' -z 3753810 ']' 00:11:40.140 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:40.140 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:40.140 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:40.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
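The nvmftestinit trace above builds the two-port TCP test bed before the referrals test starts: the target-side port cvl_0_0 is moved into a network namespace, both sides get addresses on 10.0.0.0/24, NVMe/TCP traffic is allowed in, reachability is ping-checked, and nvmf_tgt is launched inside the namespace. A condensed sketch of that wiring follows; the interface names, addresses, namespace name and nvmf_tgt flags are the ones from this run, and the build/bin path assumes an in-tree SPDK build.

    # Sketch of the nvmftestinit network setup traced above.
    NS=cvl_0_0_ns_spdk

    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                    # target-side port lives in the namespace

    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0

    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up

    # Let NVMe/TCP traffic in (the harness additionally tags the rule with an
    # SPDK_NVMF comment so it can be cleaned up later).
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # Reachability check in both directions before starting the target.
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1

    # Start the SPDK NVMe-oF target inside the namespace with the flags used here.
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &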
00:11:40.140 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:40.140 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:40.140 [2024-11-02 11:23:40.334229] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:11:40.140 [2024-11-02 11:23:40.334329] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:40.140 [2024-11-02 11:23:40.408874] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:40.140 [2024-11-02 11:23:40.455906] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:40.140 [2024-11-02 11:23:40.455960] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:40.140 [2024-11-02 11:23:40.455982] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:40.140 [2024-11-02 11:23:40.455998] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:40.140 [2024-11-02 11:23:40.456014] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:40.140 [2024-11-02 11:23:40.457695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:40.140 [2024-11-02 11:23:40.457745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:40.140 [2024-11-02 11:23:40.457802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:40.140 [2024-11-02 11:23:40.457805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.398 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:40.398 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@866 -- # return 0 00:11:40.398 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:40.398 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:40.398 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:40.398 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:40.398 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:40.398 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.398 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:40.398 [2024-11-02 11:23:40.608775] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:40.398 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.398 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:40.398 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.398 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
00:11:40.398 [2024-11-02 11:23:40.621018] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:40.398 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.399 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:40.399 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.399 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:40.399 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.399 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:40.399 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.399 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:40.399 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.399 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:40.399 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.399 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:40.399 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.399 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:40.399 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:40.399 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.399 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:40.399 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.399 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:40.399 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:40.399 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:40.399 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:40.399 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.399 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:40.399 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:40.399 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:40.399 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.399 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:40.399 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:40.399 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:40.399 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:40.399 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:40.399 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:40.399 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:40.399 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:40.657 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:40.657 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:40.657 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:40.657 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.657 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:40.657 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.657 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:40.657 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.657 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:40.657 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.657 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:40.657 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.657 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:40.657 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.657 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:40.657 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:40.657 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.657 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:40.657 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.657 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:40.657 11:23:41 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:40.657 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:40.657 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:40.657 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:40.657 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:40.657 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:40.915 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:40.915 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:40.915 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:40.915 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.915 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:40.915 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.915 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:40.915 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.915 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:40.915 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.915 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:40.915 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:40.915 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:40.915 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.915 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:40.915 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:40.915 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:40.915 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.915 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:40.915 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:40.915 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:40.915 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:11:40.915 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:40.915 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:40.915 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:40.915 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:41.172 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:41.172 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:41.172 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:41.172 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:41.172 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:41.172 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:41.172 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:41.430 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:41.430 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:41.430 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:41.430 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:41.430 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:41.430 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:41.430 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:41.430 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:41.430 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.430 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:41.687 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.687 11:23:41 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:41.687 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:41.687 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:41.687 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.687 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:41.687 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:41.687 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:41.687 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.687 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:41.687 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:41.687 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:41.687 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:41.687 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:41.687 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:41.687 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:41.687 11:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:41.687 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:41.687 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:41.687 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:41.687 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:41.687 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:41.687 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:41.688 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:41.945 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:41.945 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:41.945 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:41.945 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:11:41.945 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:41.945 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:42.203 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:42.203 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:42.203 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.203 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:42.203 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.203 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:42.203 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:42.203 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.203 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:42.203 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.203 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:42.203 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:42.203 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:42.203 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:42.203 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:42.203 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:42.203 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:42.461 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:42.461 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:42.461 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:42.461 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:42.461 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:42.461 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:11:42.461 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
00:11:42.461 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:11:42.461 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:42.461 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:42.461 rmmod nvme_tcp 00:11:42.461 rmmod nvme_fabrics 00:11:42.461 rmmod nvme_keyring 00:11:42.461 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:42.461 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:11:42.461 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:11:42.461 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 3753810 ']' 00:11:42.461 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 3753810 00:11:42.461 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@952 -- # '[' -z 3753810 ']' 00:11:42.461 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # kill -0 3753810 00:11:42.461 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # uname 00:11:42.462 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:42.462 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3753810 00:11:42.462 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:42.462 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:42.462 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3753810' 00:11:42.462 killing process with pid 3753810 00:11:42.462 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@971 -- # kill 3753810 00:11:42.462 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@976 -- # wait 3753810 00:11:42.720 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:42.720 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:42.720 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:42.720 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:11:42.720 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:11:42.720 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:42.720 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:11:42.720 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:42.720 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:42.720 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:42.720 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:42.720 11:23:42 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:44.622 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:44.622 00:11:44.622 real 0m6.957s 00:11:44.622 user 0m11.454s 00:11:44.622 sys 0m2.197s 00:11:44.622 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:44.622 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:44.622 ************************************ 00:11:44.622 END TEST nvmf_referrals 00:11:44.622 ************************************ 00:11:44.881 11:23:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:44.881 11:23:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:44.881 11:23:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:44.881 11:23:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:44.881 ************************************ 00:11:44.881 START TEST nvmf_connect_disconnect 00:11:44.881 ************************************ 00:11:44.881 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:44.881 * Looking for test storage... 00:11:44.881 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:44.881 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:44.881 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:11:44.881 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:44.881 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:44.881 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:44.881 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:44.881 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:44.881 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:11:44.881 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:11:44.881 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:11:44.881 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:11:44.881 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:11:44.881 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:11:44.881 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:11:44.881 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:44.881 11:23:45 
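The nvmf_referrals run that finished above checks referral handling from both ends: the target's own listing (rpc_cmd nvmf_discovery_get_referrals) and what an initiator sees on the discovery service at 10.0.0.2:8009, expecting the two sorted address lists to match after every add/remove. A minimal standalone sketch of that check, assuming SPDK's scripts/rpc.py and nvme-cli are on PATH (rpc_cmd in the log is the harness wrapper around rpc.py) and reusing the host NQN/ID and addresses from this run:

# Target-side view: traddr of every configured referral, sorted.
scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort

# Host-side view: discovery log page entries other than the current discovery subsystem.
nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
  --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
  --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 \
  | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

# After removing a referral, both views should shrink in step.
scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1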
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:11:44.881 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:11:44.881 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:44.881 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:44.881 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:11:44.881 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:11:44.881 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:44.881 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:11:44.881 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:11:44.881 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:11:44.881 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:11:44.881 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:44.881 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:11:44.881 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:11:44.881 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:44.881 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:44.881 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:11:44.881 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:44.881 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:44.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.881 --rc genhtml_branch_coverage=1 00:11:44.881 --rc genhtml_function_coverage=1 00:11:44.881 --rc genhtml_legend=1 00:11:44.881 --rc geninfo_all_blocks=1 00:11:44.881 --rc geninfo_unexecuted_blocks=1 00:11:44.881 00:11:44.881 ' 00:11:44.881 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:44.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.881 --rc genhtml_branch_coverage=1 00:11:44.881 --rc genhtml_function_coverage=1 00:11:44.881 --rc genhtml_legend=1 00:11:44.881 --rc geninfo_all_blocks=1 00:11:44.881 --rc geninfo_unexecuted_blocks=1 00:11:44.881 00:11:44.881 ' 00:11:44.881 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:44.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.881 --rc genhtml_branch_coverage=1 00:11:44.881 --rc genhtml_function_coverage=1 00:11:44.881 --rc genhtml_legend=1 00:11:44.881 --rc geninfo_all_blocks=1 00:11:44.882 --rc geninfo_unexecuted_blocks=1 00:11:44.882 00:11:44.882 ' 00:11:44.882 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:44.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.882 --rc genhtml_branch_coverage=1 00:11:44.882 --rc genhtml_function_coverage=1 00:11:44.882 --rc genhtml_legend=1 00:11:44.882 --rc geninfo_all_blocks=1 00:11:44.882 --rc geninfo_unexecuted_blocks=1 00:11:44.882 00:11:44.882 ' 00:11:44.882 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:44.882 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:44.882 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:44.882 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:44.882 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:44.882 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:44.882 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:44.882 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:44.882 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:44.882 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:44.882 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:44.882 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:44.882 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:44.882 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:44.882 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:44.882 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:44.882 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:44.882 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:44.882 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:44.882 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:11:44.882 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:44.882 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:44.882 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:44.882 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.882 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.882 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.882 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:44.882 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.882 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:11:44.882 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:44.882 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:44.882 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:44.882 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:44.882 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:44.882 11:23:45 
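The ever-growing PATH strings above come from /etc/opt/spdk-pkgdep/paths/export.sh being sourced once per test script: each pass prepends the same Go/protoc/golangci directories again, so duplicates accumulate over the run (harmless, only noisy). Purely as an illustration, and not something the harness does itself, a duplicate-free PATH can be produced with a one-line awk filter:

# Keep the first occurrence of each PATH entry, preserving order.
PATH=$(printf '%s' "$PATH" | awk -v RS=: '!seen[$0]++ { printf "%s%s", sep, $0; sep=":" }')
export PATH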
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:44.882 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:44.882 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:44.882 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:44.882 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:44.882 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:44.882 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:44.882 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:44.882 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:44.882 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:44.882 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:44.882 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:44.882 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:44.882 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:44.882 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:44.882 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:44.882 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:44.882 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:44.882 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:11:44.882 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:47.414 
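The "[: : integer expression expected" complaint above is bash reacting to nvmf/common.sh line 33 evaluating '[' '' -eq 1 ']': an empty variable is fed to a numeric test, the warning is printed, the test is simply false, and the script continues, so it is cosmetic rather than a failure. A guard of the following shape avoids the message; the flag name here is hypothetical, since the log does not show which variable common.sh actually tests:

# "${SOME_FLAG:-0}" substitutes 0 when the variable is unset or empty,
# so the numeric comparison always receives a valid operand.
if [ "${SOME_FLAG:-0}" -eq 1 ]; then
    echo "flag enabled"
fi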
11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:47.414 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:47.414 
11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:47.414 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:47.414 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:47.414 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:47.414 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:47.415 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:47.415 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:47.415 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:47.415 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:47.415 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.285 ms 00:11:47.415 00:11:47.415 --- 10.0.0.2 ping statistics --- 00:11:47.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:47.415 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:11:47.415 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:47.415 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:47.415 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:11:47.415 00:11:47.415 --- 10.0.0.1 ping statistics --- 00:11:47.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:47.415 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:11:47.415 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:47.415 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:11:47.415 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:47.415 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:47.415 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:47.415 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:47.415 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:47.415 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:47.415 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:47.415 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:47.415 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:47.415 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:47.415 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:47.415 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=3756112 00:11:47.415 11:23:47 
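nvmf_tcp_init above turns the two cvl ports into a miniature two-host setup: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed 10.0.0.2/24 for the target, cvl_0_1 stays in the root namespace as the 10.0.0.1/24 initiator side, TCP port 4420 is opened in iptables, and one ping in each direction confirms reachability before the target is started. Condensed from the commands visible in the log (same interface names and addresses):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns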
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 3756112 00:11:47.415 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:47.415 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # '[' -z 3756112 ']' 00:11:47.415 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:47.415 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:47.415 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:47.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:47.415 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:47.415 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:47.415 [2024-11-02 11:23:47.485742] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:11:47.415 [2024-11-02 11:23:47.485831] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:47.415 [2024-11-02 11:23:47.566531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:47.415 [2024-11-02 11:23:47.617550] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:47.415 [2024-11-02 11:23:47.617618] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:47.415 [2024-11-02 11:23:47.617644] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:47.415 [2024-11-02 11:23:47.617665] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:47.415 [2024-11-02 11:23:47.617682] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
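The target itself is launched inside that namespace with ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF, and the flags line up with the notices that follow: -m 0xF pins reactors to cores 0-3 (hence the four "Reactor started" lines), -e 0xFFFF enables every tracepoint group (the "Tracepoint Group Mask 0xFFFF" notice), and -i 0 selects shared-memory instance 0, which is why /dev/shm/nvmf_trace.0 can be copied for offline analysis. A stripped-down equivalent, with the long Jenkins workspace path shortened for readability:

# Run the SPDK NVMe-oF target inside the test namespace (path shortened here).
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
#   -m 0xF     reactor core mask: cores 0-3
#   -e 0xFFFF  tracepoint group mask (all groups)
#   -i 0       shared-memory instance id -> trace file /dev/shm/nvmf_trace.0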
00:11:47.415 [2024-11-02 11:23:47.619403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:47.415 [2024-11-02 11:23:47.619462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:47.415 [2024-11-02 11:23:47.619519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:47.415 [2024-11-02 11:23:47.619522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:47.415 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:47.415 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@866 -- # return 0 00:11:47.415 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:47.415 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:47.415 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:47.415 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:47.415 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:47.415 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.415 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:47.415 [2024-11-02 11:23:47.773404] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:47.415 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.415 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:47.415 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.415 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:47.673 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.673 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:47.673 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:47.673 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.673 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:47.673 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.673 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:47.673 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.673 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:47.673 11:23:47 
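With the app up, the subsystem is provisioned through the rpc_cmd calls shown above; rpc_cmd effectively forwards its arguments to SPDK's scripts/rpc.py, so the direct equivalents are roughly the following (the TCP listener on 10.0.0.2:4420 is added on the next log line):

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0    # TCP transport, 8 KiB IO unit, no in-capsule data
scripts/rpc.py bdev_malloc_create 64 512                       # 64 MB RAM bdev, 512 B blocks -> "Malloc0"
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420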
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.673 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:47.673 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.673 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:47.673 [2024-11-02 11:23:47.836461] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:47.673 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.673 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:11:47.673 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:11:47.673 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:11:47.673 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:50.200 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:52.098 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:54.625 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:57.151 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:59.677 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:01.573 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:04.098 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:06.671 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:08.592 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:11.117 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:13.014 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:15.539 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:18.066 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:19.961 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:22.486 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:25.010 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:27.536 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:29.434 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:31.961 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:33.858 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:36.384 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:38.910 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:40.806 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:43.334 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:45.860 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:47.758 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:50.286 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:52.870 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:55.397 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:57.927 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:59.825 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:02.354 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:04.255 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:06.786 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:09.313 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:11.211 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:13.740 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:16.270 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:18.171 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:20.699 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:23.226 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:25.124 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:27.651 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:30.177 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:32.074 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:34.600 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:37.236 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:39.133 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:41.672 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:44.198 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:46.096 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:48.622 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:51.148 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:53.045 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:55.571 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:58.096 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:59.994 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:02.520 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:05.044 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:06.942 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:09.467 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:11.993 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:13.888 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:16.423 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:18.948 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:20.895 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:23.419 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:25.945 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:27.841 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:30.368 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:32.893 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:34.790 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:37.314 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:39.842 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:41.737 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:44.262 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:46.788 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:48.685 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:51.212 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:14:53.737 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:56.262 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:58.158 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:00.680 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:02.577 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:05.177 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:07.702 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:09.597 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:12.122 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:14.648 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:16.544 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:19.068 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:21.591 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:24.116 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:26.013 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:28.539 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:31.064 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:32.961 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:35.485 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:38.011 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:39.910 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:39.910 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:15:39.910 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:15:39.910 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:39.910 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:15:39.910 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:39.910 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:15:39.910 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:39.910 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:39.910 rmmod nvme_tcp 00:15:39.910 rmmod nvme_fabrics 00:15:39.910 rmmod nvme_keyring 00:15:40.168 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:40.168 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:15:40.168 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:15:40.168 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 3756112 ']' 00:15:40.168 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 3756112 00:15:40.169 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # '[' -z 3756112 ']' 00:15:40.169 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # kill -0 3756112 00:15:40.169 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # uname 
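The long run of "disconnected 1 controller(s)" lines above is the test loop itself: connect_disconnect.sh switches xtrace off (set +x) and then performs 100 iterations (num_iterations=100), each one connecting to cnode1 with 8 I/O queues (NVME_CONNECT='nvme connect -i 8') and disconnecting again, and only nvme disconnect prints output. The loop body is hidden by set +x, so the following is a sketch of its likely shape rather than a quote of the script:

hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
for i in $(seq 1 100); do
    nvme connect -i 8 -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$hostnqn" --hostid=5b23e107-7094-e311-b1cb-001e67a97d55
    # the real script presumably verifies the connection (e.g. waits for the namespace) before tearing down
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1    # emits "NQN:... disconnected 1 controller(s)"
done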
00:15:40.169 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:40.169 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3756112 00:15:40.169 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:40.169 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:40.169 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3756112' 00:15:40.169 killing process with pid 3756112 00:15:40.169 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@971 -- # kill 3756112 00:15:40.169 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@976 -- # wait 3756112 00:15:40.428 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:40.428 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:40.428 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:40.428 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:15:40.428 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:15:40.428 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:40.428 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:15:40.428 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:40.428 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:40.428 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:40.428 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:40.428 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:42.329 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:42.329 00:15:42.329 real 3m57.591s 00:15:42.329 user 15m6.273s 00:15:42.329 sys 0m34.370s 00:15:42.329 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:42.329 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:42.329 ************************************ 00:15:42.329 END TEST nvmf_connect_disconnect 00:15:42.329 ************************************ 00:15:42.329 11:27:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:15:42.329 11:27:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:42.329 11:27:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:42.329 11:27:42 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:42.329 ************************************ 00:15:42.329 START TEST nvmf_multitarget 00:15:42.329 ************************************ 00:15:42.330 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:15:42.588 * Looking for test storage... 00:15:42.588 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:42.588 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:42.588 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lcov --version 00:15:42.588 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:42.588 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:42.588 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:42.588 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:42.588 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:42.588 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:15:42.588 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:15:42.588 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:15:42.588 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:15:42.588 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:15:42.588 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:15:42.588 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:15:42.588 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:42.588 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:15:42.588 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:15:42.588 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:42.588 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:42.588 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:15:42.588 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:15:42.588 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:42.588 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:15:42.588 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:15:42.588 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:15:42.588 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:15:42.588 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:42.588 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:15:42.588 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:15:42.588 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:42.588 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:42.588 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:15:42.588 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:42.588 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:42.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:42.588 --rc genhtml_branch_coverage=1 00:15:42.588 --rc genhtml_function_coverage=1 00:15:42.588 --rc genhtml_legend=1 00:15:42.588 --rc geninfo_all_blocks=1 00:15:42.588 --rc geninfo_unexecuted_blocks=1 00:15:42.588 00:15:42.588 ' 00:15:42.588 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:42.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:42.588 --rc genhtml_branch_coverage=1 00:15:42.588 --rc genhtml_function_coverage=1 00:15:42.588 --rc genhtml_legend=1 00:15:42.588 --rc geninfo_all_blocks=1 00:15:42.588 --rc geninfo_unexecuted_blocks=1 00:15:42.588 00:15:42.588 ' 00:15:42.588 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:42.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:42.588 --rc genhtml_branch_coverage=1 00:15:42.588 --rc genhtml_function_coverage=1 00:15:42.588 --rc genhtml_legend=1 00:15:42.588 --rc geninfo_all_blocks=1 00:15:42.588 --rc geninfo_unexecuted_blocks=1 00:15:42.588 00:15:42.588 ' 00:15:42.588 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:42.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:42.588 --rc genhtml_branch_coverage=1 00:15:42.588 --rc genhtml_function_coverage=1 00:15:42.588 --rc genhtml_legend=1 00:15:42.588 --rc geninfo_all_blocks=1 00:15:42.588 --rc geninfo_unexecuted_blocks=1 00:15:42.588 00:15:42.588 ' 00:15:42.588 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:42.588 11:27:42 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:15:42.588 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:42.588 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:42.588 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:42.588 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:42.588 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:42.588 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:42.588 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:42.588 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:42.588 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:42.588 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:42.588 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:42.588 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:42.588 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:42.588 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:42.588 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:42.588 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:42.588 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:42.588 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:15:42.588 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:42.588 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:42.588 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:42.588 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.588 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.588 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.588 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:15:42.589 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.589 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:15:42.589 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:42.589 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:42.589 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:42.589 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:42.589 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:42.589 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:42.589 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:42.589 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:42.589 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:42.589 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:42.589 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:15:42.589 11:27:42 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:15:42.589 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:42.589 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:42.589 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:42.589 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:42.589 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:42.589 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:42.589 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:42.589 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:42.589 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:42.589 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:42.589 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:15:42.589 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:44.489 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:44.489 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:15:44.489 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:44.489 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:44.489 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:44.489 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:44.489 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:44.489 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:15:44.489 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:44.489 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:15:44.489 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:15:44.489 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:15:44.489 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:15:44.489 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:15:44.489 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:15:44.489 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:44.489 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:44.489 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
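The array setup being traced here (e810/x722/mlx filled from pci_bus_cache) feeds gather_supported_nvmf_pci_devs, which then resolves each matching PCI function to its kernel net device through sysfs and reports the "Found net devices under 0000:0a:00.x: cvl_0_x" lines that follow. That sysfs lookup reduces to roughly the sketch below; it is trimmed to the two E810 device IDs seen in this log and is an illustration, not the full common.sh logic:

    # Sketch: map supported NVMe-oF-capable NICs to their net device names via sysfs.
    # Only the two Intel E810 device IDs from this log are listed here.
    intel=0x8086
    declare -a pci_devs net_devs

    for dev in /sys/bus/pci/devices/*; do
            vendor=$(cat "$dev/vendor")
            device=$(cat "$dev/device")
            [[ $vendor == "$intel" && ( $device == 0x1592 || $device == 0x159b ) ]] || continue
            pci_devs+=("${dev##*/}")
    done

    for pci in "${pci_devs[@]}"; do
            # A network PCI function exposes its interface name(s) under .../net/.
            for ifpath in "/sys/bus/pci/devices/$pci/net/"*; do
                    [[ -e $ifpath ]] || continue
                    echo "Found net devices under $pci: ${ifpath##*/}"
                    net_devs+=("${ifpath##*/}")
            done
    done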
00:15:44.489 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:44.489 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:44.489 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:44.489 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:44.489 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:44.489 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:44.489 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:44.489 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:44.489 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:44.489 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:44.489 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:44.489 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:44.489 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:44.489 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:44.489 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:44.489 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:44.489 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:44.489 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:44.489 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:44.489 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:44.489 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:44.489 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:44.489 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:44.489 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:44.489 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:44.489 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:44.489 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:44.489 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:44.489 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:44.489 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:15:44.489 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:44.489 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:44.489 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:44.489 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:44.489 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:44.489 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:44.489 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:44.489 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:44.489 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:44.489 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:44.489 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:44.489 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:44.489 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:44.489 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:44.489 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:44.489 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:44.490 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:44.490 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:44.490 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:44.490 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:44.490 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:44.490 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:44.490 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:44.490 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:44.490 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:44.490 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:15:44.490 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:44.490 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:44.490 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:44.490 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:44.490 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:44.490 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:44.490 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:44.490 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:44.490 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:44.490 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:44.490 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:44.490 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:44.490 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:44.490 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:44.490 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:44.490 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:44.490 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:44.490 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:44.748 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:44.748 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:44.748 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:44.748 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:44.748 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:44.748 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:44.748 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:44.748 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:44.748 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:44.748 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.342 ms 00:15:44.748 00:15:44.748 --- 10.0.0.2 ping statistics --- 00:15:44.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:44.748 rtt min/avg/max/mdev = 0.342/0.342/0.342/0.000 ms 00:15:44.748 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:44.748 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:44.748 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:15:44.748 00:15:44.748 --- 10.0.0.1 ping statistics --- 00:15:44.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:44.748 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:15:44.748 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:44.748 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:15:44.748 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:44.748 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:44.748 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:44.748 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:44.748 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:44.748 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:44.748 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:44.748 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:15:44.748 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:44.748 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:44.748 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:44.748 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=3788008 00:15:44.748 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 3788008 00:15:44.748 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:44.748 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@833 -- # '[' -z 3788008 ']' 00:15:44.748 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:44.749 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:44.749 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:44.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:44.749 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:44.749 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:45.007 [2024-11-02 11:27:45.178050] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 
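The stretch of log above is nvmftestinit's network plumbing for the multitarget test: the first E810 port (cvl_0_0) is moved into a fresh network namespace to play the target, the second port (cvl_0_1) stays in the root namespace as the initiator, 10.0.0.2 and 10.0.0.1 are assigned, an iptables rule opens TCP/4420, reachability is ping-checked in both directions, and nvmf_tgt is then started inside the namespace with core mask 0xF. Condensed into a standalone sketch; the build path is shortened and the wait-for-RPC step is omitted, so treat it as an outline of the steps rather than the harness itself:

    # Sketch: isolate the target NIC in its own netns and start nvmf_tgt inside it.
    NS=cvl_0_0_ns_spdk
    TGT_IF=cvl_0_0        # target-side port, moved into the namespace
    INI_IF=cvl_0_1        # initiator-side port, stays in the root namespace

    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"

    ip addr add 10.0.0.1/24 dev "$INI_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up

    # Accept NVMe/TCP traffic arriving on the initiator-side port (mirrors the log's rule).
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

    # Sanity-check reachability in both directions before starting the target.
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1

    # Launch the SPDK NVMe-oF target on cores 0-3 inside the namespace.
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &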
00:15:45.007 [2024-11-02 11:27:45.178146] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:45.007 [2024-11-02 11:27:45.270244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:45.007 [2024-11-02 11:27:45.325540] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:45.007 [2024-11-02 11:27:45.325614] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:45.007 [2024-11-02 11:27:45.325631] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:45.007 [2024-11-02 11:27:45.325645] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:45.007 [2024-11-02 11:27:45.325657] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:45.007 [2024-11-02 11:27:45.327411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:45.007 [2024-11-02 11:27:45.327441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:45.007 [2024-11-02 11:27:45.327498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:45.007 [2024-11-02 11:27:45.327502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:45.265 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:45.265 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@866 -- # return 0 00:15:45.265 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:45.265 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:45.265 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:45.265 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:45.265 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:45.265 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:45.265 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:15:45.265 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:15:45.265 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:15:45.565 "nvmf_tgt_1" 00:15:45.565 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:15:45.565 "nvmf_tgt_2" 00:15:45.565 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
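multitarget.sh drives the whole test through test/nvmf/target/multitarget_rpc.py: it reads the initial target count, creates nvmf_tgt_1 and nvmf_tgt_2 with -s 32, re-counts (the "jq length" call and the comparison against 3 follow in the next entries), then deletes both targets and checks that only the default target remains. Stripped of the harness plumbing, the RPC sequence is roughly the sketch below; the relative script path and the set -e failure handling are simplifications of what the test actually does:

    # Sketch of the RPC sequence exercised by multitarget.sh (paths shortened).
    set -e
    rpc_py=./test/nvmf/target/multitarget_rpc.py

    count() { "$rpc_py" nvmf_get_targets | jq length; }

    [ "$(count)" -eq 1 ]                               # only the default target exists

    "$rpc_py" nvmf_create_target -n nvmf_tgt_1 -s 32   # add two extra targets
    "$rpc_py" nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$(count)" -eq 3 ]

    "$rpc_py" nvmf_delete_target -n nvmf_tgt_1         # remove them again
    "$rpc_py" nvmf_delete_target -n nvmf_tgt_2
    [ "$(count)" -eq 1 ]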
00:15:45.565 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:15:45.565 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:15:45.565 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:15:45.851 true 00:15:45.851 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:15:45.851 true 00:15:45.851 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:45.851 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:15:46.109 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:15:46.109 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:15:46.109 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:15:46.109 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:46.109 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:15:46.109 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:46.109 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:15:46.109 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:46.109 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:46.109 rmmod nvme_tcp 00:15:46.109 rmmod nvme_fabrics 00:15:46.109 rmmod nvme_keyring 00:15:46.109 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:46.109 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:15:46.109 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:15:46.109 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 3788008 ']' 00:15:46.109 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 3788008 00:15:46.109 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@952 -- # '[' -z 3788008 ']' 00:15:46.109 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # kill -0 3788008 00:15:46.109 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@957 -- # uname 00:15:46.109 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:46.109 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3788008 00:15:46.109 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:46.110 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:46.110 11:27:46 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3788008' 00:15:46.110 killing process with pid 3788008 00:15:46.110 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@971 -- # kill 3788008 00:15:46.110 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@976 -- # wait 3788008 00:15:46.369 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:46.369 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:46.369 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:46.369 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:15:46.369 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:15:46.369 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:46.369 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:15:46.369 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:46.369 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:46.369 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:46.369 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:46.369 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:48.271 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:48.271 00:15:48.271 real 0m5.941s 00:15:48.271 user 0m6.758s 00:15:48.271 sys 0m1.976s 00:15:48.271 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:48.271 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:48.271 ************************************ 00:15:48.271 END TEST nvmf_multitarget 00:15:48.271 ************************************ 00:15:48.530 11:27:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:15:48.530 11:27:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:48.530 11:27:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:48.530 11:27:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:48.530 ************************************ 00:15:48.530 START TEST nvmf_rpc 00:15:48.530 ************************************ 00:15:48.530 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:15:48.530 * Looking for test storage... 
00:15:48.530 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:48.530 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:48.530 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:15:48.530 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:48.530 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:48.530 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:48.530 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:48.530 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:48.530 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:15:48.530 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:15:48.530 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:15:48.530 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:15:48.530 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:15:48.530 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:15:48.530 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:15:48.530 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:48.530 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:15:48.530 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:15:48.530 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:48.530 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:48.530 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:15:48.530 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:15:48.530 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:48.530 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:15:48.530 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:15:48.530 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:15:48.530 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:15:48.530 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:48.530 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:15:48.531 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:15:48.531 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:48.531 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:48.531 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:15:48.531 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:48.531 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:48.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.531 --rc genhtml_branch_coverage=1 00:15:48.531 --rc genhtml_function_coverage=1 00:15:48.531 --rc genhtml_legend=1 00:15:48.531 --rc geninfo_all_blocks=1 00:15:48.531 --rc geninfo_unexecuted_blocks=1 00:15:48.531 00:15:48.531 ' 00:15:48.531 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:48.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.531 --rc genhtml_branch_coverage=1 00:15:48.531 --rc genhtml_function_coverage=1 00:15:48.531 --rc genhtml_legend=1 00:15:48.531 --rc geninfo_all_blocks=1 00:15:48.531 --rc geninfo_unexecuted_blocks=1 00:15:48.531 00:15:48.531 ' 00:15:48.531 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:48.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.531 --rc genhtml_branch_coverage=1 00:15:48.531 --rc genhtml_function_coverage=1 00:15:48.531 --rc genhtml_legend=1 00:15:48.531 --rc geninfo_all_blocks=1 00:15:48.531 --rc geninfo_unexecuted_blocks=1 00:15:48.531 00:15:48.531 ' 00:15:48.531 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:48.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.531 --rc genhtml_branch_coverage=1 00:15:48.531 --rc genhtml_function_coverage=1 00:15:48.531 --rc genhtml_legend=1 00:15:48.531 --rc geninfo_all_blocks=1 00:15:48.531 --rc geninfo_unexecuted_blocks=1 00:15:48.531 00:15:48.531 ' 00:15:48.531 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:48.531 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:15:48.531 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
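Both test wrappers trace the same scripts/common.sh helper before choosing lcov options: "lt 1.15 2" splits the installed lcov version and the threshold on '.', '-' and ':' and compares the fields numerically, position by position, which is what the ver1/ver2 read loops above are doing. A condensed rendition of that comparison is sketched below; the real helper also supports '>', '=' and corner cases that are skipped here:

    # Sketch: succeed when dotted version $1 is strictly less than $2.
    version_lt() {
            local IFS=.-:
            local -a ver1 ver2
            read -ra ver1 <<< "$1"
            read -ra ver2 <<< "$2"
            local i a b
            for ((i = 0; i < ${#ver1[@]} || i < ${#ver2[@]}; i++)); do
                    a=${ver1[i]:-0} b=${ver2[i]:-0}
                    ((a > b)) && return 1
                    ((a < b)) && return 0
            done
            return 1    # equal versions are not "less than"
    }

    # 1.15 < 2, so the harness keeps the older lcov_* --rc option names.
    version_lt 1.15 2 && echo "lcov < 2: use --rc lcov_branch_coverage=1 etc."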
00:15:48.531 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:48.531 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:48.531 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:48.531 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:48.531 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:48.531 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:48.531 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:48.531 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:48.531 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:48.531 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:48.531 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:48.531 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:48.531 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:48.531 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:48.531 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:48.531 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:48.531 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:15:48.531 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:48.531 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:48.531 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:48.531 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.531 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.531 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.531 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:15:48.531 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.531 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:15:48.531 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:48.531 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:48.531 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:48.531 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:48.531 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:48.531 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:48.531 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:48.531 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:48.531 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:48.531 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:48.531 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:15:48.531 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:15:48.531 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:48.531 11:27:48 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:48.531 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:48.531 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:48.531 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:48.531 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:48.531 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:48.531 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:48.531 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:48.531 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:48.531 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:15:48.531 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:50.433 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:50.433 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:15:50.433 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:50.433 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:50.433 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:50.433 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:50.433 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:50.433 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:15:50.433 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:50.433 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:15:50.433 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:15:50.433 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:15:50.433 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:15:50.433 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:15:50.433 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:15:50.433 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:50.433 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:50.433 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:50.433 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:50.433 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:50.433 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:50.433 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:50.433 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:50.433 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:50.433 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:50.433 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:50.433 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:50.433 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:50.433 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:50.433 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:50.433 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:50.433 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:50.433 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:50.434 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:50.434 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:50.434 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:50.434 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:50.434 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:50.434 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:50.434 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:50.434 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:50.434 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:50.434 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:50.434 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:50.434 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:50.434 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:50.434 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:50.434 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:50.434 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:50.434 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:50.434 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:50.434 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:50.434 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:50.434 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:50.434 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:50.434 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:50.434 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:50.434 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:50.434 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:50.434 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:50.434 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:50.434 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:50.434 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:50.434 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:50.434 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:50.434 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:50.434 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:50.434 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:50.434 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:50.434 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:50.434 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:50.434 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:50.434 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:50.434 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:15:50.434 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:50.434 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:50.434 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:50.434 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:50.434 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:50.434 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:50.434 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:50.434 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:50.434 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:50.434 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:50.434 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:50.434 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:50.434 11:27:50 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:50.434 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:50.434 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:50.434 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:50.434 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:50.434 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:50.693 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:50.693 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:50.693 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:50.693 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:50.693 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:50.693 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:50.693 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:50.693 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:50.693 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:50.693 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.290 ms 00:15:50.693 00:15:50.693 --- 10.0.0.2 ping statistics --- 00:15:50.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:50.693 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:15:50.693 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:50.693 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:50.693 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:15:50.693 00:15:50.693 --- 10.0.0.1 ping statistics --- 00:15:50.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:50.693 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:15:50.693 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:50.693 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:15:50.693 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:50.693 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:50.693 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:50.693 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:50.693 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:50.693 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:50.693 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:50.693 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:15:50.693 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:50.693 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:50.693 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:50.693 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=3790123 00:15:50.693 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 3790123 00:15:50.693 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@833 -- # '[' -z 3790123 ']' 00:15:50.693 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:50.693 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:50.693 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:50.693 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:50.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:50.693 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:50.693 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:50.693 [2024-11-02 11:27:50.998138] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 
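For reference, the nvmf_tcp_init sequence traced above amounts to the following standalone commands (a minimal sketch, reusing the interface names cvl_0_0/cvl_0_1, the namespace name cvl_0_0_ns_spdk, and the 10.0.0.x addresses shown in the trace; run as root):

    #!/usr/bin/env bash
    # Move the target-side port into its own network namespace so the SPDK
    # target and the initiator talk over a real TCP path on a single host.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # Initiator side stays in the default namespace.
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip link set cvl_0_1 up

    # Target side lives inside the namespace.
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Open the NVMe/TCP port on the initiator-side interface, then verify
    # reachability in both directions, as the trace does with ping -c 1.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The nvmf_tgt application itself is then launched under ip netns exec cvl_0_0_ns_spdk, as the log shows below.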
00:15:50.693 [2024-11-02 11:27:50.998241] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:50.693 [2024-11-02 11:27:51.080372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:50.951 [2024-11-02 11:27:51.132286] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:50.951 [2024-11-02 11:27:51.132352] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:50.951 [2024-11-02 11:27:51.132380] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:50.951 [2024-11-02 11:27:51.132395] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:50.951 [2024-11-02 11:27:51.132408] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:50.951 [2024-11-02 11:27:51.134101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:50.951 [2024-11-02 11:27:51.134163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:50.951 [2024-11-02 11:27:51.134216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:50.951 [2024-11-02 11:27:51.134219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:50.951 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:50.952 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@866 -- # return 0 00:15:50.952 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:50.952 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:50.952 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:50.952 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:50.952 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:15:50.952 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.952 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:50.952 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.952 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:15:50.952 "tick_rate": 2700000000, 00:15:50.952 "poll_groups": [ 00:15:50.952 { 00:15:50.952 "name": "nvmf_tgt_poll_group_000", 00:15:50.952 "admin_qpairs": 0, 00:15:50.952 "io_qpairs": 0, 00:15:50.952 "current_admin_qpairs": 0, 00:15:50.952 "current_io_qpairs": 0, 00:15:50.952 "pending_bdev_io": 0, 00:15:50.952 "completed_nvme_io": 0, 00:15:50.952 "transports": [] 00:15:50.952 }, 00:15:50.952 { 00:15:50.952 "name": "nvmf_tgt_poll_group_001", 00:15:50.952 "admin_qpairs": 0, 00:15:50.952 "io_qpairs": 0, 00:15:50.952 "current_admin_qpairs": 0, 00:15:50.952 "current_io_qpairs": 0, 00:15:50.952 "pending_bdev_io": 0, 00:15:50.952 "completed_nvme_io": 0, 00:15:50.952 "transports": [] 00:15:50.952 }, 00:15:50.952 { 00:15:50.952 "name": "nvmf_tgt_poll_group_002", 00:15:50.952 "admin_qpairs": 0, 00:15:50.952 "io_qpairs": 0, 00:15:50.952 
"current_admin_qpairs": 0, 00:15:50.952 "current_io_qpairs": 0, 00:15:50.952 "pending_bdev_io": 0, 00:15:50.952 "completed_nvme_io": 0, 00:15:50.952 "transports": [] 00:15:50.952 }, 00:15:50.952 { 00:15:50.952 "name": "nvmf_tgt_poll_group_003", 00:15:50.952 "admin_qpairs": 0, 00:15:50.952 "io_qpairs": 0, 00:15:50.952 "current_admin_qpairs": 0, 00:15:50.952 "current_io_qpairs": 0, 00:15:50.952 "pending_bdev_io": 0, 00:15:50.952 "completed_nvme_io": 0, 00:15:50.952 "transports": [] 00:15:50.952 } 00:15:50.952 ] 00:15:50.952 }' 00:15:50.952 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:15:50.952 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:15:50.952 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:15:50.952 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:15:50.952 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:15:50.952 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:15:51.210 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:15:51.210 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:51.210 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.210 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:51.210 [2024-11-02 11:27:51.373334] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:51.210 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.210 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:15:51.210 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.210 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:51.210 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.210 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:15:51.210 "tick_rate": 2700000000, 00:15:51.210 "poll_groups": [ 00:15:51.210 { 00:15:51.210 "name": "nvmf_tgt_poll_group_000", 00:15:51.210 "admin_qpairs": 0, 00:15:51.210 "io_qpairs": 0, 00:15:51.210 "current_admin_qpairs": 0, 00:15:51.210 "current_io_qpairs": 0, 00:15:51.210 "pending_bdev_io": 0, 00:15:51.210 "completed_nvme_io": 0, 00:15:51.210 "transports": [ 00:15:51.210 { 00:15:51.210 "trtype": "TCP" 00:15:51.210 } 00:15:51.210 ] 00:15:51.210 }, 00:15:51.210 { 00:15:51.210 "name": "nvmf_tgt_poll_group_001", 00:15:51.210 "admin_qpairs": 0, 00:15:51.210 "io_qpairs": 0, 00:15:51.210 "current_admin_qpairs": 0, 00:15:51.210 "current_io_qpairs": 0, 00:15:51.210 "pending_bdev_io": 0, 00:15:51.210 "completed_nvme_io": 0, 00:15:51.210 "transports": [ 00:15:51.210 { 00:15:51.210 "trtype": "TCP" 00:15:51.210 } 00:15:51.210 ] 00:15:51.210 }, 00:15:51.210 { 00:15:51.210 "name": "nvmf_tgt_poll_group_002", 00:15:51.210 "admin_qpairs": 0, 00:15:51.210 "io_qpairs": 0, 00:15:51.210 "current_admin_qpairs": 0, 00:15:51.210 "current_io_qpairs": 0, 00:15:51.210 "pending_bdev_io": 0, 00:15:51.210 "completed_nvme_io": 0, 00:15:51.210 "transports": [ 00:15:51.210 { 00:15:51.210 "trtype": "TCP" 
00:15:51.210 } 00:15:51.210 ] 00:15:51.210 }, 00:15:51.210 { 00:15:51.210 "name": "nvmf_tgt_poll_group_003", 00:15:51.210 "admin_qpairs": 0, 00:15:51.210 "io_qpairs": 0, 00:15:51.210 "current_admin_qpairs": 0, 00:15:51.210 "current_io_qpairs": 0, 00:15:51.210 "pending_bdev_io": 0, 00:15:51.210 "completed_nvme_io": 0, 00:15:51.210 "transports": [ 00:15:51.210 { 00:15:51.210 "trtype": "TCP" 00:15:51.210 } 00:15:51.210 ] 00:15:51.210 } 00:15:51.210 ] 00:15:51.210 }' 00:15:51.210 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:15:51.210 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:15:51.210 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:15:51.211 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:51.211 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:15:51.211 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:15:51.211 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:15:51.211 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:15:51.211 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:51.211 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:15:51.211 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:15:51.211 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:15:51.211 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:15:51.211 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:51.211 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.211 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:51.211 Malloc1 00:15:51.211 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.211 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:51.211 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.211 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:51.211 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.211 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:51.211 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.211 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:51.211 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.211 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:15:51.211 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.211 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:51.211 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.211 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:51.211 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.211 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:51.211 [2024-11-02 11:27:51.543320] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:51.211 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.211 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:15:51.211 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:15:51.211 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:15:51.211 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:15:51.211 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:51.211 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:15:51.211 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:51.211 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:15:51.211 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:51.211 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:15:51.211 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:15:51.211 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:15:51.211 [2024-11-02 11:27:51.565950] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:15:51.211 Failed to write to /dev/nvme-fabrics: Input/output error 00:15:51.211 could not add new controller: failed to write to nvme-fabrics device 00:15:51.211 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:15:51.211 11:27:51 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:51.211 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:51.211 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:51.211 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:51.211 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.211 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:51.211 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.211 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:52.144 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:15:52.144 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:15:52.144 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:15:52.144 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:15:52.144 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:15:54.041 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:15:54.041 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:15:54.041 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:15:54.041 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:15:54.041 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:15:54.041 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:15:54.041 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:54.041 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:54.041 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:54.041 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:15:54.041 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:15:54.041 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:54.041 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:15:54.041 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:54.041 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:15:54.041 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:54.041 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.041 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:54.041 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.041 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:54.041 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:15:54.041 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:54.041 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:15:54.041 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:54.041 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:15:54.041 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:54.041 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:15:54.041 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:54.041 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:15:54.041 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:15:54.041 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:54.041 [2024-11-02 11:27:54.399786] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:15:54.041 Failed to write to /dev/nvme-fabrics: Input/output error 00:15:54.041 could not add new controller: failed to write to nvme-fabrics device 00:15:54.041 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:15:54.041 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:54.041 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:54.041 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:54.041 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:15:54.041 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.041 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:54.041 
11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.042 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:54.974 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:15:54.974 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:15:54.974 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:15:54.974 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:15:54.974 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:15:56.872 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:15:56.872 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:15:56.872 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:15:56.872 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:15:56.872 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:15:56.872 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:15:56.872 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:56.872 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:56.872 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:56.872 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:15:56.872 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:15:56.872 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:56.872 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:15:56.872 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:56.872 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:15:56.872 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:56.873 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.873 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:56.873 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.873 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:15:56.873 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:56.873 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:56.873 
11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.873 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:56.873 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.873 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:56.873 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.873 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:56.873 [2024-11-02 11:27:57.240894] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:56.873 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.873 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:56.873 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.873 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:56.873 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.873 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:56.873 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.873 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:56.873 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.873 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:57.806 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:57.806 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:15:57.806 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:15:57.806 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:15:57.806 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:15:59.703 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:15:59.703 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:15:59.703 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:15:59.703 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:15:59.703 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:15:59.703 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:15:59.703 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:59.703 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:59.703 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:59.703 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:15:59.703 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:15:59.703 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:59.703 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:15:59.703 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:59.703 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:15:59.703 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:59.703 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.703 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:59.703 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.703 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:59.703 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.703 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:59.703 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.703 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:59.703 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:59.703 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.703 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:59.703 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.703 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:59.703 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.703 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:59.703 [2024-11-02 11:28:00.084116] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:59.703 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.703 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:59.703 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.703 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:59.703 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.703 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:59.703 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.703 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:59.703 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.703 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:00.633 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:00.633 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:16:00.633 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:16:00.633 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:16:00.633 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:16:02.531 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:16:02.531 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:16:02.531 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:16:02.531 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:16:02.531 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:16:02.531 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:16:02.531 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:02.531 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:02.531 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:02.531 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:16:02.531 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:16:02.531 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:02.531 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:16:02.531 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:02.531 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:16:02.531 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:02.531 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.531 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:02.531 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.531 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:02.531 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.531 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:02.531 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.531 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:02.531 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:02.531 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.531 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:02.531 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.531 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:02.531 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.531 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:02.531 [2024-11-02 11:28:02.861842] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:02.531 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.531 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:02.531 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.531 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:02.531 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.531 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:02.531 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.531 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:02.531 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.531 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:03.097 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:03.097 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:16:03.097 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:16:03.097 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:16:03.097 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:16:05.624 
11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:16:05.624 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:16:05.624 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:16:05.624 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:16:05.624 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:16:05.624 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:16:05.624 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:05.624 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:05.624 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:05.624 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:16:05.624 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:16:05.624 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:05.624 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:16:05.624 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:05.624 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:16:05.624 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:05.624 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.624 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:05.624 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.624 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:05.624 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.624 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:05.624 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.624 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:05.624 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:05.624 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.624 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:05.624 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.624 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:05.624 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:05.624 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:05.624 [2024-11-02 11:28:05.614580] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:05.624 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.624 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:05.624 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.624 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:05.624 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.624 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:05.624 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.625 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:05.625 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.625 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:06.190 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:06.190 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:16:06.190 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:16:06.190 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:16:06.190 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:16:08.088 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:16:08.088 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:16:08.088 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:16:08.088 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:16:08.088 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:16:08.088 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:16:08.088 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:08.088 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:08.088 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:08.088 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:16:08.088 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:16:08.088 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 
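Each pass of the seq 1 5 loop traced here repeats the same create / connect / verify / tear-down cycle. A condensed sketch of one iteration, using SPDK's scripts/rpc.py and nvme-cli directly instead of the rpc_cmd/waitforserial helpers (assumes the target started as in the trace and listens on the default RPC socket; the NQN, serial, bdev, namespace ID, host NQN, address and port are the ones in the log):

    #!/usr/bin/env bash
    NQN=nqn.2016-06.io.spdk:cnode1
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

    # Build the subsystem on the target side.
    scripts/rpc.py nvmf_create_subsystem "$NQN" -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_ns "$NQN" Malloc1 -n 5
    scripts/rpc.py nvmf_subsystem_allow_any_host "$NQN"

    # Connect from the initiator and wait for the namespace's serial to appear.
    nvme connect -t tcp -n "$NQN" -a 10.0.0.2 -s 4420 --hostnqn="$HOSTNQN"
    until lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do sleep 1; done

    # Tear down: disconnect, drop the namespace, delete the subsystem.
    nvme disconnect -n "$NQN"
    scripts/rpc.py nvmf_subsystem_remove_ns "$NQN" 5
    scripts/rpc.py nvmf_delete_subsystem "$NQN"

The log that follows is simply this cycle repeated, with the later iterations (target/rpc.sh@99 onward) dropping the explicit namespace ID.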
00:16:08.088 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:16:08.088 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:08.088 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:16:08.088 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:08.088 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.088 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:08.088 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.088 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:08.088 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.088 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:08.088 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.088 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:08.088 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:08.088 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.088 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:08.088 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.088 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:08.088 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.088 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:08.088 [2024-11-02 11:28:08.443866] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:08.088 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.088 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:08.088 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.088 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:08.088 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.088 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:08.088 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.088 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:08.088 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.088 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:09.021 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:09.021 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:16:09.021 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:16:09.021 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:16:09.021 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:16:10.920 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:16:10.920 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:16:10.920 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:16:10.920 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:16:10.920 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:16:10.920 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:16:10.920 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:10.920 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:10.920 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:10.920 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:16:10.920 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:16:10.920 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:10.920 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:16:10.920 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:10.920 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:16:10.920 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:10.920 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.920 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:10.920 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.920 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:10.921 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.921 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:10.921 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.921 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:16:10.921 
11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:10.921 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:10.921 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.921 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:11.188 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.188 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:11.188 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.188 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:11.188 [2024-11-02 11:28:11.328825] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:11.188 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.188 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:11.188 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.188 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:11.188 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.188 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:11.189 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.189 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:11.189 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.189 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:11.189 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.189 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:11.189 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.189 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:11.189 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.189 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:11.189 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.189 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:11.189 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:11.189 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.189 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:16:11.189 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.189 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:11.189 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.189 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:11.189 [2024-11-02 11:28:11.376870] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:11.189 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.189 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:11.189 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.189 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:11.189 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.189 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:11.189 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.189 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:11.190 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.190 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:11.190 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.190 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:11.190 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.190 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:11.190 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.190 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:11.190 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.190 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:11.190 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:11.190 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.190 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:11.190 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.190 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:11.190 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.190 
11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:11.190 [2024-11-02 11:28:11.425019] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:11.190 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.190 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:11.190 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.190 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:11.190 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.190 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:11.190 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.190 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:11.190 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.190 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:11.190 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.190 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:11.190 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.190 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:11.190 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.191 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:11.191 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.191 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:11.191 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:11.191 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.191 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:11.191 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.191 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:11.191 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.191 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:11.191 [2024-11-02 11:28:11.473170] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:11.191 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.191 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:11.191 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.191 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:11.191 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.191 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:11.191 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.191 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:11.191 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.191 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:11.191 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.191 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:11.191 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.191 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:11.191 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.191 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:11.191 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.192 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:11.192 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:11.192 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.192 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:11.192 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.192 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:11.192 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.192 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:11.192 [2024-11-02 11:28:11.521356] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:11.192 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.192 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:11.192 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.192 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:11.192 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.192 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:11.192 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.192 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:11.192 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.192 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:11.192 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.192 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:11.192 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.192 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:11.192 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.192 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:11.192 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.192 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:16:11.192 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.192 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:11.193 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.193 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:16:11.193 "tick_rate": 2700000000, 00:16:11.193 "poll_groups": [ 00:16:11.193 { 00:16:11.193 "name": "nvmf_tgt_poll_group_000", 00:16:11.193 "admin_qpairs": 2, 00:16:11.193 "io_qpairs": 84, 00:16:11.193 "current_admin_qpairs": 0, 00:16:11.193 "current_io_qpairs": 0, 00:16:11.193 "pending_bdev_io": 0, 00:16:11.193 "completed_nvme_io": 135, 00:16:11.193 "transports": [ 00:16:11.193 { 00:16:11.193 "trtype": "TCP" 00:16:11.193 } 00:16:11.193 ] 00:16:11.193 }, 00:16:11.193 { 00:16:11.193 "name": "nvmf_tgt_poll_group_001", 00:16:11.193 "admin_qpairs": 2, 00:16:11.193 "io_qpairs": 84, 00:16:11.193 "current_admin_qpairs": 0, 00:16:11.193 "current_io_qpairs": 0, 00:16:11.193 "pending_bdev_io": 0, 00:16:11.193 "completed_nvme_io": 175, 00:16:11.193 "transports": [ 00:16:11.193 { 00:16:11.193 "trtype": "TCP" 00:16:11.193 } 00:16:11.193 ] 00:16:11.193 }, 00:16:11.193 { 00:16:11.193 "name": "nvmf_tgt_poll_group_002", 00:16:11.193 "admin_qpairs": 1, 00:16:11.193 "io_qpairs": 84, 00:16:11.193 "current_admin_qpairs": 0, 00:16:11.193 "current_io_qpairs": 0, 00:16:11.193 "pending_bdev_io": 0, 00:16:11.193 "completed_nvme_io": 185, 00:16:11.193 "transports": [ 00:16:11.193 { 00:16:11.193 "trtype": "TCP" 00:16:11.193 } 00:16:11.193 ] 00:16:11.193 }, 00:16:11.193 { 00:16:11.193 "name": "nvmf_tgt_poll_group_003", 00:16:11.193 "admin_qpairs": 2, 00:16:11.193 "io_qpairs": 84, 00:16:11.193 "current_admin_qpairs": 0, 00:16:11.193 "current_io_qpairs": 0, 00:16:11.193 "pending_bdev_io": 0, 00:16:11.193 "completed_nvme_io": 191, 00:16:11.193 "transports": [ 00:16:11.193 { 00:16:11.193 "trtype": "TCP" 00:16:11.193 } 00:16:11.193 ] 00:16:11.193 } 00:16:11.193 ] 00:16:11.193 }' 00:16:11.193 11:28:11 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:16:11.193 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:11.193 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:11.193 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:11.453 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:16:11.453 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:16:11.453 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:11.453 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:11.453 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:11.453 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:16:11.453 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:16:11.453 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:16:11.453 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:16:11.453 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:11.453 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:16:11.453 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:11.453 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:16:11.453 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:11.453 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:11.453 rmmod nvme_tcp 00:16:11.453 rmmod nvme_fabrics 00:16:11.453 rmmod nvme_keyring 00:16:11.453 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:11.453 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:16:11.453 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:16:11.453 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 3790123 ']' 00:16:11.453 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 3790123 00:16:11.453 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@952 -- # '[' -z 3790123 ']' 00:16:11.453 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # kill -0 3790123 00:16:11.453 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # uname 00:16:11.453 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:11.453 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3790123 00:16:11.453 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:11.453 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:11.453 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 
3790123' 00:16:11.453 killing process with pid 3790123 00:16:11.453 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@971 -- # kill 3790123 00:16:11.453 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@976 -- # wait 3790123 00:16:11.711 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:11.711 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:11.711 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:11.711 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:16:11.711 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:16:11.711 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:11.711 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:16:11.711 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:11.711 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:11.711 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:11.711 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:11.711 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:14.243 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:14.243 00:16:14.243 real 0m25.351s 00:16:14.243 user 1m22.758s 00:16:14.243 sys 0m4.072s 00:16:14.243 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:14.243 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:14.243 ************************************ 00:16:14.243 END TEST nvmf_rpc 00:16:14.243 ************************************ 00:16:14.243 11:28:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:14.243 11:28:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:16:14.243 11:28:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:14.243 11:28:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:14.243 ************************************ 00:16:14.243 START TEST nvmf_invalid 00:16:14.243 ************************************ 00:16:14.243 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:14.243 * Looking for test storage... 
00:16:14.243 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:14.243 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:14.243 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lcov --version 00:16:14.243 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:14.243 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:14.243 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:14.243 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:14.243 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:14.243 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:16:14.243 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:16:14.243 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:16:14.243 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:16:14.243 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:16:14.243 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:16:14.243 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:16:14.243 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:14.243 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:16:14.243 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:16:14.243 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:14.243 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:14.243 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:16:14.243 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:16:14.243 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:14.243 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:16:14.243 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:16:14.243 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:16:14.244 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:16:14.244 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:14.244 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:16:14.244 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:16:14.244 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:14.244 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:14.244 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:16:14.244 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:14.244 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:14.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:14.244 --rc genhtml_branch_coverage=1 00:16:14.244 --rc genhtml_function_coverage=1 00:16:14.244 --rc genhtml_legend=1 00:16:14.244 --rc geninfo_all_blocks=1 00:16:14.244 --rc geninfo_unexecuted_blocks=1 00:16:14.244 00:16:14.244 ' 00:16:14.244 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:14.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:14.244 --rc genhtml_branch_coverage=1 00:16:14.244 --rc genhtml_function_coverage=1 00:16:14.244 --rc genhtml_legend=1 00:16:14.244 --rc geninfo_all_blocks=1 00:16:14.244 --rc geninfo_unexecuted_blocks=1 00:16:14.244 00:16:14.244 ' 00:16:14.244 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:14.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:14.244 --rc genhtml_branch_coverage=1 00:16:14.244 --rc genhtml_function_coverage=1 00:16:14.244 --rc genhtml_legend=1 00:16:14.244 --rc geninfo_all_blocks=1 00:16:14.244 --rc geninfo_unexecuted_blocks=1 00:16:14.244 00:16:14.244 ' 00:16:14.244 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:14.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:14.244 --rc genhtml_branch_coverage=1 00:16:14.244 --rc genhtml_function_coverage=1 00:16:14.244 --rc genhtml_legend=1 00:16:14.244 --rc geninfo_all_blocks=1 00:16:14.244 --rc geninfo_unexecuted_blocks=1 00:16:14.244 00:16:14.244 ' 00:16:14.244 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:14.244 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:16:14.244 11:28:14 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:14.244 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:14.244 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:14.244 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:14.244 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:14.244 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:14.244 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:14.244 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:14.244 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:14.244 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:14.244 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:14.244 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:14.244 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:14.244 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:14.244 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:14.244 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:14.244 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:14.244 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:16:14.244 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:14.244 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:14.244 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:14.244 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.244 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.244 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.244 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:16:14.244 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.244 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:16:14.244 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:14.244 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:14.244 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:14.244 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:14.244 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:14.244 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:14.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:14.244 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:14.244 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:14.244 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:14.244 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:14.244 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:14.244 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:16:14.244 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:16:14.244 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:16:14.244 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:16:14.244 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:14.244 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:14.244 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:14.244 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:14.244 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:14.244 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:14.244 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:14.244 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:14.244 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:14.244 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:14.244 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:16:14.244 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:16.146 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:16.146 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:16:16.146 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:16.146 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:16.146 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:16.146 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:16.146 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:16.146 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:16:16.146 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:16.146 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:16:16.146 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:16:16.146 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:16:16.146 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:16:16.146 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:16:16.146 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:16:16.146 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:16.146 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:16.146 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:16.146 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:16.146 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:16.146 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:16.146 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:16.146 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:16.146 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:16.146 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:16.146 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:16.146 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:16.146 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:16.146 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:16.146 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:16.146 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:16.146 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:16.146 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:16.146 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:16.146 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:16.146 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:16.146 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:16.146 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:16.146 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:16.146 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:16.146 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:16.146 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:16.146 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:16.146 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:16.146 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:16.146 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:16:16.146 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:16.146 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:16.146 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:16.146 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:16.146 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:16.146 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:16.146 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:16.146 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:16.146 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:16.146 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:16.146 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:16.146 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:16.146 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:16.146 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:16.146 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:16.146 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:16.146 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:16.146 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:16.146 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:16.146 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:16.146 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:16.146 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:16.146 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:16.146 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:16.147 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:16.147 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:16.147 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:16.147 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:16:16.147 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:16.147 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:16.147 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:16.147 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:16.147 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:16.147 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:16.147 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:16.147 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:16.147 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:16.147 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:16.147 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:16.147 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:16.147 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:16.147 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:16.147 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:16.147 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:16.147 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:16.147 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:16.147 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:16.147 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:16.147 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:16.147 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:16.147 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:16.147 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:16.147 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:16.147 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:16.147 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:16.147 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.150 ms 00:16:16.147 00:16:16.147 --- 10.0.0.2 ping statistics --- 00:16:16.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:16.147 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:16:16.147 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:16.147 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:16.147 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.069 ms 00:16:16.147 00:16:16.147 --- 10.0.0.1 ping statistics --- 00:16:16.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:16.147 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:16:16.147 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:16.147 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:16:16.147 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:16.147 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:16.147 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:16.147 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:16.147 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:16.147 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:16.147 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:16.147 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:16:16.147 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:16.147 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:16.147 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:16.147 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=3794624 00:16:16.147 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:16.147 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 3794624 00:16:16.147 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@833 -- # '[' -z 3794624 ']' 00:16:16.147 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:16.147 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:16.147 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:16.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:16.147 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:16.147 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:16.147 [2024-11-02 11:28:16.481710] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 
00:16:16.147 [2024-11-02 11:28:16.481799] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:16.405 [2024-11-02 11:28:16.566445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:16.405 [2024-11-02 11:28:16.622619] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:16.405 [2024-11-02 11:28:16.622690] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:16.405 [2024-11-02 11:28:16.622706] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:16.405 [2024-11-02 11:28:16.622719] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:16.405 [2024-11-02 11:28:16.622731] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:16.405 [2024-11-02 11:28:16.624474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:16.405 [2024-11-02 11:28:16.624530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:16.405 [2024-11-02 11:28:16.627280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:16.405 [2024-11-02 11:28:16.627293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:16.405 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:16.405 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@866 -- # return 0 00:16:16.405 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:16.405 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:16.405 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:16.405 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:16.405 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:16.405 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode21329 00:16:16.663 [2024-11-02 11:28:17.042632] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:16:16.663 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:16:16.663 { 00:16:16.663 "nqn": "nqn.2016-06.io.spdk:cnode21329", 00:16:16.663 "tgt_name": "foobar", 00:16:16.663 "method": "nvmf_create_subsystem", 00:16:16.663 "req_id": 1 00:16:16.663 } 00:16:16.663 Got JSON-RPC error response 00:16:16.663 response: 00:16:16.663 { 00:16:16.663 "code": -32603, 00:16:16.663 "message": "Unable to find target foobar" 00:16:16.663 }' 00:16:16.663 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:16:16.663 { 00:16:16.663 "nqn": "nqn.2016-06.io.spdk:cnode21329", 00:16:16.663 "tgt_name": "foobar", 00:16:16.663 "method": "nvmf_create_subsystem", 00:16:16.663 "req_id": 1 00:16:16.663 } 00:16:16.663 Got JSON-RPC error response 00:16:16.663 
response: 00:16:16.663 { 00:16:16.663 "code": -32603, 00:16:16.663 "message": "Unable to find target foobar" 00:16:16.663 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:16:16.921 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:16:16.921 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode6277 00:16:16.921 [2024-11-02 11:28:17.311523] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6277: invalid serial number 'SPDKISFASTANDAWESOME' 00:16:17.179 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:16:17.179 { 00:16:17.179 "nqn": "nqn.2016-06.io.spdk:cnode6277", 00:16:17.179 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:16:17.179 "method": "nvmf_create_subsystem", 00:16:17.179 "req_id": 1 00:16:17.179 } 00:16:17.179 Got JSON-RPC error response 00:16:17.179 response: 00:16:17.179 { 00:16:17.179 "code": -32602, 00:16:17.179 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:16:17.179 }' 00:16:17.179 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:16:17.179 { 00:16:17.179 "nqn": "nqn.2016-06.io.spdk:cnode6277", 00:16:17.179 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:16:17.179 "method": "nvmf_create_subsystem", 00:16:17.179 "req_id": 1 00:16:17.179 } 00:16:17.179 Got JSON-RPC error response 00:16:17.179 response: 00:16:17.179 { 00:16:17.179 "code": -32602, 00:16:17.179 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:16:17.179 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:16:17.179 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:16:17.179 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode15600 00:16:17.179 [2024-11-02 11:28:17.576405] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15600: invalid model number 'SPDK_Controller' 00:16:17.437 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:16:17.437 { 00:16:17.437 "nqn": "nqn.2016-06.io.spdk:cnode15600", 00:16:17.437 "model_number": "SPDK_Controller\u001f", 00:16:17.437 "method": "nvmf_create_subsystem", 00:16:17.437 "req_id": 1 00:16:17.437 } 00:16:17.437 Got JSON-RPC error response 00:16:17.437 response: 00:16:17.437 { 00:16:17.437 "code": -32602, 00:16:17.437 "message": "Invalid MN SPDK_Controller\u001f" 00:16:17.437 }' 00:16:17.437 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:16:17.437 { 00:16:17.437 "nqn": "nqn.2016-06.io.spdk:cnode15600", 00:16:17.437 "model_number": "SPDK_Controller\u001f", 00:16:17.437 "method": "nvmf_create_subsystem", 00:16:17.437 "req_id": 1 00:16:17.437 } 00:16:17.437 Got JSON-RPC error response 00:16:17.437 response: 00:16:17.437 { 00:16:17.437 "code": -32602, 00:16:17.437 "message": "Invalid MN SPDK_Controller\u001f" 00:16:17.437 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:16:17.437 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:16:17.437 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:16:17.437 11:28:17 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:16:17.437 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:16:17.437 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:16:17.437 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:16:17.437 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:17.437 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:16:17.437 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:16:17.437 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:16:17.437 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:17.437 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:17.437 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:16:17.437 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:16:17.437 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:16:17.437 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:17.437 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:17.437 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:16:17.437 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 
00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 
126 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ ! 
== \- ]] 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '!.,@%B--R<&femx~z7xo_' 00:16:17.438 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '!.,@%B--R<&femx~z7xo_' nqn.2016-06.io.spdk:cnode7645 00:16:17.697 [2024-11-02 11:28:17.925612] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7645: invalid serial number '!.,@%B--R<&femx~z7xo_' 00:16:17.697 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:16:17.697 { 00:16:17.697 "nqn": "nqn.2016-06.io.spdk:cnode7645", 00:16:17.697 "serial_number": "!.,@%B--R<&femx~z7xo_", 00:16:17.697 "method": "nvmf_create_subsystem", 00:16:17.697 "req_id": 1 00:16:17.697 } 00:16:17.697 Got JSON-RPC error response 00:16:17.697 response: 00:16:17.697 { 00:16:17.697 "code": -32602, 00:16:17.697 "message": "Invalid SN !.,@%B--R<&femx~z7xo_" 00:16:17.697 }' 00:16:17.697 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:16:17.697 { 00:16:17.697 "nqn": "nqn.2016-06.io.spdk:cnode7645", 00:16:17.697 "serial_number": "!.,@%B--R<&femx~z7xo_", 00:16:17.697 "method": "nvmf_create_subsystem", 00:16:17.697 "req_id": 1 00:16:17.697 } 00:16:17.697 Got JSON-RPC error response 00:16:17.697 response: 00:16:17.697 { 00:16:17.697 "code": -32602, 00:16:17.697 "message": "Invalid SN !.,@%B--R<&femx~z7xo_" 00:16:17.697 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:16:17.697 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:16:17.697 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:16:17.697 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:16:17.697 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:16:17.697 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:16:17.697 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:16:17.697 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:17.697 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:16:17.697 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:16:17.697 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:16:17.697 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:17.697 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:17.697 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:16:17.698 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 
00:16:17.698 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:16:17.698 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:17.698 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:17.698 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:16:17.698 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:16:17.698 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:16:17.698 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:17.698 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:17.698 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:16:17.698 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:16:17.698 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:16:17.698 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:17.698 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:17.698 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:16:17.698 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:16:17.698 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:16:17.698 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:17.698 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:17.698 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:16:17.698 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:16:17.698 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:16:17.698 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:17.698 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:17.698 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:16:17.698 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:16:17.698 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:16:17.698 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:17.698 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:17.698 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:16:17.698 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:16:17.698 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:16:17.698 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:17.698 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:17.698 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:16:17.698 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:16:17.698 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:16:17.698 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:17.698 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:17.698 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:16:17.698 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:16:17.698 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:16:17.698 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:17.698 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:17.698 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:16:17.698 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:16:17.698 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:16:17.698 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:17.698 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:17.698 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:16:17.698 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:16:17.698 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:16:17.698 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:17.698 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:17.698 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:16:17.698 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:16:17.698 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:16:17.698 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:17.698 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:17.698 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:16:17.698 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:16:17.698 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:16:17.698 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:17.698 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:17.698 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:16:17.698 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x4b' 00:16:17.698 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:16:17.698 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:17.698 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:17.698 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:16:17.698 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:16:17.698 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:16:17.698 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:17.698 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:17.698 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:16:17.698 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:16:17.698 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:16:17.698 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:17.698 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:17.698 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:16:17.698 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:16:17.698 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:16:17.698 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:17.698 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:17.698 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:16:17.698 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:16:17.698 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:16:17.698 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:17.698 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:17.698 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:16:17.698 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:16:17.698 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:16:17.698 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:17.698 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:17.698 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:16:17.698 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:16:17.698 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:16:17.698 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:17.698 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:17.698 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 
72 00:16:17.698 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:16:17.698 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:16:17.698 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:17.698 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:17.698 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:16:17.698 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:16:17.698 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:16:17.698 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:17.698 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:17.698 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:16:17.698 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:16:17.698 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:16:17.698 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:17.698 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:17.698 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:16:17.698 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:16:17.698 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:16:17.698 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:17.698 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:17.698 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:16:17.698 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:16:17.699 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:16:17.699 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:17.699 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:17.699 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:16:17.699 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:16:17.699 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:16:17.699 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:17.699 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:17.699 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:16:17.699 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:16:17.699 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:16:17.699 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:17.699 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length 
)) 00:16:17.699 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:16:17.699 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:16:17.699 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:16:17.699 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:17.699 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:17.699 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:16:17.699 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:16:17.699 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:16:17.699 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:17.699 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:17.699 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:16:17.699 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:16:17.699 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:16:17.699 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:17.699 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:17.699 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:16:17.699 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:16:17.699 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:16:17.699 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:17.699 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:17.699 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:16:17.699 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:16:17.699 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:16:17.699 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:17.699 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:17.699 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:16:17.699 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:16:17.699 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:16:17.699 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:17.699 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:17.699 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:16:17.699 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:16:17.699 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:16:17.699 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:16:17.699 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:17.699 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:16:17.699 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:16:17.699 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:16:17.699 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:17.699 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:17.699 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:16:17.699 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:16:17.699 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:16:17.699 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:17.699 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:17.699 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:16:17.699 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:16:17.699 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:16:17.699 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:17.699 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:17.699 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:16:17.699 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:16:17.699 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:16:17.699 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:17.699 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:17.699 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:16:17.699 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:16:17.699 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:16:17.699 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:17.699 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:17.699 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:16:17.699 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:16:17.699 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:16:17.699 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:17.699 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:17.699 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ + == \- ]] 00:16:17.699 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '+97coIj!uzpVm7KWC_r#QHW&Pgcx7K|60-}B;=Nha' 00:16:17.699 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '+97coIj!uzpVm7KWC_r#QHW&Pgcx7K|60-}B;=Nha' nqn.2016-06.io.spdk:cnode24589 00:16:17.957 [2024-11-02 11:28:18.330914] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24589: invalid model number '+97coIj!uzpVm7KWC_r#QHW&Pgcx7K|60-}B;=Nha' 00:16:17.957 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:16:17.957 { 00:16:17.957 "nqn": "nqn.2016-06.io.spdk:cnode24589", 00:16:17.957 "model_number": "+97coIj!uzpVm7KWC_r#QHW&Pgcx7K|60-}B;=Nha", 00:16:17.957 "method": "nvmf_create_subsystem", 00:16:17.957 "req_id": 1 00:16:17.957 } 00:16:17.957 Got JSON-RPC error response 00:16:17.957 response: 00:16:17.957 { 00:16:17.957 "code": -32602, 00:16:17.957 "message": "Invalid MN +97coIj!uzpVm7KWC_r#QHW&Pgcx7K|60-}B;=Nha" 00:16:17.957 }' 00:16:17.957 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:16:17.957 { 00:16:17.957 "nqn": "nqn.2016-06.io.spdk:cnode24589", 00:16:17.957 "model_number": "+97coIj!uzpVm7KWC_r#QHW&Pgcx7K|60-}B;=Nha", 00:16:17.957 "method": "nvmf_create_subsystem", 00:16:17.957 "req_id": 1 00:16:17.957 } 00:16:17.957 Got JSON-RPC error response 00:16:17.957 response: 00:16:17.957 { 00:16:17.957 "code": -32602, 00:16:17.957 "message": "Invalid MN +97coIj!uzpVm7KWC_r#QHW&Pgcx7K|60-}B;=Nha" 00:16:17.957 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:16:17.957 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:16:18.215 [2024-11-02 11:28:18.603922] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:18.472 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:16:18.730 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:16:18.730 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:16:18.730 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:16:18.730 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:16:18.730 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:16:18.988 [2024-11-02 11:28:19.169841] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:16:18.988 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:16:18.988 { 00:16:18.988 "nqn": "nqn.2016-06.io.spdk:cnode", 00:16:18.988 "listen_address": { 00:16:18.988 "trtype": "tcp", 00:16:18.988 "traddr": "", 00:16:18.988 "trsvcid": "4421" 00:16:18.988 }, 00:16:18.988 "method": "nvmf_subsystem_remove_listener", 00:16:18.988 "req_id": 1 00:16:18.988 } 00:16:18.988 Got JSON-RPC error response 00:16:18.988 response: 00:16:18.988 { 00:16:18.988 "code": -32602, 00:16:18.988 "message": "Invalid parameters" 00:16:18.988 }' 00:16:18.988 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:16:18.988 { 00:16:18.988 "nqn": "nqn.2016-06.io.spdk:cnode", 00:16:18.988 "listen_address": { 
00:16:18.988 "trtype": "tcp", 00:16:18.988 "traddr": "", 00:16:18.988 "trsvcid": "4421" 00:16:18.988 }, 00:16:18.988 "method": "nvmf_subsystem_remove_listener", 00:16:18.988 "req_id": 1 00:16:18.988 } 00:16:18.988 Got JSON-RPC error response 00:16:18.988 response: 00:16:18.988 { 00:16:18.988 "code": -32602, 00:16:18.988 "message": "Invalid parameters" 00:16:18.988 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:16:18.988 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30470 -i 0 00:16:19.245 [2024-11-02 11:28:19.434688] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30470: invalid cntlid range [0-65519] 00:16:19.245 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:16:19.245 { 00:16:19.245 "nqn": "nqn.2016-06.io.spdk:cnode30470", 00:16:19.245 "min_cntlid": 0, 00:16:19.245 "method": "nvmf_create_subsystem", 00:16:19.245 "req_id": 1 00:16:19.245 } 00:16:19.245 Got JSON-RPC error response 00:16:19.245 response: 00:16:19.245 { 00:16:19.245 "code": -32602, 00:16:19.245 "message": "Invalid cntlid range [0-65519]" 00:16:19.245 }' 00:16:19.245 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:16:19.245 { 00:16:19.245 "nqn": "nqn.2016-06.io.spdk:cnode30470", 00:16:19.245 "min_cntlid": 0, 00:16:19.245 "method": "nvmf_create_subsystem", 00:16:19.245 "req_id": 1 00:16:19.245 } 00:16:19.245 Got JSON-RPC error response 00:16:19.245 response: 00:16:19.245 { 00:16:19.245 "code": -32602, 00:16:19.245 "message": "Invalid cntlid range [0-65519]" 00:16:19.245 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:19.245 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode22040 -i 65520 00:16:19.503 [2024-11-02 11:28:19.695578] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22040: invalid cntlid range [65520-65519] 00:16:19.503 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:16:19.503 { 00:16:19.503 "nqn": "nqn.2016-06.io.spdk:cnode22040", 00:16:19.503 "min_cntlid": 65520, 00:16:19.503 "method": "nvmf_create_subsystem", 00:16:19.503 "req_id": 1 00:16:19.503 } 00:16:19.503 Got JSON-RPC error response 00:16:19.503 response: 00:16:19.503 { 00:16:19.503 "code": -32602, 00:16:19.503 "message": "Invalid cntlid range [65520-65519]" 00:16:19.503 }' 00:16:19.503 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:16:19.503 { 00:16:19.503 "nqn": "nqn.2016-06.io.spdk:cnode22040", 00:16:19.503 "min_cntlid": 65520, 00:16:19.503 "method": "nvmf_create_subsystem", 00:16:19.503 "req_id": 1 00:16:19.503 } 00:16:19.503 Got JSON-RPC error response 00:16:19.503 response: 00:16:19.503 { 00:16:19.503 "code": -32602, 00:16:19.503 "message": "Invalid cntlid range [65520-65519]" 00:16:19.503 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:19.503 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10664 -I 0 00:16:19.761 [2024-11-02 11:28:19.972522] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10664: invalid 
cntlid range [1-0] 00:16:19.761 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:16:19.761 { 00:16:19.761 "nqn": "nqn.2016-06.io.spdk:cnode10664", 00:16:19.761 "max_cntlid": 0, 00:16:19.761 "method": "nvmf_create_subsystem", 00:16:19.761 "req_id": 1 00:16:19.761 } 00:16:19.761 Got JSON-RPC error response 00:16:19.761 response: 00:16:19.761 { 00:16:19.761 "code": -32602, 00:16:19.761 "message": "Invalid cntlid range [1-0]" 00:16:19.761 }' 00:16:19.761 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:16:19.761 { 00:16:19.761 "nqn": "nqn.2016-06.io.spdk:cnode10664", 00:16:19.761 "max_cntlid": 0, 00:16:19.761 "method": "nvmf_create_subsystem", 00:16:19.761 "req_id": 1 00:16:19.761 } 00:16:19.761 Got JSON-RPC error response 00:16:19.761 response: 00:16:19.761 { 00:16:19.761 "code": -32602, 00:16:19.761 "message": "Invalid cntlid range [1-0]" 00:16:19.761 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:19.761 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4228 -I 65520 00:16:20.021 [2024-11-02 11:28:20.269589] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4228: invalid cntlid range [1-65520] 00:16:20.021 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:16:20.021 { 00:16:20.021 "nqn": "nqn.2016-06.io.spdk:cnode4228", 00:16:20.021 "max_cntlid": 65520, 00:16:20.021 "method": "nvmf_create_subsystem", 00:16:20.021 "req_id": 1 00:16:20.021 } 00:16:20.021 Got JSON-RPC error response 00:16:20.021 response: 00:16:20.021 { 00:16:20.021 "code": -32602, 00:16:20.021 "message": "Invalid cntlid range [1-65520]" 00:16:20.021 }' 00:16:20.021 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:16:20.021 { 00:16:20.021 "nqn": "nqn.2016-06.io.spdk:cnode4228", 00:16:20.021 "max_cntlid": 65520, 00:16:20.021 "method": "nvmf_create_subsystem", 00:16:20.021 "req_id": 1 00:16:20.021 } 00:16:20.021 Got JSON-RPC error response 00:16:20.021 response: 00:16:20.021 { 00:16:20.021 "code": -32602, 00:16:20.021 "message": "Invalid cntlid range [1-65520]" 00:16:20.021 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:20.021 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23335 -i 6 -I 5 00:16:20.278 [2024-11-02 11:28:20.542500] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23335: invalid cntlid range [6-5] 00:16:20.278 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:16:20.278 { 00:16:20.278 "nqn": "nqn.2016-06.io.spdk:cnode23335", 00:16:20.279 "min_cntlid": 6, 00:16:20.279 "max_cntlid": 5, 00:16:20.279 "method": "nvmf_create_subsystem", 00:16:20.279 "req_id": 1 00:16:20.279 } 00:16:20.279 Got JSON-RPC error response 00:16:20.279 response: 00:16:20.279 { 00:16:20.279 "code": -32602, 00:16:20.279 "message": "Invalid cntlid range [6-5]" 00:16:20.279 }' 00:16:20.279 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:16:20.279 { 00:16:20.279 "nqn": "nqn.2016-06.io.spdk:cnode23335", 00:16:20.279 "min_cntlid": 6, 00:16:20.279 "max_cntlid": 5, 00:16:20.279 "method": "nvmf_create_subsystem", 
00:16:20.279 "req_id": 1 00:16:20.279 } 00:16:20.279 Got JSON-RPC error response 00:16:20.279 response: 00:16:20.279 { 00:16:20.279 "code": -32602, 00:16:20.279 "message": "Invalid cntlid range [6-5]" 00:16:20.279 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:20.279 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:16:20.537 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:16:20.537 { 00:16:20.537 "name": "foobar", 00:16:20.537 "method": "nvmf_delete_target", 00:16:20.537 "req_id": 1 00:16:20.537 } 00:16:20.537 Got JSON-RPC error response 00:16:20.537 response: 00:16:20.537 { 00:16:20.537 "code": -32602, 00:16:20.537 "message": "The specified target doesn'\''t exist, cannot delete it." 00:16:20.537 }' 00:16:20.537 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:16:20.537 { 00:16:20.537 "name": "foobar", 00:16:20.537 "method": "nvmf_delete_target", 00:16:20.537 "req_id": 1 00:16:20.537 } 00:16:20.537 Got JSON-RPC error response 00:16:20.537 response: 00:16:20.537 { 00:16:20.537 "code": -32602, 00:16:20.537 "message": "The specified target doesn't exist, cannot delete it." 00:16:20.537 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:16:20.537 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:16:20.537 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:16:20.537 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:20.537 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:16:20.537 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:20.537 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:16:20.537 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:20.537 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:20.537 rmmod nvme_tcp 00:16:20.537 rmmod nvme_fabrics 00:16:20.537 rmmod nvme_keyring 00:16:20.537 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:20.537 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:16:20.537 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:16:20.537 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 3794624 ']' 00:16:20.537 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 3794624 00:16:20.537 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@952 -- # '[' -z 3794624 ']' 00:16:20.537 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # kill -0 3794624 00:16:20.537 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@957 -- # uname 00:16:20.537 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:20.537 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3794624 00:16:20.537 11:28:20 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:20.537 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:20.537 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3794624' 00:16:20.537 killing process with pid 3794624 00:16:20.537 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@971 -- # kill 3794624 00:16:20.537 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@976 -- # wait 3794624 00:16:20.797 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:20.797 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:20.797 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:20.797 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:16:20.797 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:16:20.797 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:20.797 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:16:20.797 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:20.797 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:20.797 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:20.797 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:20.797 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:22.754 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:22.754 00:16:22.754 real 0m8.969s 00:16:22.754 user 0m21.568s 00:16:22.754 sys 0m2.496s 00:16:22.754 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:22.754 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:22.754 ************************************ 00:16:22.754 END TEST nvmf_invalid 00:16:22.754 ************************************ 00:16:22.754 11:28:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:16:22.754 11:28:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:16:22.754 11:28:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:22.754 11:28:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:22.754 ************************************ 00:16:22.754 START TEST nvmf_connect_stress 00:16:22.754 ************************************ 00:16:22.754 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:16:23.013 * Looking for test storage... 
00:16:23.013 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:23.013 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:23.013 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:16:23.013 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:23.013 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:23.013 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:23.013 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:23.013 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:23.013 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:16:23.013 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:16:23.013 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:16:23.013 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:16:23.013 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:16:23.013 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:16:23.013 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:16:23.013 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:23.013 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:16:23.013 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:16:23.013 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:23.013 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:23.013 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:16:23.013 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:16:23.013 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:23.013 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:16:23.013 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:16:23.013 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:16:23.013 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:16:23.013 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:23.013 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:16:23.013 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:16:23.013 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:23.013 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:23.013 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:16:23.013 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:23.013 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:23.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.013 --rc genhtml_branch_coverage=1 00:16:23.013 --rc genhtml_function_coverage=1 00:16:23.013 --rc genhtml_legend=1 00:16:23.013 --rc geninfo_all_blocks=1 00:16:23.013 --rc geninfo_unexecuted_blocks=1 00:16:23.013 00:16:23.013 ' 00:16:23.013 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:23.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.013 --rc genhtml_branch_coverage=1 00:16:23.013 --rc genhtml_function_coverage=1 00:16:23.013 --rc genhtml_legend=1 00:16:23.013 --rc geninfo_all_blocks=1 00:16:23.013 --rc geninfo_unexecuted_blocks=1 00:16:23.013 00:16:23.013 ' 00:16:23.013 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:23.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.013 --rc genhtml_branch_coverage=1 00:16:23.013 --rc genhtml_function_coverage=1 00:16:23.013 --rc genhtml_legend=1 00:16:23.013 --rc geninfo_all_blocks=1 00:16:23.013 --rc geninfo_unexecuted_blocks=1 00:16:23.013 00:16:23.013 ' 00:16:23.013 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:23.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.013 --rc genhtml_branch_coverage=1 00:16:23.013 --rc genhtml_function_coverage=1 00:16:23.013 --rc genhtml_legend=1 00:16:23.013 --rc geninfo_all_blocks=1 00:16:23.013 --rc geninfo_unexecuted_blocks=1 00:16:23.013 00:16:23.013 ' 00:16:23.013 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:23.013 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:16:23.013 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:23.013 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:23.013 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:23.013 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:23.013 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:23.013 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:23.013 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:23.013 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:23.013 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:23.013 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:23.013 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:23.013 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:23.013 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:23.013 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:23.013 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:23.013 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:23.013 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:23.013 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:16:23.013 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:23.013 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:23.013 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:23.013 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.014 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.014 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.014 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:16:23.014 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.014 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:16:23.014 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:23.014 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:23.014 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:23.014 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:23.014 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:23.014 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:16:23.014 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:23.014 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:23.014 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:23.014 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:23.014 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:16:23.014 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:23.014 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:23.014 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:23.014 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:23.014 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:23.014 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:23.014 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:23.014 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:23.014 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:23.014 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:23.014 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:16:23.014 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:24.916 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:24.916 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:16:24.916 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:24.916 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:24.916 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:24.916 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:24.916 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:24.916 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:16:24.916 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:24.916 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:16:24.916 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:16:25.176 11:28:25 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:25.176 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:25.176 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:25.176 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:25.176 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:25.176 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:25.176 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.373 ms 00:16:25.176 00:16:25.176 --- 10.0.0.2 ping statistics --- 00:16:25.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:25.176 rtt min/avg/max/mdev = 0.373/0.373/0.373/0.000 ms 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:25.176 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:25.176 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:16:25.176 00:16:25.176 --- 10.0.0.1 ping statistics --- 00:16:25.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:25.176 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:25.176 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:16:25.177 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:25.177 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:25.177 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:25.177 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:25.177 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:25.177 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:25.177 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:25.177 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:16:25.177 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:25.177 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:25.177 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:25.177 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=3797266 00:16:25.177 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:25.177 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 3797266 00:16:25.177 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@833 -- # '[' -z 3797266 ']' 00:16:25.177 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:25.177 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:25.177 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:16:25.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:25.177 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:25.177 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:25.177 [2024-11-02 11:28:25.550844] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:16:25.177 [2024-11-02 11:28:25.550928] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:25.435 [2024-11-02 11:28:25.630198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:25.435 [2024-11-02 11:28:25.676694] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:25.435 [2024-11-02 11:28:25.676760] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:25.435 [2024-11-02 11:28:25.676775] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:25.435 [2024-11-02 11:28:25.676786] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:25.435 [2024-11-02 11:28:25.676796] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:25.435 [2024-11-02 11:28:25.678343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:25.435 [2024-11-02 11:28:25.678401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:25.435 [2024-11-02 11:28:25.678406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:25.435 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:25.435 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@866 -- # return 0 00:16:25.435 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:25.435 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:25.436 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:25.436 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:25.436 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:25.436 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.436 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:25.436 [2024-11-02 11:28:25.823772] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:25.436 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.436 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:25.436 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:25.436 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:25.436 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.436 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:25.436 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.436 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:25.694 [2024-11-02 11:28:25.840971] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:25.694 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.694 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:25.694 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.694 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:25.694 NULL1 00:16:25.694 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.694 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3797340 00:16:25.694 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:16:25.694 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:25.694 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:25.694 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:16:25.694 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:25.694 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:25.694 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:25.694 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:25.694 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:25.694 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:25.694 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:25.694 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:25.694 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:25.694 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:25.694 11:28:25 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:25.694 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:25.694 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:25.694 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:25.694 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:25.694 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:25.694 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:25.694 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:25.694 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:25.694 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:25.694 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:25.694 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:25.694 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:25.694 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:25.694 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:25.694 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:25.694 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:25.694 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:25.694 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:25.694 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:25.694 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:25.694 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:25.694 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:25.694 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:25.694 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:25.694 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:25.694 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:25.694 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:25.694 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:25.694 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:25.694 11:28:25 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3797340 00:16:25.694 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:25.694 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.694 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:25.952 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.952 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3797340 00:16:25.952 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:25.952 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.952 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:26.210 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.210 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3797340 00:16:26.210 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:26.210 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.210 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:26.468 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.468 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3797340 00:16:26.468 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:26.468 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.468 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:27.033 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.033 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3797340 00:16:27.033 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:27.033 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.033 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:27.291 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.291 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3797340 00:16:27.291 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:27.291 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.291 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:27.549 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.549 11:28:27 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3797340 00:16:27.549 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:27.549 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.549 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:27.806 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.806 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3797340 00:16:27.806 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:27.806 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.806 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:28.371 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.372 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3797340 00:16:28.372 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:28.372 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.372 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:28.629 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.629 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3797340 00:16:28.629 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:28.629 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.629 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:28.887 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.887 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3797340 00:16:28.887 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:28.887 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.887 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:29.144 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.144 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3797340 00:16:29.144 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:29.144 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.144 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:29.402 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.402 11:28:29 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3797340 00:16:29.402 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:29.402 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.402 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:29.967 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.967 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3797340 00:16:29.967 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:29.967 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.967 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:30.225 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.225 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3797340 00:16:30.225 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:30.225 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.225 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:30.482 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.482 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3797340 00:16:30.482 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:30.482 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.482 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:30.740 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.740 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3797340 00:16:30.740 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:30.740 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.740 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:30.998 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.998 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3797340 00:16:30.998 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:30.998 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.998 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:31.563 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.563 11:28:31 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3797340 00:16:31.563 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:31.563 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.563 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:31.820 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.820 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3797340 00:16:31.820 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:31.820 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.820 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:32.078 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.078 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3797340 00:16:32.078 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:32.078 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.078 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:32.336 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.336 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3797340 00:16:32.336 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:32.336 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.336 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:32.594 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.594 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3797340 00:16:32.594 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:32.594 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.594 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:33.159 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.159 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3797340 00:16:33.159 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:33.159 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.159 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:33.416 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.416 11:28:33 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3797340 00:16:33.416 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:33.416 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.416 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:33.674 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.674 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3797340 00:16:33.674 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:33.674 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.674 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:33.933 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.933 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3797340 00:16:33.933 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:33.933 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.933 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:34.191 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.191 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3797340 00:16:34.191 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:34.191 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.191 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:34.756 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.756 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3797340 00:16:34.756 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:34.756 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.756 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:35.014 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.014 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3797340 00:16:35.014 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:35.014 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.014 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:35.271 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.271 11:28:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3797340 00:16:35.271 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:35.271 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.271 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:35.529 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.529 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3797340 00:16:35.529 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:35.529 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.529 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:35.786 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:35.786 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.786 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3797340 00:16:35.786 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3797340) - No such process 00:16:35.786 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3797340 00:16:35.786 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:35.786 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:16:35.786 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:16:35.786 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:35.787 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:16:36.045 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:36.045 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:16:36.045 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:36.045 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:36.045 rmmod nvme_tcp 00:16:36.045 rmmod nvme_fabrics 00:16:36.045 rmmod nvme_keyring 00:16:36.045 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:36.045 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:16:36.045 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:16:36.045 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 3797266 ']' 00:16:36.045 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 3797266 00:16:36.045 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@952 -- # '[' -z 3797266 ']' 00:16:36.045 11:28:36 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # kill -0 3797266 00:16:36.045 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # uname 00:16:36.045 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:36.045 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3797266 00:16:36.045 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:16:36.045 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:16:36.045 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3797266' 00:16:36.045 killing process with pid 3797266 00:16:36.045 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@971 -- # kill 3797266 00:16:36.045 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@976 -- # wait 3797266 00:16:36.303 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:36.303 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:36.303 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:36.303 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:16:36.303 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:16:36.303 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:36.303 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:16:36.303 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:36.303 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:36.303 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:36.303 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:36.303 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:38.207 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:38.207 00:16:38.207 real 0m15.444s 00:16:38.207 user 0m38.636s 00:16:38.207 sys 0m5.854s 00:16:38.207 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:38.207 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:38.207 ************************************ 00:16:38.207 END TEST nvmf_connect_stress 00:16:38.207 ************************************ 00:16:38.207 11:28:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:16:38.207 11:28:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:16:38.207 
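The block above is the tail of the connect_stress run: target/connect_stress.sh keeps checking the stress client (pid 3797340) with kill -0 and issuing rpc_cmd until line 34 finally reports "No such process", then reaps the client and tears the target down. A minimal bash sketch of that monitor-and-teardown pattern, reconstructed only from the commands visible in the trace (rpc_cmd and nvmftestfini are SPDK test-harness helpers sourced earlier; the while-loop form and the bare rpc_cmd call are simplifications, not the script's literal text):

stress_pid=3797340                            # backgrounded connect_stress client, PID taken from the log
while kill -0 "$stress_pid" 2>/dev/null; do   # connect_stress.sh line 34: is the client still alive?
    rpc_cmd                                   # line 35: keep driving the target with RPCs in the meantime
done
wait "$stress_pid"                            # line 38: reap the client once kill -0 starts failing
rm -f rpc.txt                                 # line 39: drop the temporary RPC file (test/nvmf/target/rpc.txt above)
trap - SIGINT SIGTERM EXIT                    # line 41: clear the error trap before normal teardown
nvmftestfini                                  # line 43: unload nvme-tcp/nvme-fabrics, kill nvmf_tgt (pid 3797266),
                                              # restore iptables and flush the test interfaces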
11:28:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:38.207 11:28:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:38.207 ************************************ 00:16:38.207 START TEST nvmf_fused_ordering 00:16:38.207 ************************************ 00:16:38.207 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:16:38.465 * Looking for test storage... 00:16:38.465 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:38.465 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:38.465 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lcov --version 00:16:38.465 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:38.465 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:38.465 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:38.465 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:38.465 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:38.465 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:16:38.466 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:16:38.466 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:16:38.466 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:16:38.466 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:16:38.466 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:16:38.466 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:16:38.466 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:38.466 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:16:38.466 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:16:38.466 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:38.466 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:38.466 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:16:38.466 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:16:38.466 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:38.466 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:16:38.466 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:16:38.466 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:16:38.466 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:16:38.466 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:38.466 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:16:38.466 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:16:38.466 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:38.466 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:38.466 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:16:38.466 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:38.466 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:38.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.466 --rc genhtml_branch_coverage=1 00:16:38.466 --rc genhtml_function_coverage=1 00:16:38.466 --rc genhtml_legend=1 00:16:38.466 --rc geninfo_all_blocks=1 00:16:38.466 --rc geninfo_unexecuted_blocks=1 00:16:38.466 00:16:38.466 ' 00:16:38.466 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:38.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.466 --rc genhtml_branch_coverage=1 00:16:38.466 --rc genhtml_function_coverage=1 00:16:38.466 --rc genhtml_legend=1 00:16:38.466 --rc geninfo_all_blocks=1 00:16:38.466 --rc geninfo_unexecuted_blocks=1 00:16:38.466 00:16:38.466 ' 00:16:38.466 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:38.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.466 --rc genhtml_branch_coverage=1 00:16:38.466 --rc genhtml_function_coverage=1 00:16:38.466 --rc genhtml_legend=1 00:16:38.466 --rc geninfo_all_blocks=1 00:16:38.466 --rc geninfo_unexecuted_blocks=1 00:16:38.466 00:16:38.466 ' 00:16:38.466 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:38.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.466 --rc genhtml_branch_coverage=1 00:16:38.466 --rc genhtml_function_coverage=1 00:16:38.466 --rc genhtml_legend=1 00:16:38.466 --rc geninfo_all_blocks=1 00:16:38.466 --rc geninfo_unexecuted_blocks=1 00:16:38.466 00:16:38.466 ' 00:16:38.466 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:38.466 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:16:38.466 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:38.466 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:38.466 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:38.466 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:38.466 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:38.466 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:38.466 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:38.466 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:38.466 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:38.466 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:38.466 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:38.466 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:38.466 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:38.466 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:38.466 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:38.466 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:38.466 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:38.466 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:16:38.466 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:38.466 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:38.466 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:38.466 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.466 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.466 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.466 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:16:38.466 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.466 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:16:38.466 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:38.466 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:38.466 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:38.466 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:38.466 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:38.466 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:16:38.466 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:38.466 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:38.466 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:38.466 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:38.466 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:16:38.466 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:38.466 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:38.466 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:38.466 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:38.466 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:38.466 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:38.466 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:38.466 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:38.466 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:38.466 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:38.466 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:16:38.466 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:40.997 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:40.997 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:16:40.997 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:40.997 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:40.997 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:40.997 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:40.997 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:40.997 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:16:40.997 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:40.997 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:16:40.997 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:16:40.997 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:16:40.997 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:16:40.997 11:28:40 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:16:40.997 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:16:40.997 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:40.997 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:40.997 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:40.997 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:40.997 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:40.997 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:40.997 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:40.997 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:40.997 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:40.997 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:40.997 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:40.997 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:40.997 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:40.997 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:40.997 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:40.997 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:40.997 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:40.997 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:40.997 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:40.997 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:40.997 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:40.997 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:40.997 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:40.997 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:40.997 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:40.997 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:40.997 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:40.997 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:40.997 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:40.997 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:40.997 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:40.997 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:40.997 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:40.997 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:40.997 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:40.997 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:40.997 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:40.997 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:40.997 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:40.997 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:40.997 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:40.997 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:40.997 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:40.997 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:40.997 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:40.997 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:40.997 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:40.997 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:40.997 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:40.997 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:40.997 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:40.997 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:40.997 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:40.997 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:40.997 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:40.997 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:40.997 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:16:40.997 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:40.997 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:16:40.997 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:40.997 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:40.997 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:40.997 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:40.997 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:40.997 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:40.997 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:40.997 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:40.997 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:40.997 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:40.997 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:40.998 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:40.998 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:40.998 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:40.998 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:40.998 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:40.998 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:40.998 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:40.998 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:40.998 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:40.998 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:40.998 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:40.998 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:40.998 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:40.998 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:40.998 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:40.998 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:40.998 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:16:40.998 00:16:40.998 --- 10.0.0.2 ping statistics --- 00:16:40.998 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:40.998 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:16:40.998 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:40.998 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:40.998 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:16:40.998 00:16:40.998 --- 10.0.0.1 ping statistics --- 00:16:40.998 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:40.998 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:16:40.998 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:40.998 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:16:40.998 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:40.998 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:40.998 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:40.998 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:40.998 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:40.998 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:40.998 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:40.998 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:16:40.998 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:40.998 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:40.998 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:40.998 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=3800569 00:16:40.998 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:40.998 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 3800569 00:16:40.998 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # '[' -z 3800569 ']' 00:16:40.998 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:40.998 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:40.998 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:16:40.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:40.998 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:40.998 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:40.998 [2024-11-02 11:28:41.156131] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:16:40.998 [2024-11-02 11:28:41.156231] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:40.998 [2024-11-02 11:28:41.236685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:40.998 [2024-11-02 11:28:41.285644] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:40.998 [2024-11-02 11:28:41.285721] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:40.998 [2024-11-02 11:28:41.285748] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:40.998 [2024-11-02 11:28:41.285761] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:40.998 [2024-11-02 11:28:41.285772] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:40.998 [2024-11-02 11:28:41.286440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:41.257 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:41.257 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@866 -- # return 0 00:16:41.257 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:41.257 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:41.257 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:41.257 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:41.257 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:41.257 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.257 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:41.257 [2024-11-02 11:28:41.428771] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:41.257 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.257 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:41.257 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.257 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:41.257 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:16:41.257 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:41.257 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.257 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:41.257 [2024-11-02 11:28:41.444986] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:41.257 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.257 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:41.257 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.257 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:41.257 NULL1 00:16:41.257 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.257 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:16:41.257 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.257 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:41.257 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.257 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:16:41.257 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.257 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:41.257 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.257 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:16:41.257 [2024-11-02 11:28:41.490575] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 
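Everything the fused_ordering client needs is now in place: the two E810 ports found above (cvl_0_0 and cvl_0_1) have been split across a network namespace, the target application was started inside that namespace, and the subsystem was configured over RPC. A condensed bash sketch of that setup, using only values that appear in the trace (interface names, addresses, NQN and RPC arguments are copied verbatim; rpc_cmd is the harness wrapper around scripts/rpc.py, paths are shortened relative to the spdk checkout, and running these as a standalone script is an assumption):

# Topology: one port moves into a namespace and acts as the target, the other stays in the root namespace as the initiator
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                      # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0        # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT             # open the NVMe/TCP port
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # reachability both ways, as in the trace

# Target application and subsystem, exactly as issued through rpc_cmd above
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd bdev_null_create NULL1 1000 512        # 1000 MiB null bdev, 512-byte blocks ("size: 1GB" in the attach line below)
rpc_cmd bdev_wait_for_examine
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

# Client invocation that produces the fused_ordering(N) lines that follow
./test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'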
00:16:41.257 [2024-11-02 11:28:41.490617] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3800592 ] 00:16:41.832 Attached to nqn.2016-06.io.spdk:cnode1 00:16:41.832 Namespace ID: 1 size: 1GB 00:16:41.832 fused_ordering(0) 00:16:41.832 fused_ordering(1) 00:16:41.832 fused_ordering(2) 00:16:41.832 fused_ordering(3) 00:16:41.832 fused_ordering(4) 00:16:41.832 fused_ordering(5) 00:16:41.832 fused_ordering(6) 00:16:41.832 fused_ordering(7) 00:16:41.832 fused_ordering(8) 00:16:41.832 fused_ordering(9) 00:16:41.832 fused_ordering(10) 00:16:41.832 fused_ordering(11) 00:16:41.832 fused_ordering(12) 00:16:41.832 fused_ordering(13) 00:16:41.832 fused_ordering(14) 00:16:41.832 fused_ordering(15) 00:16:41.832 fused_ordering(16) 00:16:41.832 fused_ordering(17) 00:16:41.832 fused_ordering(18) 00:16:41.832 fused_ordering(19) 00:16:41.832 fused_ordering(20) 00:16:41.832 fused_ordering(21) 00:16:41.832 fused_ordering(22) 00:16:41.832 fused_ordering(23) 00:16:41.832 fused_ordering(24) 00:16:41.832 fused_ordering(25) 00:16:41.832 fused_ordering(26) 00:16:41.832 fused_ordering(27) 00:16:41.832 fused_ordering(28) 00:16:41.832 fused_ordering(29) 00:16:41.832 fused_ordering(30) 00:16:41.832 fused_ordering(31) 00:16:41.832 fused_ordering(32) 00:16:41.832 fused_ordering(33) 00:16:41.832 fused_ordering(34) 00:16:41.832 fused_ordering(35) 00:16:41.832 fused_ordering(36) 00:16:41.832 fused_ordering(37) 00:16:41.832 fused_ordering(38) 00:16:41.832 fused_ordering(39) 00:16:41.832 fused_ordering(40) 00:16:41.832 fused_ordering(41) 00:16:41.832 fused_ordering(42) 00:16:41.832 fused_ordering(43) 00:16:41.832 fused_ordering(44) 00:16:41.832 fused_ordering(45) 00:16:41.832 fused_ordering(46) 00:16:41.832 fused_ordering(47) 00:16:41.832 fused_ordering(48) 00:16:41.832 fused_ordering(49) 00:16:41.832 fused_ordering(50) 00:16:41.832 fused_ordering(51) 00:16:41.832 fused_ordering(52) 00:16:41.832 fused_ordering(53) 00:16:41.832 fused_ordering(54) 00:16:41.832 fused_ordering(55) 00:16:41.832 fused_ordering(56) 00:16:41.832 fused_ordering(57) 00:16:41.832 fused_ordering(58) 00:16:41.832 fused_ordering(59) 00:16:41.832 fused_ordering(60) 00:16:41.832 fused_ordering(61) 00:16:41.832 fused_ordering(62) 00:16:41.832 fused_ordering(63) 00:16:41.832 fused_ordering(64) 00:16:41.832 fused_ordering(65) 00:16:41.832 fused_ordering(66) 00:16:41.832 fused_ordering(67) 00:16:41.832 fused_ordering(68) 00:16:41.832 fused_ordering(69) 00:16:41.832 fused_ordering(70) 00:16:41.832 fused_ordering(71) 00:16:41.832 fused_ordering(72) 00:16:41.832 fused_ordering(73) 00:16:41.832 fused_ordering(74) 00:16:41.832 fused_ordering(75) 00:16:41.832 fused_ordering(76) 00:16:41.832 fused_ordering(77) 00:16:41.832 fused_ordering(78) 00:16:41.832 fused_ordering(79) 00:16:41.832 fused_ordering(80) 00:16:41.832 fused_ordering(81) 00:16:41.832 fused_ordering(82) 00:16:41.832 fused_ordering(83) 00:16:41.832 fused_ordering(84) 00:16:41.832 fused_ordering(85) 00:16:41.832 fused_ordering(86) 00:16:41.832 fused_ordering(87) 00:16:41.832 fused_ordering(88) 00:16:41.832 fused_ordering(89) 00:16:41.832 fused_ordering(90) 00:16:41.832 fused_ordering(91) 00:16:41.832 fused_ordering(92) 00:16:41.832 fused_ordering(93) 00:16:41.832 fused_ordering(94) 00:16:41.832 fused_ordering(95) 00:16:41.832 fused_ordering(96) 00:16:41.832 fused_ordering(97) 00:16:41.832 fused_ordering(98) 
00:16:41.832 fused_ordering(99) [fused_ordering(100) through fused_ordering(958) elided: the test emits one fused_ordering(N) line per iteration, with timestamps advancing from 00:16:41.832 to 00:16:44.162]
00:16:44.162 fused_ordering(959) 00:16:44.162 fused_ordering(960) 00:16:44.162 fused_ordering(961) 00:16:44.162 fused_ordering(962) 00:16:44.162 fused_ordering(963) 00:16:44.162 fused_ordering(964) 00:16:44.162 fused_ordering(965) 00:16:44.162 fused_ordering(966) 00:16:44.162 fused_ordering(967) 00:16:44.162 fused_ordering(968) 00:16:44.162 fused_ordering(969) 00:16:44.162 fused_ordering(970) 00:16:44.162 fused_ordering(971) 00:16:44.162 fused_ordering(972) 00:16:44.162 fused_ordering(973) 00:16:44.162 fused_ordering(974) 00:16:44.162 fused_ordering(975) 00:16:44.162 fused_ordering(976) 00:16:44.162 fused_ordering(977) 00:16:44.162 fused_ordering(978) 00:16:44.162 fused_ordering(979) 00:16:44.162 fused_ordering(980) 00:16:44.162 fused_ordering(981) 00:16:44.162 fused_ordering(982) 00:16:44.162 fused_ordering(983) 00:16:44.162 fused_ordering(984) 00:16:44.162 fused_ordering(985) 00:16:44.162 fused_ordering(986) 00:16:44.162 fused_ordering(987) 00:16:44.162 fused_ordering(988) 00:16:44.162 fused_ordering(989) 00:16:44.162 fused_ordering(990) 00:16:44.162 fused_ordering(991) 00:16:44.162 fused_ordering(992) 00:16:44.162 fused_ordering(993) 00:16:44.162 fused_ordering(994) 00:16:44.162 fused_ordering(995) 00:16:44.162 fused_ordering(996) 00:16:44.162 fused_ordering(997) 00:16:44.162 fused_ordering(998) 00:16:44.162 fused_ordering(999) 00:16:44.162 fused_ordering(1000) 00:16:44.162 fused_ordering(1001) 00:16:44.162 fused_ordering(1002) 00:16:44.162 fused_ordering(1003) 00:16:44.162 fused_ordering(1004) 00:16:44.162 fused_ordering(1005) 00:16:44.162 fused_ordering(1006) 00:16:44.162 fused_ordering(1007) 00:16:44.162 fused_ordering(1008) 00:16:44.162 fused_ordering(1009) 00:16:44.162 fused_ordering(1010) 00:16:44.162 fused_ordering(1011) 00:16:44.162 fused_ordering(1012) 00:16:44.162 fused_ordering(1013) 00:16:44.162 fused_ordering(1014) 00:16:44.162 fused_ordering(1015) 00:16:44.162 fused_ordering(1016) 00:16:44.162 fused_ordering(1017) 00:16:44.162 fused_ordering(1018) 00:16:44.162 fused_ordering(1019) 00:16:44.162 fused_ordering(1020) 00:16:44.162 fused_ordering(1021) 00:16:44.162 fused_ordering(1022) 00:16:44.162 fused_ordering(1023) 00:16:44.162 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:16:44.162 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:16:44.162 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:44.162 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:16:44.162 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:44.162 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:16:44.162 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:44.162 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:44.162 rmmod nvme_tcp 00:16:44.162 rmmod nvme_fabrics 00:16:44.162 rmmod nvme_keyring 00:16:44.162 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:44.162 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:16:44.162 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:16:44.162 11:28:44 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 3800569 ']' 00:16:44.162 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 3800569 00:16:44.162 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # '[' -z 3800569 ']' 00:16:44.162 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # kill -0 3800569 00:16:44.162 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # uname 00:16:44.162 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:44.162 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3800569 00:16:44.162 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:16:44.162 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:16:44.162 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3800569' 00:16:44.162 killing process with pid 3800569 00:16:44.162 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@971 -- # kill 3800569 00:16:44.162 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@976 -- # wait 3800569 00:16:44.420 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:44.420 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:44.420 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:44.420 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:16:44.420 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:16:44.420 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:44.420 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:16:44.420 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:44.420 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:44.420 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:44.420 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:44.420 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:46.954 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:46.954 00:16:46.954 real 0m8.186s 00:16:46.954 user 0m5.628s 00:16:46.954 sys 0m3.717s 00:16:46.954 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:46.954 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:46.954 ************************************ 00:16:46.954 END TEST nvmf_fused_ordering 00:16:46.954 
************************************ 00:16:46.954 11:28:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:16:46.954 11:28:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:16:46.954 11:28:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:46.954 11:28:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:46.954 ************************************ 00:16:46.954 START TEST nvmf_ns_masking 00:16:46.954 ************************************ 00:16:46.954 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1127 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:16:46.954 * Looking for test storage... 00:16:46.954 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:46.954 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:46.954 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lcov --version 00:16:46.954 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:46.954 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:46.954 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:46.954 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:46.954 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:46.954 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:16:46.954 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:16:46.954 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:16:46.954 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:16:46.954 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:16:46.954 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:16:46.954 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:16:46.954 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:46.954 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:16:46.954 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:16:46.954 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:46.954 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:46.954 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:16:46.954 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:16:46.954 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:46.954 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:16:46.954 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:16:46.954 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:16:46.954 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:16:46.954 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:46.954 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:16:46.954 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:16:46.954 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:46.954 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:46.954 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:16:46.954 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:46.954 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:46.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:46.954 --rc genhtml_branch_coverage=1 00:16:46.954 --rc genhtml_function_coverage=1 00:16:46.954 --rc genhtml_legend=1 00:16:46.954 --rc geninfo_all_blocks=1 00:16:46.954 --rc geninfo_unexecuted_blocks=1 00:16:46.954 00:16:46.954 ' 00:16:46.954 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:46.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:46.954 --rc genhtml_branch_coverage=1 00:16:46.954 --rc genhtml_function_coverage=1 00:16:46.954 --rc genhtml_legend=1 00:16:46.954 --rc geninfo_all_blocks=1 00:16:46.954 --rc geninfo_unexecuted_blocks=1 00:16:46.954 00:16:46.954 ' 00:16:46.954 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:46.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:46.954 --rc genhtml_branch_coverage=1 00:16:46.954 --rc genhtml_function_coverage=1 00:16:46.954 --rc genhtml_legend=1 00:16:46.954 --rc geninfo_all_blocks=1 00:16:46.954 --rc geninfo_unexecuted_blocks=1 00:16:46.954 00:16:46.954 ' 00:16:46.954 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:46.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:46.954 --rc genhtml_branch_coverage=1 00:16:46.954 --rc genhtml_function_coverage=1 00:16:46.954 --rc genhtml_legend=1 00:16:46.954 --rc geninfo_all_blocks=1 00:16:46.954 --rc geninfo_unexecuted_blocks=1 00:16:46.954 00:16:46.954 ' 00:16:46.954 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:46.954 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:16:46.954 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:46.954 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:46.954 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:46.954 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:46.954 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:46.954 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:46.954 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:46.954 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:46.954 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:46.954 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:46.954 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:46.954 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:46.954 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:46.954 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:46.954 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:46.954 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:46.954 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:46.954 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:16:46.954 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:46.954 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:46.954 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:46.954 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.954 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.955 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.955 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:16:46.955 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.955 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:16:46.955 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:46.955 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:46.955 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:46.955 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:46.955 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:46.955 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:46.955 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:46.955 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:46.955 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:46.955 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:46.955 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:46.955 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:16:46.955 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:16:46.955 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:16:46.955 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=fb2f277b-0da2-4c67-a562-f1b6627b7f8c 00:16:46.955 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:16:46.955 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=99295c2a-d2ff-4393-9a72-367e1626f718 00:16:46.955 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:16:46.955 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:16:46.955 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:16:46.955 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:16:46.955 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=84049359-e0c7-4e1a-8e9a-891b09451b00 00:16:46.955 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:16:46.955 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:46.955 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:46.955 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:46.955 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:46.955 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:46.955 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:46.955 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:46.955 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:46.955 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:46.955 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:46.955 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:16:46.955 11:28:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:48.856 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:48.856 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:16:48.856 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:48.857 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:48.857 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:48.857 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:48.857 11:28:48 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:48.857 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:16:48.857 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:48.857 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:16:48.857 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:16:48.857 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:16:48.857 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:16:48.857 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:16:48.857 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:16:48.857 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:48.857 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:48.857 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:48.857 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:48.857 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:48.857 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:48.857 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:48.857 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:48.857 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:48.857 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:48.857 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:48.857 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:48.857 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:48.857 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:48.857 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:48.857 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:48.857 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:48.857 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:48.857 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:48.857 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:48.857 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:48.857 11:28:49 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:48.857 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:48.857 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:48.857 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:48.857 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:48.857 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:48.857 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:48.857 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:48.857 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:48.857 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:48.857 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:48.857 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:48.857 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:48.857 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:48.857 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:48.857 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:48.857 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:48.857 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:48.857 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:48.857 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:48.857 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:48.857 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:48.857 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:48.857 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:48.857 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:48.857 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:48.857 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:48.857 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:48.857 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:48.857 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:48.857 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
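The trace above shows gather_supported_nvmf_pci_devs matching the two Intel E810 functions seen on this host (0000:0a:00.0 and 0000:0a:00.1, device 0x8086 - 0x159b) and then resolving each PCI address to its kernel net device with the pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) expansion. A minimal standalone sketch of that sysfs lookup follows; the list_pci_net_devs helper name is ours (not part of nvmf/common.sh) and the PCI addresses are the ones from this run.

  #!/usr/bin/env bash
  # Resolve PCI network functions to their kernel net device names via sysfs,
  # mirroring the pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) expansion above.
  list_pci_net_devs() {
      local pci=$1 dev
      for dev in "/sys/bus/pci/devices/$pci/net/"*; do
          [[ -e $dev ]] || continue    # glob did not match: no net device bound to this function
          echo "Found net device under $pci: ${dev##*/}"
      done
  }

  # PCI addresses observed in this run (Intel E810, 0x8086:0x159b); adjust per host.
  for pci in 0000:0a:00.0 0000:0a:00.1; do
      list_pci_net_devs "$pci"
  done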
00:16:48.857 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:48.857 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:48.857 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:48.857 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:48.857 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:48.857 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:48.857 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:16:48.857 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:48.857 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:48.857 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:48.857 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:48.857 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:48.857 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:48.857 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:48.857 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:48.857 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:48.857 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:48.857 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:48.857 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:48.857 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:48.857 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:48.857 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:48.857 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:48.857 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:48.857 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:48.857 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:48.857 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:48.857 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:48.857 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:48.857 11:28:49 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:48.857 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:48.857 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:48.857 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:48.857 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:48.857 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.298 ms 00:16:48.857 00:16:48.857 --- 10.0.0.2 ping statistics --- 00:16:48.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:48.857 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:16:48.857 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:48.857 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:48.857 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:16:48.857 00:16:48.857 --- 10.0.0.1 ping statistics --- 00:16:48.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:48.857 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:16:48.857 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:48.857 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:16:48.857 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:48.857 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:48.858 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:48.858 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:48.858 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:48.858 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:48.858 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:48.858 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:16:48.858 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:48.858 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:48.858 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:48.858 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=3802921 00:16:48.858 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:16:48.858 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 3802921 00:16:48.858 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # '[' -z 3802921 ']' 00:16:48.858 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:48.858 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:48.858 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:48.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:48.858 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:48.858 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:48.858 [2024-11-02 11:28:49.214381] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:16:48.858 [2024-11-02 11:28:49.214460] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:49.116 [2024-11-02 11:28:49.286689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:49.116 [2024-11-02 11:28:49.330668] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:49.116 [2024-11-02 11:28:49.330722] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:49.116 [2024-11-02 11:28:49.330745] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:49.116 [2024-11-02 11:28:49.330756] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:49.116 [2024-11-02 11:28:49.330765] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
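Once the namespaces and addresses are in place, nvmfappstart launches nvmf_tgt inside the target namespace and waitforlisten polls until the JSON-RPC socket answers; only then do the RPC-driven steps below run. A rough equivalent of that start-and-wait step, assuming the default /var/tmp/spdk.sock RPC socket that the trace shows (a sketch of the idea, not the autotest helpers themselves):

#!/usr/bin/env bash
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF &
nvmfpid=$!

# poll the JSON-RPC socket until the target is ready to accept commands
until "$SPDK/scripts/rpc.py" -t 1 rpc_get_methods > /dev/null 2>&1; do
    sleep 0.5
done
echo "nvmf_tgt (pid $nvmfpid) is up"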
00:16:49.116 [2024-11-02 11:28:49.331356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:49.116 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:49.116 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@866 -- # return 0 00:16:49.116 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:49.116 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:49.116 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:49.116 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:49.116 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:49.381 [2024-11-02 11:28:49.717928] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:49.381 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:16:49.381 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:16:49.381 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:49.948 Malloc1 00:16:49.948 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:50.206 Malloc2 00:16:50.206 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:50.464 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:16:50.723 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:50.981 [2024-11-02 11:28:51.286477] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:50.981 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:16:50.981 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 84049359-e0c7-4e1a-8e9a-891b09451b00 -a 10.0.0.2 -s 4420 -i 4 00:16:51.239 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:16:51.239 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:16:51.239 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:16:51.239 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:16:51.239 
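Up to this point the target-side provisioning and the first host connect reduce to a handful of JSON-RPC calls plus one nvme-cli command. A condensed sketch (not the ns_masking.sh helpers themselves); the subsystem NQN, serial, listener address and host identifiers are the ones used in this run:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# TCP transport plus two 64 MiB malloc bdevs to export as namespaces
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc1
$RPC bdev_malloc_create 64 512 -b Malloc2

# subsystem with an auto-visible namespace 1 and a TCP listener on 10.0.0.2:4420
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# host side: connect with an explicit host NQN and host ID, 4 I/O queues
nvme connect -t tcp -a 10.0.0.2 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 \
    -I 84049359-e0c7-4e1a-8e9a-891b09451b00 -i 4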
11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:16:53.138 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:16:53.138 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:16:53.138 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:16:53.138 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:16:53.138 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:16:53.138 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:16:53.138 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:53.138 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:53.138 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:53.138 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:53.138 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:16:53.138 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:53.138 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:53.396 [ 0]:0x1 00:16:53.396 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:53.396 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:53.396 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f03d14d3618f4955a88e06817a5bae90 00:16:53.396 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f03d14d3618f4955a88e06817a5bae90 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:53.396 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:16:53.654 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:16:53.654 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:53.654 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:53.654 [ 0]:0x1 00:16:53.654 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:53.654 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:53.654 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f03d14d3618f4955a88e06817a5bae90 00:16:53.654 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f03d14d3618f4955a88e06817a5bae90 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:53.654 11:28:53 
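The ns_is_visible checks traced here come down to two nvme-cli queries: list-ns to see whether the namespace ID is reported at all, and id-ns to confirm its NGUID is not the all-zero value returned when the namespace is hidden from this host. A stand-alone approximation of the helper (the real one lives in target/ns_masking.sh; this only mirrors the commands visible in the trace):

# usage: ns_is_visible /dev/nvme0 0x1
ns_is_visible() {
    local ctrl=$1 nsid=$2 nguid
    # list-ns prints the active namespace IDs; the grep here is informational
    nvme list-ns "$ctrl" | grep "$nsid"
    nguid=$(nvme id-ns "$ctrl" -n "$nsid" -o json | jq -r .nguid)
    # a namespace masked for this host identifies with an all-zero NGUID
    [[ $nguid != "00000000000000000000000000000000" ]]
}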
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:16:53.654 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:53.654 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:53.654 [ 1]:0x2 00:16:53.654 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:53.654 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:53.654 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=931a00c9d1e94ee9939280f44c9d2867 00:16:53.654 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 931a00c9d1e94ee9939280f44c9d2867 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:53.654 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:16:53.654 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:53.936 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:53.936 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:54.239 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:16:54.522 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:16:54.522 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 84049359-e0c7-4e1a-8e9a-891b09451b00 -a 10.0.0.2 -s 4420 -i 4 00:16:54.522 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:16:54.522 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:16:54.522 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:16:54.522 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n 1 ]] 00:16:54.522 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_device_counter=1 00:16:54.522 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:16:56.421 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:16:56.421 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:16:56.421 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:16:56.421 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:16:56.421 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:16:56.421 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # 
return 0 00:16:56.421 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:56.421 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:56.679 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:56.679 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:56.680 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:16:56.680 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:56.680 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:16:56.680 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:16:56.680 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:56.680 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:16:56.680 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:56.680 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:16:56.680 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:56.680 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:56.680 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:56.680 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:56.680 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:56.680 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:56.680 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:56.680 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:56.680 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:56.680 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:56.680 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:16:56.680 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:56.680 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:56.680 [ 0]:0x2 00:16:56.680 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:56.680 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:56.680 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=931a00c9d1e94ee9939280f44c9d2867 00:16:56.680 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 931a00c9d1e94ee9939280f44c9d2867 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:56.680 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:56.938 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:16:56.938 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:56.938 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:56.938 [ 0]:0x1 00:16:56.938 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:56.938 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:56.938 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f03d14d3618f4955a88e06817a5bae90 00:16:56.938 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f03d14d3618f4955a88e06817a5bae90 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:56.938 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:16:56.938 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:56.938 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:56.938 [ 1]:0x2 00:16:56.938 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:56.938 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:57.196 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=931a00c9d1e94ee9939280f44c9d2867 00:16:57.196 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 931a00c9d1e94ee9939280f44c9d2867 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:57.196 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:57.455 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:16:57.455 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:57.455 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:16:57.455 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:16:57.455 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:57.455 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:16:57.455 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:57.455 11:28:57 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:16:57.455 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:57.455 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:57.455 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:57.455 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:57.455 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:57.455 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:57.455 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:57.455 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:57.455 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:57.455 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:57.455 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:16:57.455 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:57.455 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:57.455 [ 0]:0x2 00:16:57.455 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:57.455 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:57.455 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=931a00c9d1e94ee9939280f44c9d2867 00:16:57.455 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 931a00c9d1e94ee9939280f44c9d2867 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:57.455 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:16:57.455 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:57.455 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:57.455 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:57.713 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:16:57.713 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 84049359-e0c7-4e1a-8e9a-891b09451b00 -a 10.0.0.2 -s 4420 -i 4 00:16:57.971 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:57.971 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:16:57.971 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:16:57.971 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n 2 ]] 00:16:57.971 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_device_counter=2 00:16:57.971 11:28:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:17:00.497 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:17:00.497 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:17:00.497 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:17:00.497 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=2 00:17:00.497 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:17:00.497 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:17:00.497 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:00.497 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:00.497 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:00.497 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:00.497 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:17:00.497 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:00.497 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:00.497 [ 0]:0x1 00:17:00.497 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:00.497 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:00.497 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f03d14d3618f4955a88e06817a5bae90 00:17:00.497 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f03d14d3618f4955a88e06817a5bae90 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:00.497 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:17:00.497 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:00.498 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:00.498 [ 1]:0x2 00:17:00.498 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:00.498 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:00.498 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=931a00c9d1e94ee9939280f44c9d2867 00:17:00.498 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 931a00c9d1e94ee9939280f44c9d2867 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:00.498 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:00.756 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:17:00.756 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:17:00.756 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:17:00.756 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:17:00.756 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:00.756 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:17:00.756 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:00.756 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:17:00.756 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:00.756 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:00.756 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:00.756 11:29:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:00.756 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:00.756 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:00.756 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:17:00.756 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:00.756 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:00.756 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:00.756 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:17:00.756 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:00.756 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:00.756 [ 0]:0x2 00:17:00.756 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:00.756 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:00.756 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=931a00c9d1e94ee9939280f44c9d2867 00:17:00.756 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 931a00c9d1e94ee9939280f44c9d2867 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:00.756 11:29:01 
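The steps just traced are the masking mechanism itself: a namespace added with --no-auto-visible is hidden from every host until it is explicitly granted with nvmf_ns_add_host, and nvmf_ns_remove_host hides it again without dropping the connection. Condensed into the bare RPC sequence used in this run (a sketch for orientation, not the test script):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
SUBSYS=nqn.2016-06.io.spdk:cnode1
HOST=nqn.2016-06.io.spdk:host1

# namespace 1 starts out invisible to all hosts
$RPC nvmf_subsystem_add_ns "$SUBSYS" Malloc1 -n 1 --no-auto-visible
# grant visibility to one host, verify from that host, then revoke it again
$RPC nvmf_ns_add_host "$SUBSYS" 1 "$HOST"
$RPC nvmf_ns_remove_host "$SUBSYS" 1 "$HOST"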
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:00.756 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:17:00.756 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:00.756 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:00.756 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:00.756 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:00.756 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:00.756 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:00.756 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:00.756 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:00.756 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:00.757 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:01.015 [2024-11-02 11:29:01.357098] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:17:01.015 request: 00:17:01.015 { 00:17:01.015 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:01.015 "nsid": 2, 00:17:01.015 "host": "nqn.2016-06.io.spdk:host1", 00:17:01.015 "method": "nvmf_ns_remove_host", 00:17:01.015 "req_id": 1 00:17:01.015 } 00:17:01.015 Got JSON-RPC error response 00:17:01.015 response: 00:17:01.015 { 00:17:01.015 "code": -32602, 00:17:01.015 "message": "Invalid parameters" 00:17:01.015 } 00:17:01.015 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:17:01.015 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:01.015 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:01.015 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:01.015 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:17:01.015 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:17:01.015 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:17:01.015 11:29:01 
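The NOT wrapper used throughout this file runs a command and succeeds only when that command fails, which is how the test asserts that per-host masking RPCs are rejected for namespace 2 (it was added without --no-auto-visible, hence the -32602 Invalid parameters response above). A minimal version of the idea, deliberately simpler than the real autotest_common.sh helper:

# succeed only when the wrapped command fails
NOT() {
    if "$@"; then
        return 1
    fi
    return 0
}

# expected to fail: namespace 2 is auto-visible, so host masking does not apply
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
NOT $RPC nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1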
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:17:01.015 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:01.015 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:17:01.015 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:01.015 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:17:01.015 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:01.015 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:01.015 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:01.015 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:01.273 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:01.273 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:01.273 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:17:01.273 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:01.273 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:01.273 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:01.273 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:17:01.273 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:01.273 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:01.273 [ 0]:0x2 00:17:01.273 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:01.273 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:01.273 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=931a00c9d1e94ee9939280f44c9d2867 00:17:01.273 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 931a00c9d1e94ee9939280f44c9d2867 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:01.273 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:17:01.273 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:01.273 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:01.273 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3804548 00:17:01.273 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:17:01.273 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:17:01.273 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3804548 /var/tmp/host.sock 00:17:01.273 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # '[' -z 3804548 ']' 00:17:01.273 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:17:01.273 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:01.273 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:01.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:17:01.273 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:01.273 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:01.532 [2024-11-02 11:29:01.718458] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:17:01.532 [2024-11-02 11:29:01.718554] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3804548 ] 00:17:01.532 [2024-11-02 11:29:01.785786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:01.532 [2024-11-02 11:29:01.833182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:01.790 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:01.790 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@866 -- # return 0 00:17:01.790 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:02.048 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:02.306 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid fb2f277b-0da2-4c67-a562-f1b6627b7f8c 00:17:02.306 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:02.306 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g FB2F277B0DA24C67A562F1B6627B7F8C -i 00:17:02.872 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 99295c2a-d2ff-4393-9a72-367e1626f718 00:17:02.872 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:02.872 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 99295C2AD2FF43939A72367E1626F718 -i 00:17:02.872 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:03.130 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:17:03.696 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:03.696 11:29:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:03.953 nvme0n1 00:17:03.953 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:03.953 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:04.519 nvme1n2 00:17:04.519 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:17:04.519 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:17:04.519 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:17:04.519 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:17:04.519 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:17:04.777 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:17:04.777 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:17:04.777 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:17:04.777 11:29:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:17:05.035 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ fb2f277b-0da2-4c67-a562-f1b6627b7f8c == \f\b\2\f\2\7\7\b\-\0\d\a\2\-\4\c\6\7\-\a\5\6\2\-\f\1\b\6\6\2\7\b\7\f\8\c ]] 00:17:05.035 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:17:05.035 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:17:05.035 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:17:05.293 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
99295c2a-d2ff-4393-9a72-367e1626f718 == \9\9\2\9\5\c\2\a\-\d\2\f\f\-\4\3\9\3\-\9\a\7\2\-\3\6\7\e\1\6\2\6\f\7\1\8 ]] 00:17:05.293 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:05.550 11:29:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:05.809 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid fb2f277b-0da2-4c67-a562-f1b6627b7f8c 00:17:05.809 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:05.809 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g FB2F277B0DA24C67A562F1B6627B7F8C 00:17:05.809 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:17:05.809 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g FB2F277B0DA24C67A562F1B6627B7F8C 00:17:05.809 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:05.809 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:05.809 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:05.809 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:05.809 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:05.809 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:05.809 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:05.809 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:05.809 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g FB2F277B0DA24C67A562F1B6627B7F8C 00:17:06.067 [2024-11-02 11:29:06.323565] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:17:06.067 [2024-11-02 11:29:06.323623] subsystem.c:2151:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:17:06.067 [2024-11-02 11:29:06.323641] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.067 request: 00:17:06.067 { 00:17:06.067 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:06.067 "namespace": { 00:17:06.067 "bdev_name": 
"invalid", 00:17:06.067 "nsid": 1, 00:17:06.067 "nguid": "FB2F277B0DA24C67A562F1B6627B7F8C", 00:17:06.067 "no_auto_visible": false 00:17:06.067 }, 00:17:06.067 "method": "nvmf_subsystem_add_ns", 00:17:06.067 "req_id": 1 00:17:06.067 } 00:17:06.067 Got JSON-RPC error response 00:17:06.067 response: 00:17:06.067 { 00:17:06.067 "code": -32602, 00:17:06.067 "message": "Invalid parameters" 00:17:06.067 } 00:17:06.067 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:17:06.067 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:06.067 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:06.067 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:06.067 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid fb2f277b-0da2-4c67-a562-f1b6627b7f8c 00:17:06.067 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:06.067 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g FB2F277B0DA24C67A562F1B6627B7F8C -i 00:17:06.325 11:29:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:17:08.224 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:17:08.224 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:17:08.224 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:17:08.790 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:17:08.790 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 3804548 00:17:08.790 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' -z 3804548 ']' 00:17:08.790 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # kill -0 3804548 00:17:08.790 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # uname 00:17:08.790 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:08.790 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3804548 00:17:08.790 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:17:08.790 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:17:08.790 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3804548' 00:17:08.790 killing process with pid 3804548 00:17:08.790 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # kill 3804548 00:17:08.790 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@976 -- # wait 3804548 00:17:09.048 11:29:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:09.306 11:29:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:17:09.306 11:29:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:17:09.306 11:29:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:09.306 11:29:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:17:09.306 11:29:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:09.306 11:29:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:17:09.306 11:29:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:09.306 11:29:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:09.306 rmmod nvme_tcp 00:17:09.306 rmmod nvme_fabrics 00:17:09.306 rmmod nvme_keyring 00:17:09.306 11:29:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:09.306 11:29:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:17:09.306 11:29:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:17:09.307 11:29:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 3802921 ']' 00:17:09.307 11:29:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 3802921 00:17:09.307 11:29:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' -z 3802921 ']' 00:17:09.307 11:29:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # kill -0 3802921 00:17:09.307 11:29:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # uname 00:17:09.307 11:29:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:09.307 11:29:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3802921 00:17:09.307 11:29:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:09.307 11:29:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:09.307 11:29:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3802921' 00:17:09.307 killing process with pid 3802921 00:17:09.307 11:29:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # kill 3802921 00:17:09.307 11:29:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@976 -- # wait 3802921 00:17:09.565 11:29:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:09.565 11:29:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:09.565 11:29:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:09.565 11:29:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:17:09.565 11:29:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:17:09.565 11:29:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:17:09.565 11:29:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:17:09.823 11:29:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:09.823 11:29:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:09.823 11:29:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:09.823 11:29:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:09.823 11:29:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:11.728 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:11.728 00:17:11.728 real 0m25.173s 00:17:11.728 user 0m36.786s 00:17:11.728 sys 0m4.499s 00:17:11.728 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:11.728 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:11.728 ************************************ 00:17:11.728 END TEST nvmf_ns_masking 00:17:11.728 ************************************ 00:17:11.728 11:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:17:11.728 11:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:17:11.728 11:29:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:17:11.728 11:29:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:11.728 11:29:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:11.728 ************************************ 00:17:11.728 START TEST nvmf_nvme_cli 00:17:11.728 ************************************ 00:17:11.728 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:17:11.728 * Looking for test storage... 
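The ns_masking run finishes with nvmftestfini, which undoes everything the prologue created before the next test (nvmf_nvme_cli) starts; the real/user/sys numbers are the usual bash time summary for the whole test. Roughly, the teardown traced above amounts to the following (the netns removal line is what _remove_spdk_ns is assumed to boil down to, not a literal command from this log):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

# unload the host-side NVMe/TCP modules
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics

kill "$nvmfpid" && wait "$nvmfpid"        # nvmf_tgt started by nvmfappstart

# drop only the SPDK_NVMF-tagged iptables rules the prologue inserted
iptables-save | grep -v SPDK_NVMF | iptables-restore

ip netns delete cvl_0_0_ns_spdk           # assumed equivalent of _remove_spdk_ns
ip -4 addr flush cvl_0_1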
00:17:11.728 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:11.728 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:11.728 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lcov --version 00:17:11.728 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:11.987 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:11.987 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:11.987 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:11.987 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:11.988 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:17:11.988 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:17:11.988 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:17:11.988 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:17:11.988 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:17:11.988 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:17:11.988 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:17:11.988 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:11.988 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:17:11.988 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:17:11.988 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:11.988 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:11.988 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:17:11.988 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:17:11.988 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:11.988 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:17:11.988 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:17:11.988 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:17:11.988 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:17:11.988 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:11.988 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:17:11.988 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:17:11.988 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:11.988 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:11.988 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:17:11.988 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:11.988 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:11.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:11.988 --rc genhtml_branch_coverage=1 00:17:11.988 --rc genhtml_function_coverage=1 00:17:11.988 --rc genhtml_legend=1 00:17:11.988 --rc geninfo_all_blocks=1 00:17:11.988 --rc geninfo_unexecuted_blocks=1 00:17:11.988 00:17:11.988 ' 00:17:11.988 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:11.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:11.988 --rc genhtml_branch_coverage=1 00:17:11.988 --rc genhtml_function_coverage=1 00:17:11.988 --rc genhtml_legend=1 00:17:11.988 --rc geninfo_all_blocks=1 00:17:11.988 --rc geninfo_unexecuted_blocks=1 00:17:11.988 00:17:11.988 ' 00:17:11.988 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:11.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:11.988 --rc genhtml_branch_coverage=1 00:17:11.988 --rc genhtml_function_coverage=1 00:17:11.988 --rc genhtml_legend=1 00:17:11.988 --rc geninfo_all_blocks=1 00:17:11.988 --rc geninfo_unexecuted_blocks=1 00:17:11.988 00:17:11.988 ' 00:17:11.988 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:11.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:11.988 --rc genhtml_branch_coverage=1 00:17:11.988 --rc genhtml_function_coverage=1 00:17:11.988 --rc genhtml_legend=1 00:17:11.988 --rc geninfo_all_blocks=1 00:17:11.988 --rc geninfo_unexecuted_blocks=1 00:17:11.988 00:17:11.988 ' 00:17:11.988 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:11.988 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
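The lcov probe above feeds the cmp_versions helper from scripts/common.sh, which splits the dotted version strings into arrays and compares them field by field to decide whether the installed lcov predates 2.x (and therefore still needs the --rc lcov_*_coverage=1 options). A minimal sketch of that comparison, assuming dot-only separators and numeric fields; the function name version_lt is illustrative, not the harness helper:

    # Succeed (return 0) when dotted version $1 is strictly older than $2.
    # Illustrative sketch only; the real comparison lives in scripts/common.sh.
    version_lt() {
        local IFS=.
        local -a a=($1) b=($2)
        local i x y
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            x=${a[i]:-0} y=${b[i]:-0}
            ((x < y)) && return 0
            ((x > y)) && return 1
        done
        return 1   # equal versions are not "less than"
    }

    # e.g. keep the extra coverage flags only for lcov releases before 2.x
    version_lt "$(lcov --version | awk '{print $NF}')" 2 &&
        LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'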
00:17:11.988 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:11.988 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:11.988 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:11.988 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:11.988 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:11.988 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:11.988 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:11.988 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:11.988 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:11.988 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:11.988 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:11.988 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:11.988 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:11.988 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:11.988 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:11.988 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:11.988 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:11.988 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:17:11.988 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:11.988 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:11.988 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:11.988 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.988 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.988 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.988 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:17:11.988 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.988 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:17:11.988 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:11.988 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:11.988 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:11.988 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:11.988 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:11.988 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:11.988 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:11.988 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:11.988 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:11.988 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:11.988 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:11.988 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:11.988 11:29:12 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:17:11.988 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:17:11.988 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:11.988 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:11.988 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:11.988 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:11.988 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:11.988 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:11.988 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:11.988 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:11.989 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:11.989 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:11.989 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:17:11.989 11:29:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:13.894 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:13.894 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:17:13.894 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:13.894 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:13.894 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:13.894 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:13.894 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:13.894 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:17:13.894 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:13.894 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:17:13.894 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:17:13.894 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:17:13.894 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:17:13.894 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:17:13.894 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:17:13.894 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:13.894 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:13.894 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:13.894 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:13.894 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:13.894 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:13.894 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:13.894 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:13.894 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:13.894 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:13.894 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:13.894 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:13.894 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:13.894 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:13.894 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:13.894 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:13.895 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:13.895 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:13.895 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:13.895 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:13.895 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:13.895 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:13.895 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:13.895 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:13.895 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:13.895 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:13.895 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:13.895 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:13.895 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:13.895 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:13.895 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:13.895 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:13.895 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:13.895 
11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:13.895 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:13.895 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:13.895 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:13.895 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:13.895 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:13.895 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:13.895 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:13.895 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:13.895 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:13.895 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:13.895 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:13.895 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:13.895 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:13.895 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:13.895 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:13.895 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:13.895 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:13.895 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:13.895 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:13.895 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:13.895 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:13.895 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:13.895 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:13.895 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:13.895 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:17:13.895 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:13.895 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:13.895 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:13.895 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:13.895 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:13.895 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:13.895 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:13.895 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:13.895 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:13.895 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:13.895 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:13.895 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:13.895 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:13.895 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:13.895 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:13.895 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:13.895 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:13.895 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:14.154 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:14.154 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:14.154 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:14.154 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:14.154 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:14.154 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:14.154 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:14.154 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:14.154 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:14.154 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:17:14.154 00:17:14.154 --- 10.0.0.2 ping statistics --- 00:17:14.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:14.154 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:17:14.154 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:14.154 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:14.154 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:17:14.154 00:17:14.154 --- 10.0.0.1 ping statistics --- 00:17:14.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:14.154 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:17:14.154 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:14.154 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:17:14.154 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:14.154 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:14.154 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:14.154 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:14.154 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:14.154 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:14.154 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:14.155 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:17:14.155 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:14.155 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:14.155 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:14.155 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=3807462 00:17:14.155 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:14.155 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 3807462 00:17:14.155 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # '[' -z 3807462 ']' 00:17:14.155 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:14.155 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:14.155 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:14.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:14.155 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:14.155 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:14.155 [2024-11-02 11:29:14.454205] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 
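The two successful pings close out the interface bring-up that nvmf_tcp_init performs on the e810 ports discovered earlier: the first port (cvl_0_0) is moved into a private network namespace to act as the target, the second (cvl_0_1) stays in the root namespace as the initiator, and an iptables rule opens the default NVMe/TCP port. Condensed from the commands traced above, a sketch of the happy path only (the real helper also flushes stale addresses first and handles single-NIC setups):

    # Give the target interface its own namespace; the initiator stays in the root ns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # Address both ends of the 10.0.0.0/24 test subnet.
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

    # Bring the links (and the namespace loopback) up.
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Open TCP/4420 on the initiator-side interface and verify reachability both ways.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1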
00:17:14.155 [2024-11-02 11:29:14.454323] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:14.155 [2024-11-02 11:29:14.535644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:14.413 [2024-11-02 11:29:14.589268] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:14.413 [2024-11-02 11:29:14.589332] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:14.413 [2024-11-02 11:29:14.589357] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:14.413 [2024-11-02 11:29:14.589372] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:14.413 [2024-11-02 11:29:14.589384] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:14.413 [2024-11-02 11:29:14.591129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:14.413 [2024-11-02 11:29:14.591187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:14.413 [2024-11-02 11:29:14.591249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:14.413 [2024-11-02 11:29:14.591250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:14.413 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:14.413 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@866 -- # return 0 00:17:14.413 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:14.413 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:14.413 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:14.413 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:14.413 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:14.413 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.413 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:14.413 [2024-11-02 11:29:14.736859] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:14.413 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.413 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:14.413 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.413 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:14.413 Malloc0 00:17:14.413 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.413 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:14.413 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 
00:17:14.413 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:14.413 Malloc1 00:17:14.671 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.671 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:17:14.671 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.671 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:14.671 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.671 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:14.671 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.671 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:14.671 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.671 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:14.671 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.671 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:14.671 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.671 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:14.671 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.671 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:14.671 [2024-11-02 11:29:14.843169] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:14.671 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.671 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:14.671 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.671 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:14.671 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.671 11:29:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:17:14.671 00:17:14.671 Discovery Log Number of Records 2, Generation counter 2 00:17:14.671 =====Discovery Log Entry 0====== 00:17:14.671 trtype: tcp 00:17:14.671 adrfam: ipv4 00:17:14.671 subtype: current discovery subsystem 00:17:14.671 treq: not required 00:17:14.671 portid: 0 00:17:14.671 trsvcid: 4420 00:17:14.671 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:17:14.672 traddr: 10.0.0.2 00:17:14.672 eflags: explicit discovery connections, duplicate discovery information 00:17:14.672 sectype: none 00:17:14.672 =====Discovery Log Entry 1====== 00:17:14.672 trtype: tcp 00:17:14.672 adrfam: ipv4 00:17:14.672 subtype: nvme subsystem 00:17:14.672 treq: not required 00:17:14.672 portid: 0 00:17:14.672 trsvcid: 4420 00:17:14.672 subnqn: nqn.2016-06.io.spdk:cnode1 00:17:14.672 traddr: 10.0.0.2 00:17:14.672 eflags: none 00:17:14.672 sectype: none 00:17:14.672 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:17:14.672 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:17:14.672 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:17:14.672 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:14.672 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:17:14.672 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:17:14.672 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:14.672 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:17:14.672 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:14.672 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:17:14.672 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:15.605 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:15.605 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # local i=0 00:17:15.605 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:17:15.605 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # [[ -n 2 ]] 00:17:15.605 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # nvme_device_counter=2 00:17:15.605 11:29:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # sleep 2 00:17:17.503 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:17:17.503 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:17:17.503 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:17:17.503 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # nvme_devices=2 00:17:17.503 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:17:17.503 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # return 0 00:17:17.503 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:17:17.503 11:29:17 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:17:17.503 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:17.503 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:17:17.503 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:17:17.503 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:17.503 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:17:17.503 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:17.503 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:17.503 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:17:17.503 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:17.503 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:17.503 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:17:17.503 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:17.503 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:17:17.503 /dev/nvme0n2 ]] 00:17:17.503 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:17:17.503 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:17:17.503 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:17:17.503 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:17.503 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:17:17.503 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:17:17.503 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:17.503 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:17:17.503 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:17.503 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:17.503 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:17:17.503 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:17.503 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:17.503 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:17:17.503 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:17.503 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:17:17.503 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:17.503 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:17.503 11:29:17 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:17.503 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1221 -- # local i=0 00:17:17.503 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:17:17.503 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:17.503 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:17:17.503 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:17.503 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1233 -- # return 0 00:17:17.503 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:17:17.503 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:17.503 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.503 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:17.503 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.503 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:17:17.503 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:17:17.503 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:17.503 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:17:17.503 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:17.503 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:17:17.503 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:17.503 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:17.503 rmmod nvme_tcp 00:17:17.761 rmmod nvme_fabrics 00:17:17.761 rmmod nvme_keyring 00:17:17.761 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:17.761 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:17:17.761 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:17:17.761 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 3807462 ']' 00:17:17.761 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 3807462 00:17:17.761 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # '[' -z 3807462 ']' 00:17:17.761 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # kill -0 3807462 00:17:17.761 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # uname 00:17:17.761 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:17.761 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 
3807462 00:17:17.761 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:17.761 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:17.761 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3807462' 00:17:17.761 killing process with pid 3807462 00:17:17.761 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@971 -- # kill 3807462 00:17:17.761 11:29:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@976 -- # wait 3807462 00:17:18.021 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:18.021 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:18.021 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:18.021 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:17:18.021 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:17:18.021 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:18.021 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:17:18.021 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:18.021 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:18.021 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:18.021 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:18.021 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:19.929 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:19.929 00:17:19.929 real 0m8.220s 00:17:19.929 user 0m15.120s 00:17:19.929 sys 0m2.258s 00:17:19.929 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:19.929 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:19.929 ************************************ 00:17:19.929 END TEST nvmf_nvme_cli 00:17:19.929 ************************************ 00:17:19.929 11:29:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:17:19.929 11:29:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:17:19.929 11:29:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:17:19.929 11:29:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:19.929 11:29:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:20.188 ************************************ 00:17:20.188 START TEST nvmf_vfio_user 00:17:20.188 ************************************ 00:17:20.188 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:17:20.188 * Looking for test storage... 00:17:20.188 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:20.188 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:20.188 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lcov --version 00:17:20.188 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:20.188 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:20.188 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:20.188 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:20.188 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:20.188 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:17:20.188 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:17:20.188 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:17:20.188 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:17:20.188 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:17:20.188 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:17:20.188 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:17:20.188 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:20.188 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:17:20.188 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:17:20.188 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:20.188 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:20.188 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:17:20.188 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:17:20.188 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:20.188 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:17:20.188 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:17:20.188 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:17:20.188 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:17:20.189 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:20.189 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:17:20.189 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:17:20.189 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:20.189 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:20.189 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:17:20.189 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:20.189 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:20.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:20.189 --rc genhtml_branch_coverage=1 00:17:20.189 --rc genhtml_function_coverage=1 00:17:20.189 --rc genhtml_legend=1 00:17:20.189 --rc geninfo_all_blocks=1 00:17:20.189 --rc geninfo_unexecuted_blocks=1 00:17:20.189 00:17:20.189 ' 00:17:20.189 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:20.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:20.189 --rc genhtml_branch_coverage=1 00:17:20.189 --rc genhtml_function_coverage=1 00:17:20.189 --rc genhtml_legend=1 00:17:20.189 --rc geninfo_all_blocks=1 00:17:20.189 --rc geninfo_unexecuted_blocks=1 00:17:20.189 00:17:20.189 ' 00:17:20.189 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:20.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:20.189 --rc genhtml_branch_coverage=1 00:17:20.189 --rc genhtml_function_coverage=1 00:17:20.189 --rc genhtml_legend=1 00:17:20.189 --rc geninfo_all_blocks=1 00:17:20.189 --rc geninfo_unexecuted_blocks=1 00:17:20.189 00:17:20.189 ' 00:17:20.189 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:20.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:20.189 --rc genhtml_branch_coverage=1 00:17:20.189 --rc genhtml_function_coverage=1 00:17:20.189 --rc genhtml_legend=1 00:17:20.189 --rc geninfo_all_blocks=1 00:17:20.189 --rc geninfo_unexecuted_blocks=1 00:17:20.189 00:17:20.189 ' 00:17:20.189 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:20.189 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:17:20.189 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:20.189 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:20.189 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:20.189 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:20.189 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:20.189 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:20.189 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:20.189 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:20.189 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:20.189 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:20.189 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:20.189 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:20.189 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:20.189 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:20.189 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:20.189 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:20.189 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:20.189 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:17:20.189 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:20.189 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:20.189 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:20.189 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.189 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.189 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.189 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:17:20.189 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.189 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:17:20.189 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:20.189 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:20.189 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:20.189 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:20.189 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:20.189 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:20.189 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:20.189 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:20.189 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:20.189 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:20.189 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:20.189 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
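The "[: : integer expression expected" message above is benign: line 33 of nvmf/common.sh ends up evaluating '[' '' -eq 1 ']', and the test builtin's -eq operator requires integer operands, so the comparison against an empty expansion fails with that diagnostic and the script simply falls through. A minimal sketch of the failure and of one possible guard; the variable name "flag" is illustrative and not taken from common.sh:

  flag=""
  if [ "$flag" -eq 1 ]; then      # prints "[: : integer expression expected" on stderr
      echo "flag set"
  fi
  if [ "${flag:-0}" -eq 1 ]; then # hypothetical guard: empty value defaults to 0, no error
      echo "flag set"
  fi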
00:17:20.189 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:17:20.189 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:20.189 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:17:20.189 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:17:20.189 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:17:20.189 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:17:20.189 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:17:20.189 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:17:20.189 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3808391 00:17:20.189 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:17:20.189 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3808391' 00:17:20.189 Process pid: 3808391 00:17:20.189 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:20.189 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3808391 00:17:20.189 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # '[' -z 3808391 ']' 00:17:20.189 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:20.189 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:20.189 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:20.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:20.189 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:20.189 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:20.189 [2024-11-02 11:29:20.550773] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:17:20.189 [2024-11-02 11:29:20.550861] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:20.448 [2024-11-02 11:29:20.651383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:20.448 [2024-11-02 11:29:20.706651] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:20.448 [2024-11-02 11:29:20.706723] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
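The target process is started with the arguments visible in the trace above: -i 0 (shared-memory id), -e 0xFFFF (the tracepoint group mask the app notices report), -m '[0,1,2,3]' (reactors on cores 0-3). A stand-alone sketch of that launch, assuming an SPDK build tree at $SPDK_DIR and using the spdk_get_version RPC as a readiness probe in place of the harness's waitforlisten helper:

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # illustrative; any SPDK build tree
  rpc_py=$SPDK_DIR/scripts/rpc.py

  $SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
  nvmfpid=$!
  trap 'kill $nvmfpid' EXIT

  # Poll the default RPC socket (/var/tmp/spdk.sock) until the target answers.
  until $rpc_py spdk_get_version >/dev/null 2>&1; do
      sleep 0.2
  done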
00:17:20.448 [2024-11-02 11:29:20.706763] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:20.448 [2024-11-02 11:29:20.706786] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:20.448 [2024-11-02 11:29:20.706819] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:20.448 [2024-11-02 11:29:20.708903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:20.448 [2024-11-02 11:29:20.708966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:20.448 [2024-11-02 11:29:20.709035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:20.448 [2024-11-02 11:29:20.709042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:20.705 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:20.705 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@866 -- # return 0 00:17:20.705 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:17:21.638 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:17:21.895 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:17:21.895 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:17:21.895 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:21.895 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:17:21.895 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:22.461 Malloc1 00:17:22.461 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:17:22.719 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:17:22.976 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:17:23.233 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:23.233 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:17:23.233 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:23.491 Malloc2 00:17:23.491 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
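The namespace and listener for the second device are added just below; condensed, the per-device setup the trace walks through is a short RPC sequence. A sketch reusing the $rpc_py path shown above, with the target already running:

  $rpc_py nvmf_create_transport -t VFIOUSER

  for i in 1 2; do
      mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
      $rpc_py bdev_malloc_create 64 512 -b Malloc$i          # 64 MiB malloc bdev, 512-byte blocks
      $rpc_py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
      $rpc_py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
      $rpc_py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
          -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
  done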
00:17:23.748 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:17:24.006 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:17:24.262 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:17:24.262 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:17:24.262 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:24.262 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:17:24.262 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:17:24.263 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:17:24.263 [2024-11-02 11:29:24.610264] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:17:24.263 [2024-11-02 11:29:24.610303] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3808824 ] 00:17:24.263 [2024-11-02 11:29:24.659292] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:17:24.522 [2024-11-02 11:29:24.668679] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:24.522 [2024-11-02 11:29:24.668709] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f692f3f2000 00:17:24.522 [2024-11-02 11:29:24.669644] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:24.522 [2024-11-02 11:29:24.670639] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:24.522 [2024-11-02 11:29:24.671657] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:24.522 [2024-11-02 11:29:24.672649] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:24.522 [2024-11-02 11:29:24.673652] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:24.522 [2024-11-02 11:29:24.674661] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:24.522 [2024-11-02 11:29:24.675665] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:17:24.522 [2024-11-02 11:29:24.676667] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:24.522 [2024-11-02 11:29:24.677676] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:24.522 [2024-11-02 11:29:24.677696] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f692e0ea000 00:17:24.522 [2024-11-02 11:29:24.678830] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:24.522 [2024-11-02 11:29:24.694446] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:17:24.522 [2024-11-02 11:29:24.694484] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:17:24.522 [2024-11-02 11:29:24.696776] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:17:24.522 [2024-11-02 11:29:24.696825] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:17:24.522 [2024-11-02 11:29:24.696912] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:17:24.522 [2024-11-02 11:29:24.696940] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:17:24.522 [2024-11-02 11:29:24.696951] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:17:24.522 [2024-11-02 11:29:24.697771] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:17:24.522 [2024-11-02 11:29:24.697789] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:17:24.522 [2024-11-02 11:29:24.697801] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:17:24.522 [2024-11-02 11:29:24.698780] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:17:24.522 [2024-11-02 11:29:24.698799] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:17:24.522 [2024-11-02 11:29:24.698813] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:17:24.522 [2024-11-02 11:29:24.699782] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:17:24.522 [2024-11-02 11:29:24.699800] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:24.522 [2024-11-02 11:29:24.700789] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
00:17:24.522 [2024-11-02 11:29:24.700807] nvme_ctrlr.c:3870:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:17:24.522 [2024-11-02 11:29:24.700815] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:17:24.522 [2024-11-02 11:29:24.700826] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:24.522 [2024-11-02 11:29:24.700935] nvme_ctrlr.c:4068:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:17:24.522 [2024-11-02 11:29:24.700947] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:24.522 [2024-11-02 11:29:24.700955] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:17:24.522 [2024-11-02 11:29:24.701799] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:17:24.522 [2024-11-02 11:29:24.702800] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:17:24.522 [2024-11-02 11:29:24.703802] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:17:24.522 [2024-11-02 11:29:24.704799] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:24.522 [2024-11-02 11:29:24.704934] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:24.522 [2024-11-02 11:29:24.705816] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:17:24.522 [2024-11-02 11:29:24.705833] nvme_ctrlr.c:3905:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:24.522 [2024-11-02 11:29:24.705841] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:17:24.522 [2024-11-02 11:29:24.705864] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:17:24.523 [2024-11-02 11:29:24.705879] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:17:24.523 [2024-11-02 11:29:24.705901] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:24.523 [2024-11-02 11:29:24.705910] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:24.523 [2024-11-02 11:29:24.705916] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:24.523 [2024-11-02 11:29:24.705933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:17:24.523 [2024-11-02 11:29:24.705989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:17:24.523 [2024-11-02 11:29:24.706005] nvme_ctrlr.c:2054:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:17:24.523 [2024-11-02 11:29:24.706013] nvme_ctrlr.c:2058:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:17:24.523 [2024-11-02 11:29:24.706019] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:17:24.523 [2024-11-02 11:29:24.706027] nvme_ctrlr.c:2072:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:17:24.523 [2024-11-02 11:29:24.706035] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:17:24.523 [2024-11-02 11:29:24.706042] nvme_ctrlr.c:2100:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:17:24.523 [2024-11-02 11:29:24.706049] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:17:24.523 [2024-11-02 11:29:24.706060] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:17:24.523 [2024-11-02 11:29:24.706078] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:17:24.523 [2024-11-02 11:29:24.706093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:17:24.523 [2024-11-02 11:29:24.706112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:24.523 [2024-11-02 11:29:24.706125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:24.523 [2024-11-02 11:29:24.706136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:24.523 [2024-11-02 11:29:24.706147] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:24.523 [2024-11-02 11:29:24.706155] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:17:24.523 [2024-11-02 11:29:24.706166] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:24.523 [2024-11-02 11:29:24.706178] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:17:24.523 [2024-11-02 11:29:24.706189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:17:24.523 [2024-11-02 11:29:24.706203] nvme_ctrlr.c:3011:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:17:24.523 
[2024-11-02 11:29:24.706212] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:24.523 [2024-11-02 11:29:24.706222] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:17:24.523 [2024-11-02 11:29:24.706231] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:17:24.523 [2024-11-02 11:29:24.706266] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:24.523 [2024-11-02 11:29:24.706286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:17:24.523 [2024-11-02 11:29:24.706353] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:17:24.523 [2024-11-02 11:29:24.706369] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:17:24.523 [2024-11-02 11:29:24.706382] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:17:24.523 [2024-11-02 11:29:24.706391] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:17:24.523 [2024-11-02 11:29:24.706397] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:24.523 [2024-11-02 11:29:24.706406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:17:24.523 [2024-11-02 11:29:24.706423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:17:24.523 [2024-11-02 11:29:24.706445] nvme_ctrlr.c:4699:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:17:24.523 [2024-11-02 11:29:24.706461] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:17:24.523 [2024-11-02 11:29:24.706479] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:17:24.523 [2024-11-02 11:29:24.706491] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:24.523 [2024-11-02 11:29:24.706499] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:24.523 [2024-11-02 11:29:24.706505] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:24.523 [2024-11-02 11:29:24.706514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:24.523 [2024-11-02 11:29:24.706536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:17:24.523 [2024-11-02 11:29:24.706558] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:17:24.523 [2024-11-02 11:29:24.706573] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:24.523 [2024-11-02 11:29:24.706600] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:24.523 [2024-11-02 11:29:24.706608] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:24.523 [2024-11-02 11:29:24.706614] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:24.523 [2024-11-02 11:29:24.706623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:24.523 [2024-11-02 11:29:24.706634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:17:24.523 [2024-11-02 11:29:24.706648] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:24.523 [2024-11-02 11:29:24.706658] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:17:24.523 [2024-11-02 11:29:24.706672] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:17:24.523 [2024-11-02 11:29:24.706682] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:17:24.523 [2024-11-02 11:29:24.706690] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:24.523 [2024-11-02 11:29:24.706698] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:17:24.523 [2024-11-02 11:29:24.706706] nvme_ctrlr.c:3111:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:17:24.523 [2024-11-02 11:29:24.706713] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:17:24.523 [2024-11-02 11:29:24.706721] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:17:24.523 [2024-11-02 11:29:24.706744] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:17:24.523 [2024-11-02 11:29:24.706762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:17:24.523 [2024-11-02 11:29:24.706781] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:17:24.523 [2024-11-02 11:29:24.706795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:17:24.523 [2024-11-02 11:29:24.706812] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:17:24.523 [2024-11-02 11:29:24.706823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:17:24.523 [2024-11-02 11:29:24.706839] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:24.523 [2024-11-02 11:29:24.706850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:17:24.523 [2024-11-02 11:29:24.706870] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:17:24.523 [2024-11-02 11:29:24.706879] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:17:24.523 [2024-11-02 11:29:24.706885] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:17:24.523 [2024-11-02 11:29:24.706891] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:17:24.523 [2024-11-02 11:29:24.706897] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:17:24.523 [2024-11-02 11:29:24.706905] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:17:24.523 [2024-11-02 11:29:24.706916] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:17:24.523 [2024-11-02 11:29:24.706924] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:17:24.523 [2024-11-02 11:29:24.706929] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:24.523 [2024-11-02 11:29:24.706938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:17:24.523 [2024-11-02 11:29:24.706948] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:17:24.523 [2024-11-02 11:29:24.706955] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:24.523 [2024-11-02 11:29:24.706961] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:24.524 [2024-11-02 11:29:24.706969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:24.524 [2024-11-02 11:29:24.706984] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:17:24.524 [2024-11-02 11:29:24.706993] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:17:24.524 [2024-11-02 11:29:24.706999] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:24.524 [2024-11-02 11:29:24.707007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:17:24.524 [2024-11-02 11:29:24.707018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:17:24.524 [2024-11-02 11:29:24.707039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0 00:17:24.524 [2024-11-02 11:29:24.707056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:17:24.524 [2024-11-02 11:29:24.707068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:17:24.524 ===================================================== 00:17:24.524 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:24.524 ===================================================== 00:17:24.524 Controller Capabilities/Features 00:17:24.524 ================================ 00:17:24.524 Vendor ID: 4e58 00:17:24.524 Subsystem Vendor ID: 4e58 00:17:24.524 Serial Number: SPDK1 00:17:24.524 Model Number: SPDK bdev Controller 00:17:24.524 Firmware Version: 25.01 00:17:24.524 Recommended Arb Burst: 6 00:17:24.524 IEEE OUI Identifier: 8d 6b 50 00:17:24.524 Multi-path I/O 00:17:24.524 May have multiple subsystem ports: Yes 00:17:24.524 May have multiple controllers: Yes 00:17:24.524 Associated with SR-IOV VF: No 00:17:24.524 Max Data Transfer Size: 131072 00:17:24.524 Max Number of Namespaces: 32 00:17:24.524 Max Number of I/O Queues: 127 00:17:24.524 NVMe Specification Version (VS): 1.3 00:17:24.524 NVMe Specification Version (Identify): 1.3 00:17:24.524 Maximum Queue Entries: 256 00:17:24.524 Contiguous Queues Required: Yes 00:17:24.524 Arbitration Mechanisms Supported 00:17:24.524 Weighted Round Robin: Not Supported 00:17:24.524 Vendor Specific: Not Supported 00:17:24.524 Reset Timeout: 15000 ms 00:17:24.524 Doorbell Stride: 4 bytes 00:17:24.524 NVM Subsystem Reset: Not Supported 00:17:24.524 Command Sets Supported 00:17:24.524 NVM Command Set: Supported 00:17:24.524 Boot Partition: Not Supported 00:17:24.524 Memory Page Size Minimum: 4096 bytes 00:17:24.524 Memory Page Size Maximum: 4096 bytes 00:17:24.524 Persistent Memory Region: Not Supported 00:17:24.524 Optional Asynchronous Events Supported 00:17:24.524 Namespace Attribute Notices: Supported 00:17:24.524 Firmware Activation Notices: Not Supported 00:17:24.524 ANA Change Notices: Not Supported 00:17:24.524 PLE Aggregate Log Change Notices: Not Supported 00:17:24.524 LBA Status Info Alert Notices: Not Supported 00:17:24.524 EGE Aggregate Log Change Notices: Not Supported 00:17:24.524 Normal NVM Subsystem Shutdown event: Not Supported 00:17:24.524 Zone Descriptor Change Notices: Not Supported 00:17:24.524 Discovery Log Change Notices: Not Supported 00:17:24.524 Controller Attributes 00:17:24.524 128-bit Host Identifier: Supported 00:17:24.524 Non-Operational Permissive Mode: Not Supported 00:17:24.524 NVM Sets: Not Supported 00:17:24.524 Read Recovery Levels: Not Supported 00:17:24.524 Endurance Groups: Not Supported 00:17:24.524 Predictable Latency Mode: Not Supported 00:17:24.524 Traffic Based Keep ALive: Not Supported 00:17:24.524 Namespace Granularity: Not Supported 00:17:24.524 SQ Associations: Not Supported 00:17:24.524 UUID List: Not Supported 00:17:24.524 Multi-Domain Subsystem: Not Supported 00:17:24.524 Fixed Capacity Management: Not Supported 00:17:24.524 Variable Capacity Management: Not Supported 00:17:24.524 Delete Endurance Group: Not Supported 00:17:24.524 Delete NVM Set: Not Supported 00:17:24.524 Extended LBA Formats Supported: Not Supported 00:17:24.524 Flexible Data Placement Supported: Not Supported 00:17:24.524 00:17:24.524 Controller Memory Buffer Support 00:17:24.524 ================================ 00:17:24.524 
Supported: No 00:17:24.524 00:17:24.524 Persistent Memory Region Support 00:17:24.524 ================================ 00:17:24.524 Supported: No 00:17:24.524 00:17:24.524 Admin Command Set Attributes 00:17:24.524 ============================ 00:17:24.524 Security Send/Receive: Not Supported 00:17:24.524 Format NVM: Not Supported 00:17:24.524 Firmware Activate/Download: Not Supported 00:17:24.524 Namespace Management: Not Supported 00:17:24.524 Device Self-Test: Not Supported 00:17:24.524 Directives: Not Supported 00:17:24.524 NVMe-MI: Not Supported 00:17:24.524 Virtualization Management: Not Supported 00:17:24.524 Doorbell Buffer Config: Not Supported 00:17:24.524 Get LBA Status Capability: Not Supported 00:17:24.524 Command & Feature Lockdown Capability: Not Supported 00:17:24.524 Abort Command Limit: 4 00:17:24.524 Async Event Request Limit: 4 00:17:24.524 Number of Firmware Slots: N/A 00:17:24.524 Firmware Slot 1 Read-Only: N/A 00:17:24.524 Firmware Activation Without Reset: N/A 00:17:24.524 Multiple Update Detection Support: N/A 00:17:24.524 Firmware Update Granularity: No Information Provided 00:17:24.524 Per-Namespace SMART Log: No 00:17:24.524 Asymmetric Namespace Access Log Page: Not Supported 00:17:24.524 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:17:24.524 Command Effects Log Page: Supported 00:17:24.524 Get Log Page Extended Data: Supported 00:17:24.524 Telemetry Log Pages: Not Supported 00:17:24.524 Persistent Event Log Pages: Not Supported 00:17:24.524 Supported Log Pages Log Page: May Support 00:17:24.524 Commands Supported & Effects Log Page: Not Supported 00:17:24.524 Feature Identifiers & Effects Log Page:May Support 00:17:24.524 NVMe-MI Commands & Effects Log Page: May Support 00:17:24.524 Data Area 4 for Telemetry Log: Not Supported 00:17:24.524 Error Log Page Entries Supported: 128 00:17:24.524 Keep Alive: Supported 00:17:24.524 Keep Alive Granularity: 10000 ms 00:17:24.524 00:17:24.524 NVM Command Set Attributes 00:17:24.524 ========================== 00:17:24.524 Submission Queue Entry Size 00:17:24.524 Max: 64 00:17:24.524 Min: 64 00:17:24.524 Completion Queue Entry Size 00:17:24.524 Max: 16 00:17:24.524 Min: 16 00:17:24.524 Number of Namespaces: 32 00:17:24.524 Compare Command: Supported 00:17:24.524 Write Uncorrectable Command: Not Supported 00:17:24.524 Dataset Management Command: Supported 00:17:24.524 Write Zeroes Command: Supported 00:17:24.524 Set Features Save Field: Not Supported 00:17:24.524 Reservations: Not Supported 00:17:24.524 Timestamp: Not Supported 00:17:24.524 Copy: Supported 00:17:24.524 Volatile Write Cache: Present 00:17:24.524 Atomic Write Unit (Normal): 1 00:17:24.524 Atomic Write Unit (PFail): 1 00:17:24.524 Atomic Compare & Write Unit: 1 00:17:24.524 Fused Compare & Write: Supported 00:17:24.524 Scatter-Gather List 00:17:24.524 SGL Command Set: Supported (Dword aligned) 00:17:24.524 SGL Keyed: Not Supported 00:17:24.524 SGL Bit Bucket Descriptor: Not Supported 00:17:24.524 SGL Metadata Pointer: Not Supported 00:17:24.524 Oversized SGL: Not Supported 00:17:24.524 SGL Metadata Address: Not Supported 00:17:24.524 SGL Offset: Not Supported 00:17:24.524 Transport SGL Data Block: Not Supported 00:17:24.524 Replay Protected Memory Block: Not Supported 00:17:24.524 00:17:24.524 Firmware Slot Information 00:17:24.524 ========================= 00:17:24.524 Active slot: 1 00:17:24.524 Slot 1 Firmware Revision: 25.01 00:17:24.524 00:17:24.524 00:17:24.524 Commands Supported and Effects 00:17:24.524 ============================== 00:17:24.524 Admin 
Commands 00:17:24.524 -------------- 00:17:24.524 Get Log Page (02h): Supported 00:17:24.524 Identify (06h): Supported 00:17:24.524 Abort (08h): Supported 00:17:24.524 Set Features (09h): Supported 00:17:24.524 Get Features (0Ah): Supported 00:17:24.524 Asynchronous Event Request (0Ch): Supported 00:17:24.524 Keep Alive (18h): Supported 00:17:24.524 I/O Commands 00:17:24.524 ------------ 00:17:24.524 Flush (00h): Supported LBA-Change 00:17:24.524 Write (01h): Supported LBA-Change 00:17:24.524 Read (02h): Supported 00:17:24.524 Compare (05h): Supported 00:17:24.524 Write Zeroes (08h): Supported LBA-Change 00:17:24.524 Dataset Management (09h): Supported LBA-Change 00:17:24.524 Copy (19h): Supported LBA-Change 00:17:24.524 00:17:24.524 Error Log 00:17:24.524 ========= 00:17:24.524 00:17:24.524 Arbitration 00:17:24.524 =========== 00:17:24.524 Arbitration Burst: 1 00:17:24.524 00:17:24.524 Power Management 00:17:24.524 ================ 00:17:24.524 Number of Power States: 1 00:17:24.524 Current Power State: Power State #0 00:17:24.524 Power State #0: 00:17:24.524 Max Power: 0.00 W 00:17:24.525 Non-Operational State: Operational 00:17:24.525 Entry Latency: Not Reported 00:17:24.525 Exit Latency: Not Reported 00:17:24.525 Relative Read Throughput: 0 00:17:24.525 Relative Read Latency: 0 00:17:24.525 Relative Write Throughput: 0 00:17:24.525 Relative Write Latency: 0 00:17:24.525 Idle Power: Not Reported 00:17:24.525 Active Power: Not Reported 00:17:24.525 Non-Operational Permissive Mode: Not Supported 00:17:24.525 00:17:24.525 Health Information 00:17:24.525 ================== 00:17:24.525 Critical Warnings: 00:17:24.525 Available Spare Space: OK 00:17:24.525 Temperature: OK 00:17:24.525 Device Reliability: OK 00:17:24.525 Read Only: No 00:17:24.525 Volatile Memory Backup: OK 00:17:24.525 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:24.525 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:24.525 Available Spare: 0% 00:17:24.525 Available Sp[2024-11-02 11:29:24.707180] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:17:24.525 [2024-11-02 11:29:24.707196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:17:24.525 [2024-11-02 11:29:24.707280] nvme_ctrlr.c:4363:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:17:24.525 [2024-11-02 11:29:24.707302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:24.525 [2024-11-02 11:29:24.707313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:24.525 [2024-11-02 11:29:24.707323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:24.525 [2024-11-02 11:29:24.707332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:24.525 [2024-11-02 11:29:24.710267] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:17:24.525 [2024-11-02 11:29:24.710299] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:17:24.525 [2024-11-02 11:29:24.710838] 
vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:24.525 [2024-11-02 11:29:24.710922] nvme_ctrlr.c:1124:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:17:24.525 [2024-11-02 11:29:24.710935] nvme_ctrlr.c:1127:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:17:24.525 [2024-11-02 11:29:24.711854] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:17:24.525 [2024-11-02 11:29:24.711876] nvme_ctrlr.c:1246:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:17:24.525 [2024-11-02 11:29:24.711928] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:17:24.525 [2024-11-02 11:29:24.713893] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:24.525 are Threshold: 0% 00:17:24.525 Life Percentage Used: 0% 00:17:24.525 Data Units Read: 0 00:17:24.525 Data Units Written: 0 00:17:24.525 Host Read Commands: 0 00:17:24.525 Host Write Commands: 0 00:17:24.525 Controller Busy Time: 0 minutes 00:17:24.525 Power Cycles: 0 00:17:24.525 Power On Hours: 0 hours 00:17:24.525 Unsafe Shutdowns: 0 00:17:24.525 Unrecoverable Media Errors: 0 00:17:24.525 Lifetime Error Log Entries: 0 00:17:24.525 Warning Temperature Time: 0 minutes 00:17:24.525 Critical Temperature Time: 0 minutes 00:17:24.525 00:17:24.525 Number of Queues 00:17:24.525 ================ 00:17:24.525 Number of I/O Submission Queues: 127 00:17:24.525 Number of I/O Completion Queues: 127 00:17:24.525 00:17:24.525 Active Namespaces 00:17:24.525 ================= 00:17:24.525 Namespace ID:1 00:17:24.525 Error Recovery Timeout: Unlimited 00:17:24.525 Command Set Identifier: NVM (00h) 00:17:24.525 Deallocate: Supported 00:17:24.525 Deallocated/Unwritten Error: Not Supported 00:17:24.525 Deallocated Read Value: Unknown 00:17:24.525 Deallocate in Write Zeroes: Not Supported 00:17:24.525 Deallocated Guard Field: 0xFFFF 00:17:24.525 Flush: Supported 00:17:24.525 Reservation: Supported 00:17:24.525 Namespace Sharing Capabilities: Multiple Controllers 00:17:24.525 Size (in LBAs): 131072 (0GiB) 00:17:24.525 Capacity (in LBAs): 131072 (0GiB) 00:17:24.525 Utilization (in LBAs): 131072 (0GiB) 00:17:24.525 NGUID: E96D8518C83B46DB82F71A199127197E 00:17:24.525 UUID: e96d8518-c83b-46db-82f7-1a199127197e 00:17:24.525 Thin Provisioning: Not Supported 00:17:24.525 Per-NS Atomic Units: Yes 00:17:24.525 Atomic Boundary Size (Normal): 0 00:17:24.525 Atomic Boundary Size (PFail): 0 00:17:24.525 Atomic Boundary Offset: 0 00:17:24.525 Maximum Single Source Range Length: 65535 00:17:24.525 Maximum Copy Length: 65535 00:17:24.525 Maximum Source Range Count: 1 00:17:24.525 NGUID/EUI64 Never Reused: No 00:17:24.525 Namespace Write Protected: No 00:17:24.525 Number of LBA Formats: 1 00:17:24.525 Current LBA Format: LBA Format #00 00:17:24.525 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:24.525 00:17:24.525 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 
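With the identify pass above complete, the same VFIOUSER transport ID string drives the bandwidth run that starts just below: queue depth 128, 4 KiB reads, 5 seconds, core mask 0x2 (core 1, matching the "with lcore 1" association). A sketch of the two invocations as they appear in the trace; $bin, $traddr and $subnqn are illustrative shorthand, and $SPDK_DIR is carried over from the launch sketch above:

  bin=$SPDK_DIR/build/bin
  traddr=/var/run/vfio-user/domain/vfio-user1/1
  subnqn=nqn.2019-07.io.spdk:cnode1

  $bin/spdk_nvme_identify -r "trtype:VFIOUSER traddr:$traddr subnqn:$subnqn" \
      -g -L nvme -L nvme_vfio -L vfio_pci                    # -L enables the debug components traced above
  $bin/spdk_nvme_perf -r "trtype:VFIOUSER traddr:$traddr subnqn:$subnqn" \
      -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2           # QD 128, 4 KiB reads, 5 s, core mask 0x2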
00:17:24.783 [2024-11-02 11:29:24.956106] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:30.132 Initializing NVMe Controllers 00:17:30.132 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:30.132 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:17:30.132 Initialization complete. Launching workers. 00:17:30.132 ======================================================== 00:17:30.132 Latency(us) 00:17:30.132 Device Information : IOPS MiB/s Average min max 00:17:30.132 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 32581.77 127.27 3927.98 1199.36 8759.35 00:17:30.132 ======================================================== 00:17:30.132 Total : 32581.77 127.27 3927.98 1199.36 8759.35 00:17:30.132 00:17:30.132 [2024-11-02 11:29:29.974609] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:30.132 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:17:30.132 [2024-11-02 11:29:30.240866] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:35.393 Initializing NVMe Controllers 00:17:35.393 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:35.393 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:17:35.393 Initialization complete. Launching workers. 
00:17:35.393 ======================================================== 00:17:35.393 Latency(us) 00:17:35.393 Device Information : IOPS MiB/s Average min max 00:17:35.393 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16101.43 62.90 7960.42 4969.47 9015.29 00:17:35.393 ======================================================== 00:17:35.393 Total : 16101.43 62.90 7960.42 4969.47 9015.29 00:17:35.393 00:17:35.393 [2024-11-02 11:29:35.277768] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:35.393 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:17:35.393 [2024-11-02 11:29:35.501811] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:40.676 [2024-11-02 11:29:40.580649] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:40.676 Initializing NVMe Controllers 00:17:40.676 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:40.676 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:40.676 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:17:40.676 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:17:40.676 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:17:40.676 Initialization complete. Launching workers. 00:17:40.676 Starting thread on core 2 00:17:40.676 Starting thread on core 3 00:17:40.677 Starting thread on core 1 00:17:40.677 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:17:40.677 [2024-11-02 11:29:40.907699] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:43.963 [2024-11-02 11:29:43.968641] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:43.963 Initializing NVMe Controllers 00:17:43.963 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:43.963 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:43.963 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:17:43.963 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:17:43.963 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:17:43.963 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:17:43.963 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:17:43.963 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:17:43.963 Initialization complete. Launching workers. 
00:17:43.963 Starting thread on core 1 with urgent priority queue 00:17:43.963 Starting thread on core 2 with urgent priority queue 00:17:43.963 Starting thread on core 3 with urgent priority queue 00:17:43.963 Starting thread on core 0 with urgent priority queue 00:17:43.963 SPDK bdev Controller (SPDK1 ) core 0: 6475.00 IO/s 15.44 secs/100000 ios 00:17:43.963 SPDK bdev Controller (SPDK1 ) core 1: 5835.00 IO/s 17.14 secs/100000 ios 00:17:43.963 SPDK bdev Controller (SPDK1 ) core 2: 6285.33 IO/s 15.91 secs/100000 ios 00:17:43.963 SPDK bdev Controller (SPDK1 ) core 3: 6335.33 IO/s 15.78 secs/100000 ios 00:17:43.963 ======================================================== 00:17:43.963 00:17:43.963 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:17:43.963 [2024-11-02 11:29:44.283738] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:43.963 Initializing NVMe Controllers 00:17:43.963 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:43.963 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:43.963 Namespace ID: 1 size: 0GB 00:17:43.963 Initialization complete. 00:17:43.963 INFO: using host memory buffer for IO 00:17:43.963 Hello world! 00:17:43.963 [2024-11-02 11:29:44.317308] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:44.220 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:17:44.479 [2024-11-02 11:29:44.635738] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:45.411 Initializing NVMe Controllers 00:17:45.411 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:45.411 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:45.411 Initialization complete. Launching workers. 
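The remaining checks in this block exercise the same endpoint with the stock SPDK example apps: reconnect (mixed random I/O from cores 1-3), arbitration (priority queues across four cores), hello_world (a single write/read round-trip using a host memory buffer), and the overhead tool whose submit/complete latency histograms follow below. Collected from the trace, their invocations reduce to the following sketch, reusing the illustrative $SPDK_DIR, $traddr and $subnqn variables introduced above:

  tid="trtype:VFIOUSER traddr:$traddr subnqn:$subnqn"

  $SPDK_DIR/build/examples/reconnect    -r "$tid" -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE
  $SPDK_DIR/build/examples/arbitration  -r "$tid" -t 3 -d 256 -g
  $SPDK_DIR/build/examples/hello_world  -r "$tid" -d 256 -g
  $SPDK_DIR/test/nvme/overhead/overhead -r "$tid" -o 4096 -t 1 -H -g -d 256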
00:17:45.411 submit (in ns) avg, min, max = 7033.1, 3488.9, 4001483.3 00:17:45.411 complete (in ns) avg, min, max = 26596.6, 2067.8, 4013792.2 00:17:45.411 00:17:45.411 Submit histogram 00:17:45.411 ================ 00:17:45.411 Range in us Cumulative Count 00:17:45.411 3.484 - 3.508: 0.1003% ( 13) 00:17:45.411 3.508 - 3.532: 0.4860% ( 50) 00:17:45.411 3.532 - 3.556: 1.8979% ( 183) 00:17:45.411 3.556 - 3.579: 4.7755% ( 373) 00:17:45.411 3.579 - 3.603: 11.0940% ( 819) 00:17:45.411 3.603 - 3.627: 19.8195% ( 1131) 00:17:45.411 3.627 - 3.650: 30.3811% ( 1369) 00:17:45.411 3.650 - 3.674: 39.8318% ( 1225) 00:17:45.411 3.674 - 3.698: 47.0143% ( 931) 00:17:45.411 3.698 - 3.721: 53.7571% ( 874) 00:17:45.411 3.721 - 3.745: 58.8952% ( 666) 00:17:45.411 3.745 - 3.769: 64.1568% ( 682) 00:17:45.411 3.769 - 3.793: 68.0759% ( 508) 00:17:45.411 3.793 - 3.816: 71.3547% ( 425) 00:17:45.411 3.816 - 3.840: 74.1784% ( 366) 00:17:45.411 3.840 - 3.864: 77.6886% ( 455) 00:17:45.411 3.864 - 3.887: 81.2992% ( 468) 00:17:45.411 3.887 - 3.911: 84.2000% ( 376) 00:17:45.411 3.911 - 3.935: 86.6996% ( 324) 00:17:45.411 3.935 - 3.959: 88.5974% ( 246) 00:17:45.412 3.959 - 3.982: 90.5493% ( 253) 00:17:45.412 3.982 - 4.006: 92.1694% ( 210) 00:17:45.412 4.006 - 4.030: 93.4501% ( 166) 00:17:45.412 4.030 - 4.053: 94.3990% ( 123) 00:17:45.412 4.053 - 4.077: 95.2014% ( 104) 00:17:45.412 4.077 - 4.101: 95.7260% ( 68) 00:17:45.412 4.101 - 4.124: 95.9266% ( 26) 00:17:45.412 4.124 - 4.148: 96.1194% ( 25) 00:17:45.412 4.148 - 4.172: 96.2043% ( 11) 00:17:45.412 4.172 - 4.196: 96.2969% ( 12) 00:17:45.412 4.196 - 4.219: 96.4126% ( 15) 00:17:45.412 4.219 - 4.243: 96.4975% ( 11) 00:17:45.412 4.243 - 4.267: 96.5669% ( 9) 00:17:45.412 4.267 - 4.290: 96.6749% ( 14) 00:17:45.412 4.290 - 4.314: 96.7443% ( 9) 00:17:45.412 4.314 - 4.338: 96.7829% ( 5) 00:17:45.412 4.338 - 4.361: 96.8138% ( 4) 00:17:45.412 4.361 - 4.385: 96.8215% ( 1) 00:17:45.412 4.385 - 4.409: 96.8446% ( 3) 00:17:45.412 4.409 - 4.433: 96.8909% ( 6) 00:17:45.412 4.456 - 4.480: 96.9063% ( 2) 00:17:45.412 4.480 - 4.504: 96.9141% ( 1) 00:17:45.412 4.504 - 4.527: 96.9372% ( 3) 00:17:45.412 4.527 - 4.551: 96.9449% ( 1) 00:17:45.412 4.551 - 4.575: 96.9681% ( 3) 00:17:45.412 4.575 - 4.599: 97.0143% ( 6) 00:17:45.412 4.599 - 4.622: 97.0606% ( 6) 00:17:45.412 4.622 - 4.646: 97.1069% ( 6) 00:17:45.412 4.646 - 4.670: 97.1764% ( 9) 00:17:45.412 4.670 - 4.693: 97.2381% ( 8) 00:17:45.412 4.693 - 4.717: 97.2998% ( 8) 00:17:45.412 4.717 - 4.741: 97.3461% ( 6) 00:17:45.412 4.741 - 4.764: 97.4001% ( 7) 00:17:45.412 4.764 - 4.788: 97.4078% ( 1) 00:17:45.412 4.788 - 4.812: 97.4541% ( 6) 00:17:45.412 4.812 - 4.836: 97.5158% ( 8) 00:17:45.412 4.836 - 4.859: 97.5621% ( 6) 00:17:45.412 4.859 - 4.883: 97.5698% ( 1) 00:17:45.412 4.883 - 4.907: 97.6315% ( 8) 00:17:45.412 4.907 - 4.930: 97.6624% ( 4) 00:17:45.412 4.930 - 4.954: 97.7164% ( 7) 00:17:45.412 4.954 - 4.978: 97.7550% ( 5) 00:17:45.412 4.978 - 5.001: 97.8013% ( 6) 00:17:45.412 5.001 - 5.025: 97.8553% ( 7) 00:17:45.412 5.025 - 5.049: 97.8630% ( 1) 00:17:45.412 5.049 - 5.073: 97.8938% ( 4) 00:17:45.412 5.073 - 5.096: 97.9324% ( 5) 00:17:45.412 5.096 - 5.120: 97.9478% ( 2) 00:17:45.412 5.120 - 5.144: 97.9556% ( 1) 00:17:45.412 5.144 - 5.167: 97.9633% ( 1) 00:17:45.412 5.167 - 5.191: 97.9864% ( 3) 00:17:45.412 5.191 - 5.215: 98.0019% ( 2) 00:17:45.412 5.215 - 5.239: 98.0096% ( 1) 00:17:45.412 5.239 - 5.262: 98.0327% ( 3) 00:17:45.412 5.262 - 5.286: 98.0404% ( 1) 00:17:45.412 5.286 - 5.310: 98.0481% ( 1) 00:17:45.412 5.333 - 5.357: 98.0559% ( 1) 
00:17:45.412 5.404 - 5.428: 98.0636% ( 1) 00:17:45.412 5.452 - 5.476: 98.0713% ( 1) 00:17:45.412 5.523 - 5.547: 98.0790% ( 1) 00:17:45.412 5.594 - 5.618: 98.0867% ( 1) 00:17:45.412 5.618 - 5.641: 98.1021% ( 2) 00:17:45.412 5.641 - 5.665: 98.1176% ( 2) 00:17:45.412 5.713 - 5.736: 98.1253% ( 1) 00:17:45.412 5.760 - 5.784: 98.1330% ( 1) 00:17:45.412 5.784 - 5.807: 98.1407% ( 1) 00:17:45.412 5.855 - 5.879: 98.1484% ( 1) 00:17:45.412 5.879 - 5.902: 98.1561% ( 1) 00:17:45.412 5.902 - 5.926: 98.1716% ( 2) 00:17:45.412 6.116 - 6.163: 98.1793% ( 1) 00:17:45.412 6.210 - 6.258: 98.1870% ( 1) 00:17:45.412 6.447 - 6.495: 98.1947% ( 1) 00:17:45.412 6.590 - 6.637: 98.2024% ( 1) 00:17:45.412 6.732 - 6.779: 98.2179% ( 2) 00:17:45.412 6.779 - 6.827: 98.2256% ( 1) 00:17:45.412 6.827 - 6.874: 98.2333% ( 1) 00:17:45.412 6.874 - 6.921: 98.2410% ( 1) 00:17:45.412 6.921 - 6.969: 98.2564% ( 2) 00:17:45.412 7.016 - 7.064: 98.2642% ( 1) 00:17:45.412 7.253 - 7.301: 98.2719% ( 1) 00:17:45.412 7.301 - 7.348: 98.2796% ( 1) 00:17:45.412 7.348 - 7.396: 98.2873% ( 1) 00:17:45.412 7.396 - 7.443: 98.3104% ( 3) 00:17:45.412 7.443 - 7.490: 98.3182% ( 1) 00:17:45.412 7.538 - 7.585: 98.3413% ( 3) 00:17:45.412 7.585 - 7.633: 98.3644% ( 3) 00:17:45.412 7.633 - 7.680: 98.3722% ( 1) 00:17:45.412 7.727 - 7.775: 98.3876% ( 2) 00:17:45.412 7.775 - 7.822: 98.4030% ( 2) 00:17:45.412 7.822 - 7.870: 98.4185% ( 2) 00:17:45.412 7.870 - 7.917: 98.4262% ( 1) 00:17:45.412 7.917 - 7.964: 98.4339% ( 1) 00:17:45.412 7.964 - 8.012: 98.4570% ( 3) 00:17:45.412 8.012 - 8.059: 98.4802% ( 3) 00:17:45.412 8.154 - 8.201: 98.4879% ( 1) 00:17:45.412 8.249 - 8.296: 98.4956% ( 1) 00:17:45.412 8.296 - 8.344: 98.5110% ( 2) 00:17:45.412 8.344 - 8.391: 98.5265% ( 2) 00:17:45.412 8.391 - 8.439: 98.5342% ( 1) 00:17:45.412 8.439 - 8.486: 98.5419% ( 1) 00:17:45.412 8.486 - 8.533: 98.5496% ( 1) 00:17:45.412 8.628 - 8.676: 98.5573% ( 1) 00:17:45.412 9.007 - 9.055: 98.5728% ( 2) 00:17:45.412 9.387 - 9.434: 98.5805% ( 1) 00:17:45.412 9.434 - 9.481: 98.5882% ( 1) 00:17:45.412 9.529 - 9.576: 98.6036% ( 2) 00:17:45.412 9.671 - 9.719: 98.6113% ( 1) 00:17:45.412 9.908 - 9.956: 98.6190% ( 1) 00:17:45.412 10.050 - 10.098: 98.6268% ( 1) 00:17:45.412 10.240 - 10.287: 98.6422% ( 2) 00:17:45.412 10.287 - 10.335: 98.6499% ( 1) 00:17:45.412 10.335 - 10.382: 98.6576% ( 1) 00:17:45.412 10.430 - 10.477: 98.6653% ( 1) 00:17:45.412 10.477 - 10.524: 98.6730% ( 1) 00:17:45.412 10.809 - 10.856: 98.6808% ( 1) 00:17:45.412 11.141 - 11.188: 98.7039% ( 3) 00:17:45.412 11.425 - 11.473: 98.7116% ( 1) 00:17:45.412 11.710 - 11.757: 98.7193% ( 1) 00:17:45.412 12.231 - 12.326: 98.7270% ( 1) 00:17:45.412 12.326 - 12.421: 98.7502% ( 3) 00:17:45.412 12.421 - 12.516: 98.7656% ( 2) 00:17:45.412 12.610 - 12.705: 98.7811% ( 2) 00:17:45.412 12.705 - 12.800: 98.7965% ( 2) 00:17:45.412 12.895 - 12.990: 98.8119% ( 2) 00:17:45.412 12.990 - 13.084: 98.8196% ( 1) 00:17:45.412 13.179 - 13.274: 98.8273% ( 1) 00:17:45.412 13.369 - 13.464: 98.8351% ( 1) 00:17:45.412 13.748 - 13.843: 98.8505% ( 2) 00:17:45.412 13.938 - 14.033: 98.8582% ( 1) 00:17:45.412 14.222 - 14.317: 98.8659% ( 1) 00:17:45.412 14.601 - 14.696: 98.8736% ( 1) 00:17:45.412 14.696 - 14.791: 98.8813% ( 1) 00:17:45.412 15.360 - 15.455: 98.8968% ( 2) 00:17:45.412 16.308 - 16.403: 98.9045% ( 1) 00:17:45.412 17.161 - 17.256: 98.9122% ( 1) 00:17:45.412 17.256 - 17.351: 98.9353% ( 3) 00:17:45.412 17.351 - 17.446: 98.9971% ( 8) 00:17:45.412 17.446 - 17.541: 99.0279% ( 4) 00:17:45.412 17.541 - 17.636: 99.1205% ( 12) 00:17:45.412 17.636 - 17.730: 99.2054% ( 11) 
00:17:45.412 17.730 - 17.825: 99.2131% ( 1) 00:17:45.412 17.825 - 17.920: 99.2671% ( 7) 00:17:45.412 17.920 - 18.015: 99.2979% ( 4) 00:17:45.412 18.015 - 18.110: 99.3520% ( 7) 00:17:45.412 18.110 - 18.204: 99.4291% ( 10) 00:17:45.412 18.204 - 18.299: 99.4908% ( 8) 00:17:45.412 18.299 - 18.394: 99.5911% ( 13) 00:17:45.412 18.394 - 18.489: 99.6451% ( 7) 00:17:45.412 18.489 - 18.584: 99.6605% ( 2) 00:17:45.412 18.584 - 18.679: 99.7068% ( 6) 00:17:45.412 18.679 - 18.773: 99.7146% ( 1) 00:17:45.412 18.773 - 18.868: 99.7454% ( 4) 00:17:45.412 18.868 - 18.963: 99.7840% ( 5) 00:17:45.412 18.963 - 19.058: 99.7994% ( 2) 00:17:45.412 19.058 - 19.153: 99.8226% ( 3) 00:17:45.412 19.153 - 19.247: 99.8303% ( 1) 00:17:45.412 19.342 - 19.437: 99.8380% ( 1) 00:17:45.412 19.437 - 19.532: 99.8457% ( 1) 00:17:45.412 20.670 - 20.764: 99.8611% ( 2) 00:17:45.412 21.428 - 21.523: 99.8688% ( 1) 00:17:45.412 21.997 - 22.092: 99.8766% ( 1) 00:17:45.412 23.324 - 23.419: 99.8843% ( 1) 00:17:45.412 24.178 - 24.273: 99.8920% ( 1) 00:17:45.412 24.462 - 24.652: 99.8997% ( 1) 00:17:45.412 25.410 - 25.600: 99.9074% ( 1) 00:17:45.412 25.790 - 25.979: 99.9151% ( 1) 00:17:45.412 28.065 - 28.255: 99.9229% ( 1) 00:17:45.412 3980.705 - 4004.978: 100.0000% ( 10) 00:17:45.412 00:17:45.412 Complete histogram 00:17:45.413 ================== 00:17:45.413 Range in us Cumulative Count 00:17:45.413 2.062 - 2.074: 0.3549% ( 46) 00:17:45.413 2.074 - 2.086: 21.0461% ( 2682) 00:17:45.413 2.086 - 2.098: 28.6298% ( 983) 00:17:45.413 2.098 - 2.110: 34.1614% ( 717) 00:17:45.413 2.110 - 2.121: 57.4757% ( 3022) 00:17:45.413 2.121 - 2.133: 60.6696% ( 414) 00:17:45.413 2.133 - 2.145: 64.5657% ( 505) 00:17:45.413 2.145 - 2.157: 73.9778% ( 1220) 00:17:45.413 2.157 - 2.169: 75.6365% ( 215) 00:17:45.413 2.169 - 2.181: 80.0185% ( 568) 00:17:45.413 2.181 - 2.193: 87.2859% ( 942) 00:17:45.413 2.193 - 2.204: 88.3737% ( 141) 00:17:45.413 2.204 - 2.216: 89.2686% ( 116) 00:17:45.413 2.216 - 2.228: 90.6959% ( 185) 00:17:45.413 2.228 - 2.240: 92.2774% ( 205) 00:17:45.413 2.240 - 2.252: 93.4578% ( 153) 00:17:45.413 2.252 - 2.264: 94.1676% ( 92) 00:17:45.413 2.264 - 2.276: 94.5533% ( 50) 00:17:45.413 2.276 - 2.287: 94.7539% ( 26) 00:17:45.413 2.287 - 2.299: 95.0393% ( 37) 00:17:45.413 2.299 - 2.311: 95.3171% ( 36) 00:17:45.413 2.311 - 2.323: 95.5562% ( 31) 00:17:45.413 2.323 - 2.335: 95.6102% ( 7) 00:17:45.413 2.335 - 2.347: 95.6720% ( 8) 00:17:45.413 2.347 - 2.359: 95.7414% ( 9) 00:17:45.413 2.359 - 2.370: 95.9034% ( 21) 00:17:45.413 2.370 - 2.382: 96.0731% ( 22) 00:17:45.413 2.382 - 2.394: 96.4280% ( 46) 00:17:45.413 2.394 - 2.406: 96.6826% ( 33) 00:17:45.413 2.406 - 2.418: 96.8369% ( 20) 00:17:45.413 2.418 - 2.430: 97.0066% ( 22) 00:17:45.413 2.430 - 2.441: 97.1918% ( 24) 00:17:45.413 2.441 - 2.453: 97.3615% ( 22) 00:17:45.413 2.453 - 2.465: 97.5930% ( 30) 00:17:45.413 2.465 - 2.477: 97.7087% ( 15) 00:17:45.413 2.477 - 2.489: 97.7936% ( 11) 00:17:45.413 2.489 - 2.501: 97.8707% ( 10) 00:17:45.413 2.501 - 2.513: 97.9401% ( 9) 00:17:45.413 2.513 - 2.524: 98.0096% ( 9) 00:17:45.413 2.524 - 2.536: 98.0636% ( 7) 00:17:45.413 2.536 - 2.548: 98.0790% ( 2) 00:17:45.413 2.548 - 2.560: 98.0867% ( 1) 00:17:45.413 2.572 - 2.584: 98.0944% ( 1) 00:17:45.413 2.596 - 2.607: 98.1021% ( 1) 00:17:45.413 2.607 - 2.619: 98.1099% ( 1) 00:17:45.413 2.631 - 2.643: 98.1176% ( 1) 00:17:45.413 2.714 - 2.726: 98.1253% ( 1) 00:17:45.413 2.750 - 2.761: 98.1330% ( 1) 00:17:45.413 2.761 - 2.773: 98.1407% ( 1) 00:17:45.413 2.785 - 2.797: 98.1484% ( 1) 00:17:45.413 2.797 - 2.809: 98.1561% ( 1) 
00:17:45.413 2.856 - 2.868: 98.1639% ( 1) 00:17:45.413 2.892 - 2.904: 98.1716% ( 1) 00:17:45.413 2.904 - 2.916: 98.1793% ( 1) 00:17:45.413 2.916 - 2.927: 98.1870% ( 1) 00:17:45.413 2.951 - 2.963: 98.1947% ( 1) 00:17:45.413 2.963 - 2.975: 98.2024% ( 1) 00:17:45.413 2.999 - 3.010: 98.2102% ( 1) 00:17:45.413 3.022 - 3.034: 98.2179% ( 1) 00:17:45.413 3.034 - 3.058: 98.2410% ( 3) 00:17:45.413 3.058 - 3.081: 98.2564% ( 2) 00:17:45.413 3.081 - 3.105: 98.2642% ( 1) 00:17:45.413 3.105 - 3.129: 98.2796% ( 2) 00:17:45.413 3.129 - 3.153: 98.2873% ( 1) 00:17:45.413 [2024-11-02 11:29:45.657795] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:45.413 3.200 - 3.224: 98.2950% ( 1) 00:17:45.413 3.224 - 3.247: 98.3182% ( 3) 00:17:45.413 3.247 - 3.271: 98.3259% ( 1) 00:17:45.413 3.295 - 3.319: 98.3413% ( 2) 00:17:45.413 3.366 - 3.390: 98.3490% ( 1) 00:17:45.413 3.413 - 3.437: 98.3644% ( 2) 00:17:45.413 3.461 - 3.484: 98.3722% ( 1) 00:17:45.413 3.484 - 3.508: 98.3799% ( 1) 00:17:45.413 3.508 - 3.532: 98.3953% ( 2) 00:17:45.413 3.532 - 3.556: 98.4262% ( 4) 00:17:45.413 3.556 - 3.579: 98.4339% ( 1) 00:17:45.413 3.579 - 3.603: 98.4570% ( 3) 00:17:45.413 3.603 - 3.627: 98.4647% ( 1) 00:17:45.413 3.627 - 3.650: 98.4802% ( 2) 00:17:45.413 3.698 - 3.721: 98.4879% ( 1) 00:17:45.413 3.721 - 3.745: 98.4956% ( 1) 00:17:45.413 3.745 - 3.769: 98.5033% ( 1) 00:17:45.413 3.769 - 3.793: 98.5342% ( 4) 00:17:45.413 3.793 - 3.816: 98.5573% ( 3) 00:17:45.413 3.816 - 3.840: 98.5650% ( 1) 00:17:45.413 3.840 - 3.864: 98.5805% ( 2) 00:17:45.413 3.864 - 3.887: 98.5959% ( 2) 00:17:45.413 3.887 - 3.911: 98.6113% ( 2) 00:17:45.413 3.935 - 3.959: 98.6190% ( 1) 00:17:45.413 3.959 - 3.982: 98.6268% ( 1) 00:17:45.413 4.053 - 4.077: 98.6422% ( 2) 00:17:45.413 4.101 - 4.124: 98.6576% ( 2) 00:17:45.413 4.267 - 4.290: 98.6653% ( 1) 00:17:45.413 4.338 - 4.361: 98.6730% ( 1) 00:17:45.413 5.096 - 5.120: 98.6808% ( 1) 00:17:45.413 5.167 - 5.191: 98.6885% ( 1) 00:17:45.413 5.357 - 5.381: 98.6962% ( 1) 00:17:45.413 5.404 - 5.428: 98.7039% ( 1) 00:17:45.413 5.547 - 5.570: 98.7193% ( 2) 00:17:45.413 5.570 - 5.594: 98.7270% ( 1) 00:17:45.413 5.689 - 5.713: 98.7425% ( 2) 00:17:45.413 5.760 - 5.784: 98.7502% ( 1) 00:17:45.413 5.807 - 5.831: 98.7656% ( 2) 00:17:45.413 5.855 - 5.879: 98.7733% ( 1) 00:17:45.413 5.950 - 5.973: 98.7811% ( 1) 00:17:45.413 5.997 - 6.021: 98.7888% ( 1) 00:17:45.413 6.044 - 6.068: 98.7965% ( 1) 00:17:45.413 6.210 - 6.258: 98.8042% ( 1) 00:17:45.413 6.258 - 6.305: 98.8119% ( 1) 00:17:45.413 6.590 - 6.637: 98.8196% ( 1) 00:17:45.413 6.732 - 6.779: 98.8273% ( 1) 00:17:45.413 6.827 - 6.874: 98.8351% ( 1) 00:17:45.413 7.206 - 7.253: 98.8428% ( 1) 00:17:45.413 7.775 - 7.822: 98.8505% ( 1) 00:17:45.413 8.107 - 8.154: 98.8582% ( 1) 00:17:45.413 8.581 - 8.628: 98.8659% ( 1) 00:17:45.413 8.676 - 8.723: 98.8736% ( 1) 00:17:45.413 9.481 - 9.529: 98.8813% ( 1) 00:17:45.413 9.671 - 9.719: 98.8891% ( 1) 00:17:45.413 15.360 - 15.455: 98.8968% ( 1) 00:17:45.413 15.550 - 15.644: 98.9045% ( 1) 00:17:45.413 15.644 - 15.739: 98.9276% ( 3) 00:17:45.413 15.739 - 15.834: 98.9508% ( 3) 00:17:45.413 15.834 - 15.929: 98.9662% ( 2) 00:17:45.413 15.929 - 16.024: 98.9971% ( 4) 00:17:45.413 16.024 - 16.119: 99.0125% ( 2) 00:17:45.413 16.119 - 16.213: 99.0356% ( 3) 00:17:45.413 16.213 - 16.308: 99.0819% ( 6) 00:17:45.413 16.308 - 16.403: 99.1128% ( 4) 00:17:45.413 16.403 - 16.498: 99.1437% ( 4) 00:17:45.413 16.498 - 16.593: 99.1822% ( 5) 00:17:45.413 16.593 - 16.687: 99.2131% ( 4) 00:17:45.413 16.687 - 
16.782: 99.2285% ( 2) 00:17:45.413 16.782 - 16.877: 99.2439% ( 2) 00:17:45.413 16.877 - 16.972: 99.2748% ( 4) 00:17:45.413 16.972 - 17.067: 99.2902% ( 2) 00:17:45.413 17.067 - 17.161: 99.2979% ( 1) 00:17:45.413 17.161 - 17.256: 99.3057% ( 1) 00:17:45.413 17.256 - 17.351: 99.3211% ( 2) 00:17:45.413 17.351 - 17.446: 99.3442% ( 3) 00:17:45.413 17.541 - 17.636: 99.3520% ( 1) 00:17:45.413 17.636 - 17.730: 99.3597% ( 1) 00:17:45.413 18.299 - 18.394: 99.3751% ( 2) 00:17:45.413 32.806 - 32.996: 99.3828% ( 1) 00:17:45.413 34.702 - 34.892: 99.3905% ( 1) 00:17:45.413 3980.705 - 4004.978: 99.9614% ( 74) 00:17:45.413 4004.978 - 4029.250: 100.0000% ( 5) 00:17:45.413 00:17:45.413 11:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:17:45.413 11:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:17:45.413 11:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:17:45.413 11:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:17:45.413 11:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:45.671 [ 00:17:45.671 { 00:17:45.671 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:45.671 "subtype": "Discovery", 00:17:45.671 "listen_addresses": [], 00:17:45.671 "allow_any_host": true, 00:17:45.671 "hosts": [] 00:17:45.671 }, 00:17:45.671 { 00:17:45.671 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:45.671 "subtype": "NVMe", 00:17:45.671 "listen_addresses": [ 00:17:45.671 { 00:17:45.671 "trtype": "VFIOUSER", 00:17:45.671 "adrfam": "IPv4", 00:17:45.671 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:45.671 "trsvcid": "0" 00:17:45.671 } 00:17:45.671 ], 00:17:45.671 "allow_any_host": true, 00:17:45.671 "hosts": [], 00:17:45.671 "serial_number": "SPDK1", 00:17:45.671 "model_number": "SPDK bdev Controller", 00:17:45.671 "max_namespaces": 32, 00:17:45.671 "min_cntlid": 1, 00:17:45.671 "max_cntlid": 65519, 00:17:45.671 "namespaces": [ 00:17:45.671 { 00:17:45.671 "nsid": 1, 00:17:45.671 "bdev_name": "Malloc1", 00:17:45.671 "name": "Malloc1", 00:17:45.671 "nguid": "E96D8518C83B46DB82F71A199127197E", 00:17:45.671 "uuid": "e96d8518-c83b-46db-82f7-1a199127197e" 00:17:45.671 } 00:17:45.671 ] 00:17:45.671 }, 00:17:45.671 { 00:17:45.671 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:45.671 "subtype": "NVMe", 00:17:45.671 "listen_addresses": [ 00:17:45.671 { 00:17:45.671 "trtype": "VFIOUSER", 00:17:45.671 "adrfam": "IPv4", 00:17:45.671 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:45.671 "trsvcid": "0" 00:17:45.671 } 00:17:45.671 ], 00:17:45.672 "allow_any_host": true, 00:17:45.672 "hosts": [], 00:17:45.672 "serial_number": "SPDK2", 00:17:45.672 "model_number": "SPDK bdev Controller", 00:17:45.672 "max_namespaces": 32, 00:17:45.672 "min_cntlid": 1, 00:17:45.672 "max_cntlid": 65519, 00:17:45.672 "namespaces": [ 00:17:45.672 { 00:17:45.672 "nsid": 1, 00:17:45.672 "bdev_name": "Malloc2", 00:17:45.672 "name": "Malloc2", 00:17:45.672 "nguid": "36D3F05480264045A71E625DF7E74EC0", 00:17:45.672 "uuid": "36d3f054-8026-4045-a71e-625df7e74ec0" 00:17:45.672 } 00:17:45.672 ] 00:17:45.672 } 00:17:45.672 ] 00:17:45.672 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:17:45.672 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3811355 00:17:45.672 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:17:45.672 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:17:45.672 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # local i=0 00:17:45.672 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:45.672 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1274 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:45.672 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1278 -- # return 0 00:17:45.672 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:17:45.672 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:17:45.930 [2024-11-02 11:29:46.199768] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:45.931 Malloc3 00:17:46.188 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:17:46.446 [2024-11-02 11:29:46.603757] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:46.446 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:46.446 Asynchronous Event Request test 00:17:46.446 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:46.446 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:46.446 Registering asynchronous event callbacks... 00:17:46.446 Starting namespace attribute notice tests for all controllers... 00:17:46.446 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:46.446 aer_cb - Changed Namespace 00:17:46.446 Cleaning up... 
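The AER check above is a touch-file handshake: the aer example is launched against vfio-user1 with -t /tmp/aer_touch_file, the script waits for that file to appear, removes it, and then hot-adds a namespace (bdev_malloc_create plus nvmf_subsystem_add_ns) so the target raises a namespace-attribute-change AEN; the nvmf_get_subsystems listing that follows confirms Malloc3 attached as nsid 2. A minimal stand-alone sketch of the same handshake, assuming an SPDK checkout in the current directory (the rpc_py/aer_bin variable names are illustrative, not taken from the test script):

#!/usr/bin/env bash
# Sketch of the AER touch-file handshake exercised above (paths assumed; adjust to your checkout).
rpc_py=./scripts/rpc.py
aer_bin=./test/nvme/aer/aer
touch_file=/tmp/aer_touch_file
traddr=/var/run/vfio-user/domain/vfio-user1/1

rm -f "$touch_file"

# Start the AER listener in the background; it creates $touch_file once it is ready for events.
"$aer_bin" -r "trtype:VFIOUSER traddr:$traddr subnqn:nqn.2019-07.io.spdk:cnode1" \
    -n 2 -g -t "$touch_file" &
aerpid=$!

# Wait (bounded) for the listener to signal readiness, then clear the marker.
for i in $(seq 1 200); do
    [ -e "$touch_file" ] && break
    sleep 0.1
done
rm -f "$touch_file"

# Hot-add a namespace; the target should emit a namespace-attribute-change AEN.
"$rpc_py" bdev_malloc_create 64 512 --name Malloc3
"$rpc_py" nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2

# Optionally confirm the new namespace, then wait for the listener to log
# "aer_cb - Changed Namespace" and exit.
"$rpc_py" nvmf_get_subsystems
wait "$aerpid"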
00:17:46.705 [ 00:17:46.705 { 00:17:46.705 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:46.705 "subtype": "Discovery", 00:17:46.705 "listen_addresses": [], 00:17:46.705 "allow_any_host": true, 00:17:46.705 "hosts": [] 00:17:46.705 }, 00:17:46.705 { 00:17:46.705 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:46.705 "subtype": "NVMe", 00:17:46.705 "listen_addresses": [ 00:17:46.705 { 00:17:46.705 "trtype": "VFIOUSER", 00:17:46.705 "adrfam": "IPv4", 00:17:46.705 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:46.705 "trsvcid": "0" 00:17:46.705 } 00:17:46.705 ], 00:17:46.705 "allow_any_host": true, 00:17:46.705 "hosts": [], 00:17:46.705 "serial_number": "SPDK1", 00:17:46.705 "model_number": "SPDK bdev Controller", 00:17:46.705 "max_namespaces": 32, 00:17:46.705 "min_cntlid": 1, 00:17:46.705 "max_cntlid": 65519, 00:17:46.705 "namespaces": [ 00:17:46.705 { 00:17:46.705 "nsid": 1, 00:17:46.705 "bdev_name": "Malloc1", 00:17:46.705 "name": "Malloc1", 00:17:46.705 "nguid": "E96D8518C83B46DB82F71A199127197E", 00:17:46.705 "uuid": "e96d8518-c83b-46db-82f7-1a199127197e" 00:17:46.705 }, 00:17:46.705 { 00:17:46.705 "nsid": 2, 00:17:46.705 "bdev_name": "Malloc3", 00:17:46.705 "name": "Malloc3", 00:17:46.705 "nguid": "463136F23BB54DFCACF6FDA0892D6936", 00:17:46.705 "uuid": "463136f2-3bb5-4dfc-acf6-fda0892d6936" 00:17:46.705 } 00:17:46.705 ] 00:17:46.705 }, 00:17:46.705 { 00:17:46.705 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:46.705 "subtype": "NVMe", 00:17:46.705 "listen_addresses": [ 00:17:46.705 { 00:17:46.705 "trtype": "VFIOUSER", 00:17:46.705 "adrfam": "IPv4", 00:17:46.705 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:46.705 "trsvcid": "0" 00:17:46.705 } 00:17:46.705 ], 00:17:46.705 "allow_any_host": true, 00:17:46.705 "hosts": [], 00:17:46.705 "serial_number": "SPDK2", 00:17:46.705 "model_number": "SPDK bdev Controller", 00:17:46.705 "max_namespaces": 32, 00:17:46.705 "min_cntlid": 1, 00:17:46.705 "max_cntlid": 65519, 00:17:46.705 "namespaces": [ 00:17:46.705 { 00:17:46.705 "nsid": 1, 00:17:46.705 "bdev_name": "Malloc2", 00:17:46.705 "name": "Malloc2", 00:17:46.705 "nguid": "36D3F05480264045A71E625DF7E74EC0", 00:17:46.705 "uuid": "36d3f054-8026-4045-a71e-625df7e74ec0" 00:17:46.705 } 00:17:46.705 ] 00:17:46.705 } 00:17:46.705 ] 00:17:46.705 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3811355 00:17:46.705 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:46.705 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:17:46.705 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:17:46.705 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:17:46.705 [2024-11-02 11:29:46.909191] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 
00:17:46.705 [2024-11-02 11:29:46.909232] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3811478 ] 00:17:46.705 [2024-11-02 11:29:46.966800] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:17:46.705 [2024-11-02 11:29:46.973524] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:46.705 [2024-11-02 11:29:46.973567] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fbec6224000 00:17:46.705 [2024-11-02 11:29:46.974521] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:46.705 [2024-11-02 11:29:46.975524] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:46.705 [2024-11-02 11:29:46.976527] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:46.705 [2024-11-02 11:29:46.977532] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:46.705 [2024-11-02 11:29:46.978535] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:46.705 [2024-11-02 11:29:46.979547] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:46.705 [2024-11-02 11:29:46.980569] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:46.705 [2024-11-02 11:29:46.981570] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:46.705 [2024-11-02 11:29:46.982581] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:46.705 [2024-11-02 11:29:46.982603] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fbec4f1c000 00:17:46.705 [2024-11-02 11:29:46.983757] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:46.705 [2024-11-02 11:29:47.000609] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:17:46.705 [2024-11-02 11:29:47.000645] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:17:46.706 [2024-11-02 11:29:47.002724] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:17:46.706 [2024-11-02 11:29:47.002776] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:17:46.706 [2024-11-02 11:29:47.002862] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:17:46.706 
[2024-11-02 11:29:47.002886] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:17:46.706 [2024-11-02 11:29:47.002896] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:17:46.706 [2024-11-02 11:29:47.003727] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:17:46.706 [2024-11-02 11:29:47.003747] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:17:46.706 [2024-11-02 11:29:47.003759] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:17:46.706 [2024-11-02 11:29:47.004727] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:17:46.706 [2024-11-02 11:29:47.004747] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:17:46.706 [2024-11-02 11:29:47.004760] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:17:46.706 [2024-11-02 11:29:47.005735] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:17:46.706 [2024-11-02 11:29:47.005755] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:46.706 [2024-11-02 11:29:47.006746] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:17:46.706 [2024-11-02 11:29:47.006766] nvme_ctrlr.c:3870:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:17:46.706 [2024-11-02 11:29:47.006775] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:17:46.706 [2024-11-02 11:29:47.006787] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:46.706 [2024-11-02 11:29:47.006896] nvme_ctrlr.c:4068:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:17:46.706 [2024-11-02 11:29:47.006904] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:46.706 [2024-11-02 11:29:47.006916] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:17:46.706 [2024-11-02 11:29:47.007750] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:17:46.706 [2024-11-02 11:29:47.008751] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:17:46.706 [2024-11-02 11:29:47.009757] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: 
*DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:17:46.706 [2024-11-02 11:29:47.010757] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:46.706 [2024-11-02 11:29:47.010841] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:46.706 [2024-11-02 11:29:47.011777] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:17:46.706 [2024-11-02 11:29:47.011796] nvme_ctrlr.c:3905:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:46.706 [2024-11-02 11:29:47.011805] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:17:46.706 [2024-11-02 11:29:47.011828] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:17:46.706 [2024-11-02 11:29:47.011841] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:17:46.706 [2024-11-02 11:29:47.011860] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:46.706 [2024-11-02 11:29:47.011869] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:46.706 [2024-11-02 11:29:47.011875] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:46.706 [2024-11-02 11:29:47.011892] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:46.706 [2024-11-02 11:29:47.018271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:17:46.706 [2024-11-02 11:29:47.018295] nvme_ctrlr.c:2054:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:17:46.706 [2024-11-02 11:29:47.018303] nvme_ctrlr.c:2058:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:17:46.706 [2024-11-02 11:29:47.018310] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:17:46.706 [2024-11-02 11:29:47.018324] nvme_ctrlr.c:2072:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:17:46.706 [2024-11-02 11:29:47.018331] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:17:46.706 [2024-11-02 11:29:47.018339] nvme_ctrlr.c:2100:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:17:46.706 [2024-11-02 11:29:47.018346] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:17:46.706 [2024-11-02 11:29:47.018358] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:17:46.706 [2024-11-02 
11:29:47.018373] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:17:46.706 [2024-11-02 11:29:47.026268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:17:46.706 [2024-11-02 11:29:47.026301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:46.706 [2024-11-02 11:29:47.026319] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:46.706 [2024-11-02 11:29:47.026331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:46.706 [2024-11-02 11:29:47.026343] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:46.706 [2024-11-02 11:29:47.026352] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:17:46.706 [2024-11-02 11:29:47.026363] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:46.706 [2024-11-02 11:29:47.026377] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:17:46.706 [2024-11-02 11:29:47.034266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:17:46.706 [2024-11-02 11:29:47.034289] nvme_ctrlr.c:3011:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:17:46.706 [2024-11-02 11:29:47.034299] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:46.706 [2024-11-02 11:29:47.034310] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:17:46.706 [2024-11-02 11:29:47.034320] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:17:46.706 [2024-11-02 11:29:47.034334] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:46.706 [2024-11-02 11:29:47.042268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:17:46.706 [2024-11-02 11:29:47.042343] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:17:46.706 [2024-11-02 11:29:47.042359] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:17:46.706 [2024-11-02 11:29:47.042371] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:17:46.706 [2024-11-02 11:29:47.042379] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 
0x2000002f9000 00:17:46.706 [2024-11-02 11:29:47.042385] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:46.706 [2024-11-02 11:29:47.042395] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:17:46.706 [2024-11-02 11:29:47.050266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:17:46.706 [2024-11-02 11:29:47.050289] nvme_ctrlr.c:4699:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:17:46.706 [2024-11-02 11:29:47.050309] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:17:46.706 [2024-11-02 11:29:47.050324] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:17:46.706 [2024-11-02 11:29:47.050341] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:46.706 [2024-11-02 11:29:47.050350] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:46.706 [2024-11-02 11:29:47.050356] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:46.706 [2024-11-02 11:29:47.050366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:46.706 [2024-11-02 11:29:47.058268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:17:46.706 [2024-11-02 11:29:47.058298] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:46.706 [2024-11-02 11:29:47.058314] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:46.706 [2024-11-02 11:29:47.058327] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:46.706 [2024-11-02 11:29:47.058335] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:46.707 [2024-11-02 11:29:47.058340] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:46.707 [2024-11-02 11:29:47.058350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:46.707 [2024-11-02 11:29:47.066268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:17:46.707 [2024-11-02 11:29:47.066290] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:46.707 [2024-11-02 11:29:47.066302] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:17:46.707 [2024-11-02 11:29:47.066317] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set 
supported features (timeout 30000 ms) 00:17:46.707 [2024-11-02 11:29:47.066327] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:17:46.707 [2024-11-02 11:29:47.066336] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:46.707 [2024-11-02 11:29:47.066344] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:17:46.707 [2024-11-02 11:29:47.066352] nvme_ctrlr.c:3111:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:17:46.707 [2024-11-02 11:29:47.066360] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:17:46.707 [2024-11-02 11:29:47.066368] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:17:46.707 [2024-11-02 11:29:47.066393] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:17:46.707 [2024-11-02 11:29:47.078268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:17:46.707 [2024-11-02 11:29:47.078296] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:17:46.707 [2024-11-02 11:29:47.086269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:17:46.707 [2024-11-02 11:29:47.086293] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:17:46.707 [2024-11-02 11:29:47.094269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:17:46.707 [2024-11-02 11:29:47.094293] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:46.707 [2024-11-02 11:29:47.102273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:17:46.707 [2024-11-02 11:29:47.102308] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:17:46.707 [2024-11-02 11:29:47.102320] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:17:46.707 [2024-11-02 11:29:47.102326] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:17:46.707 [2024-11-02 11:29:47.102332] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:17:46.707 [2024-11-02 11:29:47.102338] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:17:46.707 [2024-11-02 11:29:47.102349] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:17:46.707 [2024-11-02 11:29:47.102361] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:17:46.707 
[2024-11-02 11:29:47.102369] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:17:46.707 [2024-11-02 11:29:47.102375] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:46.707 [2024-11-02 11:29:47.102388] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:17:46.707 [2024-11-02 11:29:47.102399] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:17:46.707 [2024-11-02 11:29:47.102407] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:46.707 [2024-11-02 11:29:47.102413] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:46.707 [2024-11-02 11:29:47.102422] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:46.707 [2024-11-02 11:29:47.102438] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:17:46.707 [2024-11-02 11:29:47.102447] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:17:46.707 [2024-11-02 11:29:47.102453] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:46.707 [2024-11-02 11:29:47.102462] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:17:46.965 [2024-11-02 11:29:47.110270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:17:46.965 [2024-11-02 11:29:47.110300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:17:46.965 [2024-11-02 11:29:47.110323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:17:46.965 [2024-11-02 11:29:47.110362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:17:46.965 ===================================================== 00:17:46.965 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:46.965 ===================================================== 00:17:46.965 Controller Capabilities/Features 00:17:46.965 ================================ 00:17:46.965 Vendor ID: 4e58 00:17:46.965 Subsystem Vendor ID: 4e58 00:17:46.965 Serial Number: SPDK2 00:17:46.965 Model Number: SPDK bdev Controller 00:17:46.965 Firmware Version: 25.01 00:17:46.965 Recommended Arb Burst: 6 00:17:46.965 IEEE OUI Identifier: 8d 6b 50 00:17:46.965 Multi-path I/O 00:17:46.965 May have multiple subsystem ports: Yes 00:17:46.965 May have multiple controllers: Yes 00:17:46.965 Associated with SR-IOV VF: No 00:17:46.965 Max Data Transfer Size: 131072 00:17:46.965 Max Number of Namespaces: 32 00:17:46.965 Max Number of I/O Queues: 127 00:17:46.965 NVMe Specification Version (VS): 1.3 00:17:46.965 NVMe Specification Version (Identify): 1.3 00:17:46.965 Maximum Queue Entries: 256 00:17:46.965 Contiguous Queues Required: Yes 00:17:46.965 Arbitration Mechanisms Supported 00:17:46.965 Weighted Round Robin: Not Supported 00:17:46.965 Vendor Specific: Not 
Supported 00:17:46.965 Reset Timeout: 15000 ms 00:17:46.965 Doorbell Stride: 4 bytes 00:17:46.965 NVM Subsystem Reset: Not Supported 00:17:46.965 Command Sets Supported 00:17:46.965 NVM Command Set: Supported 00:17:46.965 Boot Partition: Not Supported 00:17:46.965 Memory Page Size Minimum: 4096 bytes 00:17:46.965 Memory Page Size Maximum: 4096 bytes 00:17:46.965 Persistent Memory Region: Not Supported 00:17:46.965 Optional Asynchronous Events Supported 00:17:46.965 Namespace Attribute Notices: Supported 00:17:46.965 Firmware Activation Notices: Not Supported 00:17:46.965 ANA Change Notices: Not Supported 00:17:46.965 PLE Aggregate Log Change Notices: Not Supported 00:17:46.965 LBA Status Info Alert Notices: Not Supported 00:17:46.965 EGE Aggregate Log Change Notices: Not Supported 00:17:46.965 Normal NVM Subsystem Shutdown event: Not Supported 00:17:46.965 Zone Descriptor Change Notices: Not Supported 00:17:46.965 Discovery Log Change Notices: Not Supported 00:17:46.965 Controller Attributes 00:17:46.965 128-bit Host Identifier: Supported 00:17:46.965 Non-Operational Permissive Mode: Not Supported 00:17:46.965 NVM Sets: Not Supported 00:17:46.965 Read Recovery Levels: Not Supported 00:17:46.965 Endurance Groups: Not Supported 00:17:46.966 Predictable Latency Mode: Not Supported 00:17:46.966 Traffic Based Keep ALive: Not Supported 00:17:46.966 Namespace Granularity: Not Supported 00:17:46.966 SQ Associations: Not Supported 00:17:46.966 UUID List: Not Supported 00:17:46.966 Multi-Domain Subsystem: Not Supported 00:17:46.966 Fixed Capacity Management: Not Supported 00:17:46.966 Variable Capacity Management: Not Supported 00:17:46.966 Delete Endurance Group: Not Supported 00:17:46.966 Delete NVM Set: Not Supported 00:17:46.966 Extended LBA Formats Supported: Not Supported 00:17:46.966 Flexible Data Placement Supported: Not Supported 00:17:46.966 00:17:46.966 Controller Memory Buffer Support 00:17:46.966 ================================ 00:17:46.966 Supported: No 00:17:46.966 00:17:46.966 Persistent Memory Region Support 00:17:46.966 ================================ 00:17:46.966 Supported: No 00:17:46.966 00:17:46.966 Admin Command Set Attributes 00:17:46.966 ============================ 00:17:46.966 Security Send/Receive: Not Supported 00:17:46.966 Format NVM: Not Supported 00:17:46.966 Firmware Activate/Download: Not Supported 00:17:46.966 Namespace Management: Not Supported 00:17:46.966 Device Self-Test: Not Supported 00:17:46.966 Directives: Not Supported 00:17:46.966 NVMe-MI: Not Supported 00:17:46.966 Virtualization Management: Not Supported 00:17:46.966 Doorbell Buffer Config: Not Supported 00:17:46.966 Get LBA Status Capability: Not Supported 00:17:46.966 Command & Feature Lockdown Capability: Not Supported 00:17:46.966 Abort Command Limit: 4 00:17:46.966 Async Event Request Limit: 4 00:17:46.966 Number of Firmware Slots: N/A 00:17:46.966 Firmware Slot 1 Read-Only: N/A 00:17:46.966 Firmware Activation Without Reset: N/A 00:17:46.966 Multiple Update Detection Support: N/A 00:17:46.966 Firmware Update Granularity: No Information Provided 00:17:46.966 Per-Namespace SMART Log: No 00:17:46.966 Asymmetric Namespace Access Log Page: Not Supported 00:17:46.966 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:17:46.966 Command Effects Log Page: Supported 00:17:46.966 Get Log Page Extended Data: Supported 00:17:46.966 Telemetry Log Pages: Not Supported 00:17:46.966 Persistent Event Log Pages: Not Supported 00:17:46.966 Supported Log Pages Log Page: May Support 00:17:46.966 Commands Supported & 
Effects Log Page: Not Supported 00:17:46.966 Feature Identifiers & Effects Log Page:May Support 00:17:46.966 NVMe-MI Commands & Effects Log Page: May Support 00:17:46.966 Data Area 4 for Telemetry Log: Not Supported 00:17:46.966 Error Log Page Entries Supported: 128 00:17:46.966 Keep Alive: Supported 00:17:46.966 Keep Alive Granularity: 10000 ms 00:17:46.966 00:17:46.966 NVM Command Set Attributes 00:17:46.966 ========================== 00:17:46.966 Submission Queue Entry Size 00:17:46.966 Max: 64 00:17:46.966 Min: 64 00:17:46.966 Completion Queue Entry Size 00:17:46.966 Max: 16 00:17:46.966 Min: 16 00:17:46.966 Number of Namespaces: 32 00:17:46.966 Compare Command: Supported 00:17:46.966 Write Uncorrectable Command: Not Supported 00:17:46.966 Dataset Management Command: Supported 00:17:46.966 Write Zeroes Command: Supported 00:17:46.966 Set Features Save Field: Not Supported 00:17:46.966 Reservations: Not Supported 00:17:46.966 Timestamp: Not Supported 00:17:46.966 Copy: Supported 00:17:46.966 Volatile Write Cache: Present 00:17:46.966 Atomic Write Unit (Normal): 1 00:17:46.966 Atomic Write Unit (PFail): 1 00:17:46.966 Atomic Compare & Write Unit: 1 00:17:46.966 Fused Compare & Write: Supported 00:17:46.966 Scatter-Gather List 00:17:46.966 SGL Command Set: Supported (Dword aligned) 00:17:46.966 SGL Keyed: Not Supported 00:17:46.966 SGL Bit Bucket Descriptor: Not Supported 00:17:46.966 SGL Metadata Pointer: Not Supported 00:17:46.966 Oversized SGL: Not Supported 00:17:46.966 SGL Metadata Address: Not Supported 00:17:46.966 SGL Offset: Not Supported 00:17:46.966 Transport SGL Data Block: Not Supported 00:17:46.966 Replay Protected Memory Block: Not Supported 00:17:46.966 00:17:46.966 Firmware Slot Information 00:17:46.966 ========================= 00:17:46.966 Active slot: 1 00:17:46.966 Slot 1 Firmware Revision: 25.01 00:17:46.966 00:17:46.966 00:17:46.966 Commands Supported and Effects 00:17:46.966 ============================== 00:17:46.966 Admin Commands 00:17:46.966 -------------- 00:17:46.966 Get Log Page (02h): Supported 00:17:46.966 Identify (06h): Supported 00:17:46.966 Abort (08h): Supported 00:17:46.966 Set Features (09h): Supported 00:17:46.966 Get Features (0Ah): Supported 00:17:46.966 Asynchronous Event Request (0Ch): Supported 00:17:46.966 Keep Alive (18h): Supported 00:17:46.966 I/O Commands 00:17:46.966 ------------ 00:17:46.966 Flush (00h): Supported LBA-Change 00:17:46.966 Write (01h): Supported LBA-Change 00:17:46.966 Read (02h): Supported 00:17:46.966 Compare (05h): Supported 00:17:46.966 Write Zeroes (08h): Supported LBA-Change 00:17:46.966 Dataset Management (09h): Supported LBA-Change 00:17:46.966 Copy (19h): Supported LBA-Change 00:17:46.966 00:17:46.966 Error Log 00:17:46.966 ========= 00:17:46.966 00:17:46.966 Arbitration 00:17:46.966 =========== 00:17:46.966 Arbitration Burst: 1 00:17:46.966 00:17:46.966 Power Management 00:17:46.966 ================ 00:17:46.966 Number of Power States: 1 00:17:46.966 Current Power State: Power State #0 00:17:46.966 Power State #0: 00:17:46.966 Max Power: 0.00 W 00:17:46.966 Non-Operational State: Operational 00:17:46.966 Entry Latency: Not Reported 00:17:46.966 Exit Latency: Not Reported 00:17:46.966 Relative Read Throughput: 0 00:17:46.966 Relative Read Latency: 0 00:17:46.966 Relative Write Throughput: 0 00:17:46.966 Relative Write Latency: 0 00:17:46.966 Idle Power: Not Reported 00:17:46.966 Active Power: Not Reported 00:17:46.966 Non-Operational Permissive Mode: Not Supported 00:17:46.966 00:17:46.966 Health Information 
00:17:46.966 ================== Critical Warnings: 00:17:46.966 Available Spare Space: OK 00:17:46.966 Temperature: OK 00:17:46.966 Device Reliability: OK 00:17:46.966 Read Only: No 00:17:46.966 Volatile Memory Backup: OK 00:17:46.966 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:46.966 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:46.966 Available Spare: 0% 00:17:46.966 Available Spare Threshold: 0% 00:17:46.966 [2024-11-02 11:29:47.110498] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:17:46.966 [2024-11-02 11:29:47.118282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:17:46.966 [2024-11-02 11:29:47.118333] nvme_ctrlr.c:4363:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:17:46.966 [2024-11-02 11:29:47.118350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.966 [2024-11-02 11:29:47.118365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.966 [2024-11-02 11:29:47.118375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.966 [2024-11-02 11:29:47.118384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.966 [2024-11-02 11:29:47.118468] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:17:46.966 [2024-11-02 11:29:47.118489] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:17:46.966 [2024-11-02 11:29:47.119470] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:46.966 [2024-11-02 11:29:47.119558] nvme_ctrlr.c:1124:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:17:46.966 [2024-11-02 11:29:47.119588] nvme_ctrlr.c:1127:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:17:46.966 [2024-11-02 11:29:47.120481] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:17:46.966 [2024-11-02 11:29:47.120506] nvme_ctrlr.c:1246:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:17:46.966 [2024-11-02 11:29:47.120571] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:17:46.966 [2024-11-02 11:29:47.121858] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:46.966 Life Percentage Used: 0% 00:17:46.966 Data Units Read: 0 00:17:46.966 Data Units Written: 0 00:17:46.966 Host Read Commands: 0 00:17:46.966 Host Write Commands: 0 00:17:46.966 Controller Busy Time: 0 minutes 00:17:46.966 Power Cycles: 0 00:17:46.966 Power On Hours: 0 hours 00:17:46.966 Unsafe Shutdowns: 0 00:17:46.966 Unrecoverable Media Errors: 0 00:17:46.966 Lifetime Error Log Entries: 0 00:17:46.966 Warning Temperature 
Time: 0 minutes 00:17:46.966 Critical Temperature Time: 0 minutes 00:17:46.966 00:17:46.966 Number of Queues 00:17:46.966 ================ 00:17:46.966 Number of I/O Submission Queues: 127 00:17:46.966 Number of I/O Completion Queues: 127 00:17:46.966 00:17:46.966 Active Namespaces 00:17:46.966 ================= 00:17:46.966 Namespace ID:1 00:17:46.967 Error Recovery Timeout: Unlimited 00:17:46.967 Command Set Identifier: NVM (00h) 00:17:46.967 Deallocate: Supported 00:17:46.967 Deallocated/Unwritten Error: Not Supported 00:17:46.967 Deallocated Read Value: Unknown 00:17:46.967 Deallocate in Write Zeroes: Not Supported 00:17:46.967 Deallocated Guard Field: 0xFFFF 00:17:46.967 Flush: Supported 00:17:46.967 Reservation: Supported 00:17:46.967 Namespace Sharing Capabilities: Multiple Controllers 00:17:46.967 Size (in LBAs): 131072 (0GiB) 00:17:46.967 Capacity (in LBAs): 131072 (0GiB) 00:17:46.967 Utilization (in LBAs): 131072 (0GiB) 00:17:46.967 NGUID: 36D3F05480264045A71E625DF7E74EC0 00:17:46.967 UUID: 36d3f054-8026-4045-a71e-625df7e74ec0 00:17:46.967 Thin Provisioning: Not Supported 00:17:46.967 Per-NS Atomic Units: Yes 00:17:46.967 Atomic Boundary Size (Normal): 0 00:17:46.967 Atomic Boundary Size (PFail): 0 00:17:46.967 Atomic Boundary Offset: 0 00:17:46.967 Maximum Single Source Range Length: 65535 00:17:46.967 Maximum Copy Length: 65535 00:17:46.967 Maximum Source Range Count: 1 00:17:46.967 NGUID/EUI64 Never Reused: No 00:17:46.967 Namespace Write Protected: No 00:17:46.967 Number of LBA Formats: 1 00:17:46.967 Current LBA Format: LBA Format #00 00:17:46.967 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:46.967 00:17:46.967 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:17:47.224 [2024-11-02 11:29:47.372955] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:52.486 Initializing NVMe Controllers 00:17:52.486 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:52.486 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:17:52.486 Initialization complete. Launching workers. 
00:17:52.486 ======================================================== 00:17:52.486 Latency(us) 00:17:52.486 Device Information : IOPS MiB/s Average min max 00:17:52.486 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 33104.99 129.32 3865.92 1175.60 7808.14 00:17:52.486 ======================================================== 00:17:52.486 Total : 33104.99 129.32 3865.92 1175.60 7808.14 00:17:52.486 00:17:52.486 [2024-11-02 11:29:52.477615] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:52.486 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:17:52.486 [2024-11-02 11:29:52.728340] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:57.749 Initializing NVMe Controllers 00:17:57.749 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:57.749 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:17:57.749 Initialization complete. Launching workers. 00:17:57.749 ======================================================== 00:17:57.749 Latency(us) 00:17:57.749 Device Information : IOPS MiB/s Average min max 00:17:57.749 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 31771.19 124.11 4030.50 1211.64 7535.41 00:17:57.749 ======================================================== 00:17:57.749 Total : 31771.19 124.11 4030.50 1211.64 7535.41 00:17:57.749 00:17:57.749 [2024-11-02 11:29:57.750882] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:57.749 11:29:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:17:57.749 [2024-11-02 11:29:57.978975] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:03.011 [2024-11-02 11:30:03.123401] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:03.011 Initializing NVMe Controllers 00:18:03.011 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:03.011 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:03.011 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:18:03.011 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:18:03.011 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:18:03.011 Initialization complete. Launching workers. 
00:18:03.011 Starting thread on core 2 00:18:03.011 Starting thread on core 3 00:18:03.011 Starting thread on core 1 00:18:03.011 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:18:03.270 [2024-11-02 11:30:03.445682] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:06.553 [2024-11-02 11:30:06.518077] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:06.553 Initializing NVMe Controllers 00:18:06.553 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:06.553 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:06.553 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:18:06.553 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:18:06.553 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:18:06.553 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:18:06.553 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:18:06.553 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:18:06.553 Initialization complete. Launching workers. 00:18:06.553 Starting thread on core 1 with urgent priority queue 00:18:06.553 Starting thread on core 2 with urgent priority queue 00:18:06.553 Starting thread on core 3 with urgent priority queue 00:18:06.553 Starting thread on core 0 with urgent priority queue 00:18:06.553 SPDK bdev Controller (SPDK2 ) core 0: 5327.67 IO/s 18.77 secs/100000 ios 00:18:06.553 SPDK bdev Controller (SPDK2 ) core 1: 6199.00 IO/s 16.13 secs/100000 ios 00:18:06.553 SPDK bdev Controller (SPDK2 ) core 2: 5361.33 IO/s 18.65 secs/100000 ios 00:18:06.553 SPDK bdev Controller (SPDK2 ) core 3: 5977.33 IO/s 16.73 secs/100000 ios 00:18:06.553 ======================================================== 00:18:06.553 00:18:06.553 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:18:06.553 [2024-11-02 11:30:06.830619] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:06.553 Initializing NVMe Controllers 00:18:06.553 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:06.553 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:06.553 Namespace ID: 1 size: 0GB 00:18:06.553 Initialization complete. 00:18:06.553 INFO: using host memory buffer for IO 00:18:06.553 Hello world! 
00:18:06.553 [2024-11-02 11:30:06.839732] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:06.553 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:18:06.809 [2024-11-02 11:30:07.150015] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:08.180 Initializing NVMe Controllers 00:18:08.180 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:08.180 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:08.180 Initialization complete. Launching workers. 00:18:08.180 submit (in ns) avg, min, max = 8056.9, 3507.8, 5012976.7 00:18:08.180 complete (in ns) avg, min, max = 25401.3, 2060.0, 4026348.9 00:18:08.180 00:18:08.180 Submit histogram 00:18:08.180 ================ 00:18:08.180 Range in us Cumulative Count 00:18:08.180 3.484 - 3.508: 0.0077% ( 1) 00:18:08.180 3.508 - 3.532: 0.2314% ( 29) 00:18:08.180 3.532 - 3.556: 1.1571% ( 120) 00:18:08.180 3.556 - 3.579: 3.2243% ( 268) 00:18:08.180 3.579 - 3.603: 7.2046% ( 516) 00:18:08.180 3.603 - 3.627: 14.6405% ( 964) 00:18:08.180 3.627 - 3.650: 23.4573% ( 1143) 00:18:08.180 3.650 - 3.674: 31.7417% ( 1074) 00:18:08.180 3.674 - 3.698: 38.1595% ( 832) 00:18:08.180 3.698 - 3.721: 46.1817% ( 1040) 00:18:08.180 3.721 - 3.745: 52.3681% ( 802) 00:18:08.180 3.745 - 3.769: 58.9710% ( 856) 00:18:08.180 3.769 - 3.793: 63.5683% ( 596) 00:18:08.180 3.793 - 3.816: 67.3172% ( 486) 00:18:08.180 3.816 - 3.840: 70.9195% ( 467) 00:18:08.180 3.840 - 3.864: 74.5063% ( 465) 00:18:08.180 3.864 - 3.887: 78.3786% ( 502) 00:18:08.180 3.887 - 3.911: 81.3792% ( 389) 00:18:08.180 3.911 - 3.935: 84.0481% ( 346) 00:18:08.180 3.935 - 3.959: 86.1925% ( 278) 00:18:08.180 3.959 - 3.982: 88.0592% ( 242) 00:18:08.180 3.982 - 4.006: 89.8411% ( 231) 00:18:08.180 4.006 - 4.030: 91.1216% ( 166) 00:18:08.180 4.030 - 4.053: 92.1629% ( 135) 00:18:08.180 4.053 - 4.077: 93.1117% ( 123) 00:18:08.180 4.077 - 4.101: 93.9911% ( 114) 00:18:08.180 4.101 - 4.124: 94.6004% ( 79) 00:18:08.180 4.124 - 4.148: 94.9861% ( 50) 00:18:08.180 4.148 - 4.172: 95.2330% ( 32) 00:18:08.180 4.172 - 4.196: 95.4952% ( 34) 00:18:08.180 4.196 - 4.219: 95.6649% ( 22) 00:18:08.180 4.219 - 4.243: 95.7498% ( 11) 00:18:08.180 4.243 - 4.267: 95.8038% ( 7) 00:18:08.180 4.267 - 4.290: 95.9580% ( 20) 00:18:08.180 4.290 - 4.314: 96.0352% ( 10) 00:18:08.180 4.314 - 4.338: 96.1046% ( 9) 00:18:08.180 4.338 - 4.361: 96.1740% ( 9) 00:18:08.180 4.361 - 4.385: 96.2203% ( 6) 00:18:08.180 4.385 - 4.409: 96.2434% ( 3) 00:18:08.180 4.409 - 4.433: 96.2589% ( 2) 00:18:08.180 4.433 - 4.456: 96.2743% ( 2) 00:18:08.180 4.456 - 4.480: 96.3283% ( 7) 00:18:08.180 4.480 - 4.504: 96.3669% ( 5) 00:18:08.180 4.504 - 4.527: 96.3900% ( 3) 00:18:08.180 4.527 - 4.551: 96.4131% ( 3) 00:18:08.180 4.551 - 4.575: 96.4363% ( 3) 00:18:08.180 4.575 - 4.599: 96.4517% ( 2) 00:18:08.180 4.599 - 4.622: 96.4671% ( 2) 00:18:08.180 4.622 - 4.646: 96.4980% ( 4) 00:18:08.180 4.646 - 4.670: 96.5134% ( 2) 00:18:08.180 4.670 - 4.693: 96.5288% ( 2) 00:18:08.180 4.693 - 4.717: 96.5366% ( 1) 00:18:08.180 4.717 - 4.741: 96.5520% ( 2) 00:18:08.180 4.741 - 4.764: 96.5828% ( 4) 00:18:08.180 4.764 - 4.788: 96.6214% ( 5) 00:18:08.180 4.788 - 4.812: 96.6368% ( 2) 00:18:08.180 4.812 - 4.836: 96.6677% ( 4) 00:18:08.180 4.836 - 
4.859: 96.7140% ( 6) 00:18:08.180 4.859 - 4.883: 96.7603% ( 6) 00:18:08.180 4.883 - 4.907: 96.8065% ( 6) 00:18:08.180 4.907 - 4.930: 96.8374% ( 4) 00:18:08.180 4.930 - 4.954: 96.8914% ( 7) 00:18:08.180 4.954 - 4.978: 96.9300% ( 5) 00:18:08.180 4.978 - 5.001: 96.9608% ( 4) 00:18:08.180 5.001 - 5.025: 97.0071% ( 6) 00:18:08.180 5.025 - 5.049: 97.0457% ( 5) 00:18:08.180 5.049 - 5.073: 97.0842% ( 5) 00:18:08.180 5.073 - 5.096: 97.1382% ( 7) 00:18:08.180 5.096 - 5.120: 97.1614% ( 3) 00:18:08.180 5.120 - 5.144: 97.2077% ( 6) 00:18:08.180 5.144 - 5.167: 97.2616% ( 7) 00:18:08.180 5.167 - 5.191: 97.2771% ( 2) 00:18:08.180 5.191 - 5.215: 97.3234% ( 6) 00:18:08.180 5.215 - 5.239: 97.3542% ( 4) 00:18:08.181 5.239 - 5.262: 97.3928% ( 5) 00:18:08.181 5.262 - 5.286: 97.4082% ( 2) 00:18:08.181 5.286 - 5.310: 97.4468% ( 5) 00:18:08.181 5.310 - 5.333: 97.4699% ( 3) 00:18:08.181 5.333 - 5.357: 97.5162% ( 6) 00:18:08.181 5.357 - 5.381: 97.5471% ( 4) 00:18:08.181 5.381 - 5.404: 97.5702% ( 3) 00:18:08.181 5.404 - 5.428: 97.5856% ( 2) 00:18:08.181 5.428 - 5.452: 97.5933% ( 1) 00:18:08.181 5.452 - 5.476: 97.6088% ( 2) 00:18:08.181 5.476 - 5.499: 97.6165% ( 1) 00:18:08.181 5.499 - 5.523: 97.6396% ( 3) 00:18:08.181 5.547 - 5.570: 97.6473% ( 1) 00:18:08.181 5.594 - 5.618: 97.6782% ( 4) 00:18:08.181 5.618 - 5.641: 97.6859% ( 1) 00:18:08.181 5.641 - 5.665: 97.6936% ( 1) 00:18:08.181 5.665 - 5.689: 97.7090% ( 2) 00:18:08.181 5.689 - 5.713: 97.7322% ( 3) 00:18:08.181 5.736 - 5.760: 97.7476% ( 2) 00:18:08.181 5.760 - 5.784: 97.7553% ( 1) 00:18:08.181 5.784 - 5.807: 97.7630% ( 1) 00:18:08.181 5.831 - 5.855: 97.7707% ( 1) 00:18:08.181 5.855 - 5.879: 97.7785% ( 1) 00:18:08.181 5.879 - 5.902: 97.7939% ( 2) 00:18:08.181 5.926 - 5.950: 97.8016% ( 1) 00:18:08.181 5.973 - 5.997: 97.8093% ( 1) 00:18:08.181 6.021 - 6.044: 97.8170% ( 1) 00:18:08.181 6.044 - 6.068: 97.8325% ( 2) 00:18:08.181 6.068 - 6.116: 97.8479% ( 2) 00:18:08.181 6.163 - 6.210: 97.8710% ( 3) 00:18:08.181 6.210 - 6.258: 97.8787% ( 1) 00:18:08.181 6.258 - 6.305: 97.8865% ( 1) 00:18:08.181 6.400 - 6.447: 97.8942% ( 1) 00:18:08.181 6.590 - 6.637: 97.9019% ( 1) 00:18:08.181 6.684 - 6.732: 97.9250% ( 3) 00:18:08.181 6.732 - 6.779: 97.9327% ( 1) 00:18:08.181 6.779 - 6.827: 97.9405% ( 1) 00:18:08.181 6.921 - 6.969: 97.9482% ( 1) 00:18:08.181 7.064 - 7.111: 97.9636% ( 2) 00:18:08.181 7.159 - 7.206: 97.9713% ( 1) 00:18:08.181 7.206 - 7.253: 97.9790% ( 1) 00:18:08.181 7.253 - 7.301: 97.9867% ( 1) 00:18:08.181 7.348 - 7.396: 97.9944% ( 1) 00:18:08.181 7.396 - 7.443: 98.0176% ( 3) 00:18:08.181 7.443 - 7.490: 98.0330% ( 2) 00:18:08.181 7.490 - 7.538: 98.0562% ( 3) 00:18:08.181 7.538 - 7.585: 98.0716% ( 2) 00:18:08.181 7.680 - 7.727: 98.0793% ( 1) 00:18:08.181 7.727 - 7.775: 98.1024% ( 3) 00:18:08.181 7.775 - 7.822: 98.1102% ( 1) 00:18:08.181 7.822 - 7.870: 98.1179% ( 1) 00:18:08.181 7.917 - 7.964: 98.1333% ( 2) 00:18:08.181 7.964 - 8.012: 98.1564% ( 3) 00:18:08.181 8.012 - 8.059: 98.1641% ( 1) 00:18:08.181 8.107 - 8.154: 98.1719% ( 1) 00:18:08.181 8.154 - 8.201: 98.2027% ( 4) 00:18:08.181 8.201 - 8.249: 98.2259% ( 3) 00:18:08.181 8.249 - 8.296: 98.2490% ( 3) 00:18:08.181 8.439 - 8.486: 98.2644% ( 2) 00:18:08.181 8.486 - 8.533: 98.2721% ( 1) 00:18:08.181 8.533 - 8.581: 98.3030% ( 4) 00:18:08.181 8.581 - 8.628: 98.3184% ( 2) 00:18:08.181 8.628 - 8.676: 98.3338% ( 2) 00:18:08.181 8.770 - 8.818: 98.3493% ( 2) 00:18:08.181 8.818 - 8.865: 98.3570% ( 1) 00:18:08.181 8.865 - 8.913: 98.3647% ( 1) 00:18:08.181 8.913 - 8.960: 98.3878% ( 3) 00:18:08.181 8.960 - 9.007: 98.3956% ( 1) 
00:18:08.181 9.007 - 9.055: 98.4033% ( 1) 00:18:08.181 9.055 - 9.102: 98.4110% ( 1) 00:18:08.181 9.150 - 9.197: 98.4264% ( 2) 00:18:08.181 9.244 - 9.292: 98.4341% ( 1) 00:18:08.181 9.339 - 9.387: 98.4418% ( 1) 00:18:08.181 9.387 - 9.434: 98.4496% ( 1) 00:18:08.181 9.434 - 9.481: 98.4573% ( 1) 00:18:08.181 9.529 - 9.576: 98.4650% ( 1) 00:18:08.181 9.576 - 9.624: 98.4804% ( 2) 00:18:08.181 9.624 - 9.671: 98.5113% ( 4) 00:18:08.181 9.671 - 9.719: 98.5190% ( 1) 00:18:08.181 9.719 - 9.766: 98.5267% ( 1) 00:18:08.181 9.766 - 9.813: 98.5421% ( 2) 00:18:08.181 9.813 - 9.861: 98.5498% ( 1) 00:18:08.181 9.861 - 9.908: 98.5653% ( 2) 00:18:08.181 9.908 - 9.956: 98.5807% ( 2) 00:18:08.181 10.193 - 10.240: 98.5884% ( 1) 00:18:08.181 10.240 - 10.287: 98.5961% ( 1) 00:18:08.181 10.382 - 10.430: 98.6038% ( 1) 00:18:08.181 10.430 - 10.477: 98.6115% ( 1) 00:18:08.181 10.477 - 10.524: 98.6270% ( 2) 00:18:08.181 10.667 - 10.714: 98.6424% ( 2) 00:18:08.181 10.761 - 10.809: 98.6501% ( 1) 00:18:08.181 10.809 - 10.856: 98.6578% ( 1) 00:18:08.181 10.856 - 10.904: 98.6732% ( 2) 00:18:08.181 10.951 - 10.999: 98.6810% ( 1) 00:18:08.181 10.999 - 11.046: 98.6887% ( 1) 00:18:08.181 11.046 - 11.093: 98.6964% ( 1) 00:18:08.181 11.093 - 11.141: 98.7118% ( 2) 00:18:08.181 11.141 - 11.188: 98.7350% ( 3) 00:18:08.181 11.330 - 11.378: 98.7427% ( 1) 00:18:08.181 11.378 - 11.425: 98.7504% ( 1) 00:18:08.181 11.473 - 11.520: 98.7658% ( 2) 00:18:08.181 11.662 - 11.710: 98.7735% ( 1) 00:18:08.181 11.757 - 11.804: 98.7812% ( 1) 00:18:08.181 11.804 - 11.852: 98.7890% ( 1) 00:18:08.181 11.947 - 11.994: 98.8121% ( 3) 00:18:08.181 11.994 - 12.041: 98.8198% ( 1) 00:18:08.181 12.041 - 12.089: 98.8275% ( 1) 00:18:08.181 12.136 - 12.231: 98.8352% ( 1) 00:18:08.181 12.231 - 12.326: 98.8584% ( 3) 00:18:08.181 12.326 - 12.421: 98.8661% ( 1) 00:18:08.181 12.516 - 12.610: 98.8815% ( 2) 00:18:08.181 12.610 - 12.705: 98.8969% ( 2) 00:18:08.181 12.705 - 12.800: 98.9124% ( 2) 00:18:08.181 12.800 - 12.895: 98.9201% ( 1) 00:18:08.181 12.895 - 12.990: 98.9278% ( 1) 00:18:08.181 13.084 - 13.179: 98.9432% ( 2) 00:18:08.181 13.274 - 13.369: 98.9509% ( 1) 00:18:08.181 13.464 - 13.559: 98.9587% ( 1) 00:18:08.181 13.559 - 13.653: 98.9664% ( 1) 00:18:08.181 13.653 - 13.748: 98.9895% ( 3) 00:18:08.181 13.748 - 13.843: 99.0127% ( 3) 00:18:08.181 13.843 - 13.938: 99.0204% ( 1) 00:18:08.181 13.938 - 14.033: 99.0512% ( 4) 00:18:08.181 14.033 - 14.127: 99.0589% ( 1) 00:18:08.181 14.127 - 14.222: 99.0744% ( 2) 00:18:08.181 14.222 - 14.317: 99.0898% ( 2) 00:18:08.181 14.317 - 14.412: 99.0975% ( 1) 00:18:08.181 14.412 - 14.507: 99.1206% ( 3) 00:18:08.181 14.507 - 14.601: 99.1284% ( 1) 00:18:08.181 14.696 - 14.791: 99.1438% ( 2) 00:18:08.181 14.791 - 14.886: 99.1515% ( 1) 00:18:08.181 15.170 - 15.265: 99.1592% ( 1) 00:18:08.181 15.550 - 15.644: 99.1669% ( 1) 00:18:08.181 16.119 - 16.213: 99.1746% ( 1) 00:18:08.181 17.161 - 17.256: 99.1824% ( 1) 00:18:08.181 17.256 - 17.351: 99.1978% ( 2) 00:18:08.181 17.351 - 17.446: 99.2132% ( 2) 00:18:08.181 17.541 - 17.636: 99.2286% ( 2) 00:18:08.181 17.636 - 17.730: 99.2518% ( 3) 00:18:08.181 17.730 - 17.825: 99.2672% ( 2) 00:18:08.181 17.825 - 17.920: 99.3366% ( 9) 00:18:08.181 17.920 - 18.015: 99.3675% ( 4) 00:18:08.181 18.015 - 18.110: 99.3906% ( 3) 00:18:08.181 18.110 - 18.204: 99.4292% ( 5) 00:18:08.181 18.204 - 18.299: 99.4678% ( 5) 00:18:08.181 18.299 - 18.394: 99.5063% ( 5) 00:18:08.181 18.394 - 18.489: 99.5757% ( 9) 00:18:08.181 18.489 - 18.584: 99.6375% ( 8) 00:18:08.181 18.584 - 18.679: 99.6915% ( 7) 00:18:08.181 18.679 - 
18.773: 99.7069% ( 2) 00:18:08.181 18.773 - 18.868: 99.7454% ( 5) 00:18:08.181 18.868 - 18.963: 99.7686% ( 3) 00:18:08.181 18.963 - 19.058: 99.8072% ( 5) 00:18:08.181 19.058 - 19.153: 99.8149% ( 1) 00:18:08.181 19.153 - 19.247: 99.8226% ( 1) 00:18:08.181 19.247 - 19.342: 99.8303% ( 1) 00:18:08.181 19.342 - 19.437: 99.8457% ( 2) 00:18:08.181 19.437 - 19.532: 99.8534% ( 1) 00:18:08.181 19.627 - 19.721: 99.8612% ( 1) 00:18:08.181 19.721 - 19.816: 99.8689% ( 1) 00:18:08.181 20.670 - 20.764: 99.8766% ( 1) 00:18:08.181 25.600 - 25.790: 99.8843% ( 1) 00:18:08.181 28.065 - 28.255: 99.8920% ( 1) 00:18:08.181 28.824 - 29.013: 99.8997% ( 1) 00:18:08.181 3786.524 - 3810.797: 99.9074% ( 1) 00:18:08.181 3980.705 - 4004.978: 99.9537% ( 6) 00:18:08.181 4004.978 - 4029.250: 99.9846% ( 4) 00:18:08.181 4053.523 - 4077.796: 99.9923% ( 1) 00:18:08.181 5000.154 - 5024.427: 100.0000% ( 1) 00:18:08.181 00:18:08.181 Complete histogram 00:18:08.181 ================== 00:18:08.181 Range in us Cumulative Count 00:18:08.181 2.050 - 2.062: 0.0154% ( 2) 00:18:08.181 2.062 - 2.074: 9.3721% ( 1213) 00:18:08.181 2.074 - 2.086: 24.3906% ( 1947) 00:18:08.181 2.086 - 2.098: 25.7097% ( 171) 00:18:08.181 2.098 - 2.110: 41.0521% ( 1989) 00:18:08.181 2.110 - 2.121: 47.4853% ( 834) 00:18:08.181 2.121 - 2.133: 48.9972% ( 196) 00:18:08.181 2.133 - 2.145: 60.1589% ( 1447) 00:18:08.181 2.145 - 2.157: 66.2373% ( 788) 00:18:08.181 2.157 - 2.169: 69.0528% ( 365) 00:18:08.181 2.169 - 2.181: 78.8954% ( 1276) 00:18:08.181 2.181 - 2.193: 81.9500% ( 396) 00:18:08.181 2.193 - 2.204: 82.9065% ( 124) 00:18:08.181 2.204 - 2.216: 86.2311% ( 431) 00:18:08.181 2.216 - 2.228: 89.0774% ( 369) 00:18:08.181 2.228 - 2.240: 89.9645% ( 115) 00:18:08.181 2.240 - 2.252: 92.5255% ( 332) 00:18:08.181 2.252 - 2.264: 93.5977% ( 139) 00:18:08.181 2.264 - 2.276: 93.8985% ( 39) 00:18:08.182 2.276 - 2.287: 94.2302% ( 43) 00:18:08.182 2.287 - 2.299: 94.8010% ( 74) 00:18:08.182 2.299 - 2.311: 95.0787% ( 36) 00:18:08.182 2.311 - 2.323: 95.2021% ( 16) 00:18:08.182 2.323 - 2.335: 95.3178% ( 15) 00:18:08.182 2.335 - 2.347: 95.4258% ( 14) 00:18:08.182 2.347 - 2.359: 95.5029% ( 10) 00:18:08.182 2.359 - 2.370: 95.6958% ( 25) 00:18:08.182 2.370 - 2.382: 95.8732% ( 23) 00:18:08.182 2.382 - 2.394: 96.0583% ( 24) 00:18:08.182 2.394 - 2.406: 96.2126% ( 20) 00:18:08.182 2.406 - 2.418: 96.3437% ( 17) 00:18:08.182 2.418 - 2.430: 96.5443% ( 26) 00:18:08.182 2.430 - 2.441: 96.7757% ( 30) 00:18:08.182 2.441 - 2.453: 96.9454% ( 22) 00:18:08.182 2.453 - 2.465: 97.1228% ( 23) 00:18:08.182 2.465 - 2.477: 97.3002% ( 23) 00:18:08.182 2.477 - 2.489: 97.4082% ( 14) 00:18:08.182 2.489 - 2.501: 97.5316% ( 16) 00:18:08.182 2.501 - 2.513: 97.6242% ( 12) 00:18:08.182 2.513 - 2.524: 97.6936% ( 9) 00:18:08.182 2.524 - 2.536: 97.7707% ( 10) 00:18:08.182 2.536 - 2.548: 97.8325% ( 8) 00:18:08.182 2.548 - 2.560: 97.8787% ( 6) 00:18:08.182 2.560 - 2.572: 97.9559% ( 10) 00:18:08.182 2.572 - 2.584: 98.0330% ( 10) 00:18:08.182 2.584 - 2.596: 98.0793% ( 6) 00:18:08.182 2.607 - 2.619: 98.0870% ( 1) 00:18:08.182 2.619 - 2.631: 98.0947% ( 1) 00:18:08.182 2.631 - 2.643: 98.1024% ( 1) 00:18:08.182 2.643 - 2.655: 98.1102% ( 1) 00:18:08.182 2.655 - 2.667: 98.1179% ( 1) 00:18:08.182 2.667 - 2.679: 98.1256% ( 1) 00:18:08.182 2.726 - 2.738: 98.1487% ( 3) 00:18:08.182 2.750 - 2.761: 98.1641% ( 2) 00:18:08.182 2.761 - 2.773: 98.1719% ( 1) 00:18:08.182 2.797 - 2.809: 98.1796% ( 1) 00:18:08.182 2.809 - 2.821: 98.1950% ( 2) 00:18:08.182 2.833 - 2.844: 98.2027% ( 1) 00:18:08.182 2.844 - 2.856: 98.2104% ( 1) 00:18:08.182 
2.856 - 2.868: 98.2181% ( 1) 00:18:08.182 2.868 - 2.880: 98.2259% ( 1) 00:18:08.182 2.880 - 2.892: 98.2336% ( 1) 00:18:08.182 2.892 - 2.904: 98.2413% ( 1) 00:18:08.182 2.904 - 2.916: 98.2490% ( 1) 00:18:08.182 2.916 - 2.927: 98.2567% ( 1) 00:18:08.182 2.927 - 2.939: 98.2799% ( 3) 00:18:08.182 2.975 - 2.987: 98.2876% ( 1) 00:18:08.182 2.987 - 2.999: 98.2953% ( 1) 00:18:08.182 2.999 - 3.010: 98.3107% ( 2) 00:18:08.182 3.010 - 3.022: 98.3184% ( 1) 00:18:08.182 3.034 - 3.058: 98.3338% ( 2) 00:18:08.182 3.058 - 3.081: 98.3416% ( 1) 00:18:08.182 3.105 - 3.129: 98.3570% ( 2) 00:18:08.182 3.200 - 3.224: 98.3647% ( 1) 00:18:08.182 3.224 - 3.247: 98.3801% ( 2) 00:18:08.182 3.247 - 3.271: 98.3956% ( 2) 00:18:08.182 3.271 - 3.295: 98.4110% ( 2) 00:18:08.182 3.295 - 3.319: 98.4187% ( 1) 00:18:08.182 3.319 - 3.342: 98.4418% ( 3) 00:18:08.182 3.342 - 3.366: 98.4573% ( 2) 00:18:08.182 3.366 - 3.390: 98.4650% ( 1) 00:18:08.182 3.390 - 3.413: 98.4727% ( 1) 00:18:08.182 3.413 - 3.437: 98.4804% ( 1) 00:18:08.182 3.437 - 3.461: 98.4958% ( 2) 00:18:08.182 3.484 - 3.508: 98.5035% ( 1) 00:18:08.182 3.508 - 3.532: 98.5113% ( 1) 00:18:08.182 3.532 - 3.556: 98.5267% ( 2) 00:18:08.182 3.556 - 3.579: 98.5344% ( 1) 00:18:08.182 3.579 - 3.603: 98.5575% ( 3) 00:18:08.182 3.674 - 3.698: 98.5653% ( 1) 00:18:08.182 3.698 - 3.721: 98.5884% ( 3) 00:18:08.182 3.721 - 3.745: 98.6193% ( 4) 00:18:08.182 3.769 - 3.793: 98.6270% ( 1) 00:18:08.182 3.864 - 3.887: 98.6501% ( 3) 00:18:08.182 3.887 - 3.911: 98.6732% ( 3) 00:18:08.182 3.959 - 3.982: 98.6810%[2024-11-02 11:30:08.252271] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:08.182 ( 1) 00:18:08.182 4.030 - 4.053: 98.6887% ( 1) 00:18:08.182 4.053 - 4.077: 98.7118% ( 3) 00:18:08.182 4.101 - 4.124: 98.7195% ( 1) 00:18:08.182 4.148 - 4.172: 98.7272% ( 1) 00:18:08.182 4.196 - 4.219: 98.7350% ( 1) 00:18:08.182 4.219 - 4.243: 98.7504% ( 2) 00:18:08.182 4.290 - 4.314: 98.7581% ( 1) 00:18:08.182 4.504 - 4.527: 98.7658% ( 1) 00:18:08.182 5.096 - 5.120: 98.7735% ( 1) 00:18:08.182 5.523 - 5.547: 98.7967% ( 3) 00:18:08.182 5.570 - 5.594: 98.8044% ( 1) 00:18:08.182 5.665 - 5.689: 98.8121% ( 1) 00:18:08.182 5.736 - 5.760: 98.8198% ( 1) 00:18:08.182 6.021 - 6.044: 98.8275% ( 1) 00:18:08.182 6.305 - 6.353: 98.8352% ( 1) 00:18:08.182 6.495 - 6.542: 98.8429% ( 1) 00:18:08.182 6.590 - 6.637: 98.8507% ( 1) 00:18:08.182 6.969 - 7.016: 98.8584% ( 1) 00:18:08.182 7.159 - 7.206: 98.8661% ( 1) 00:18:08.182 7.443 - 7.490: 98.8738% ( 1) 00:18:08.182 7.964 - 8.012: 98.8815% ( 1) 00:18:08.182 8.344 - 8.391: 98.8892% ( 1) 00:18:08.182 12.089 - 12.136: 98.8969% ( 1) 00:18:08.182 15.739 - 15.834: 98.9124% ( 2) 00:18:08.182 15.834 - 15.929: 98.9278% ( 2) 00:18:08.182 15.929 - 16.024: 98.9432% ( 2) 00:18:08.182 16.024 - 16.119: 98.9895% ( 6) 00:18:08.182 16.119 - 16.213: 98.9972% ( 1) 00:18:08.182 16.213 - 16.308: 99.0204% ( 3) 00:18:08.182 16.308 - 16.403: 99.0589% ( 5) 00:18:08.182 16.403 - 16.498: 99.0821% ( 3) 00:18:08.182 16.498 - 16.593: 99.1206% ( 5) 00:18:08.182 16.593 - 16.687: 99.1669% ( 6) 00:18:08.182 16.687 - 16.782: 99.2055% ( 5) 00:18:08.182 16.782 - 16.877: 99.2518% ( 6) 00:18:08.182 16.972 - 17.067: 99.2595% ( 1) 00:18:08.182 17.161 - 17.256: 99.2672% ( 1) 00:18:08.182 17.256 - 17.351: 99.2826% ( 2) 00:18:08.182 17.351 - 17.446: 99.2981% ( 2) 00:18:08.182 17.446 - 17.541: 99.3058% ( 1) 00:18:08.182 17.730 - 17.825: 99.3135% ( 1) 00:18:08.182 17.825 - 17.920: 99.3366% ( 3) 00:18:08.182 18.015 - 18.110: 99.3443% ( 1) 00:18:08.182 18.110 - 
18.204: 99.3521% ( 1) 00:18:08.182 18.204 - 18.299: 99.3675% ( 2) 00:18:08.182 18.299 - 18.394: 99.3752% ( 1) 00:18:08.182 18.489 - 18.584: 99.3829% ( 1) 00:18:08.182 18.679 - 18.773: 99.3906% ( 1) 00:18:08.182 19.153 - 19.247: 99.3983% ( 1) 00:18:08.182 19.342 - 19.437: 99.4060% ( 1) 00:18:08.182 25.221 - 25.410: 99.4138% ( 1) 00:18:08.182 45.701 - 45.890: 99.4215% ( 1) 00:18:08.182 3980.705 - 4004.978: 99.7763% ( 46) 00:18:08.182 4004.978 - 4029.250: 100.0000% ( 29) 00:18:08.182 00:18:08.182 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:18:08.182 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:18:08.182 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:18:08.182 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:18:08.182 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:08.182 [ 00:18:08.182 { 00:18:08.182 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:08.182 "subtype": "Discovery", 00:18:08.182 "listen_addresses": [], 00:18:08.182 "allow_any_host": true, 00:18:08.182 "hosts": [] 00:18:08.182 }, 00:18:08.182 { 00:18:08.182 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:08.182 "subtype": "NVMe", 00:18:08.182 "listen_addresses": [ 00:18:08.182 { 00:18:08.182 "trtype": "VFIOUSER", 00:18:08.182 "adrfam": "IPv4", 00:18:08.182 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:08.182 "trsvcid": "0" 00:18:08.182 } 00:18:08.182 ], 00:18:08.182 "allow_any_host": true, 00:18:08.182 "hosts": [], 00:18:08.182 "serial_number": "SPDK1", 00:18:08.182 "model_number": "SPDK bdev Controller", 00:18:08.182 "max_namespaces": 32, 00:18:08.182 "min_cntlid": 1, 00:18:08.182 "max_cntlid": 65519, 00:18:08.182 "namespaces": [ 00:18:08.182 { 00:18:08.182 "nsid": 1, 00:18:08.182 "bdev_name": "Malloc1", 00:18:08.182 "name": "Malloc1", 00:18:08.182 "nguid": "E96D8518C83B46DB82F71A199127197E", 00:18:08.182 "uuid": "e96d8518-c83b-46db-82f7-1a199127197e" 00:18:08.182 }, 00:18:08.182 { 00:18:08.182 "nsid": 2, 00:18:08.182 "bdev_name": "Malloc3", 00:18:08.182 "name": "Malloc3", 00:18:08.182 "nguid": "463136F23BB54DFCACF6FDA0892D6936", 00:18:08.182 "uuid": "463136f2-3bb5-4dfc-acf6-fda0892d6936" 00:18:08.182 } 00:18:08.182 ] 00:18:08.182 }, 00:18:08.182 { 00:18:08.182 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:08.182 "subtype": "NVMe", 00:18:08.182 "listen_addresses": [ 00:18:08.182 { 00:18:08.182 "trtype": "VFIOUSER", 00:18:08.182 "adrfam": "IPv4", 00:18:08.182 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:08.182 "trsvcid": "0" 00:18:08.182 } 00:18:08.182 ], 00:18:08.183 "allow_any_host": true, 00:18:08.183 "hosts": [], 00:18:08.183 "serial_number": "SPDK2", 00:18:08.183 "model_number": "SPDK bdev Controller", 00:18:08.183 "max_namespaces": 32, 00:18:08.183 "min_cntlid": 1, 00:18:08.183 "max_cntlid": 65519, 00:18:08.183 "namespaces": [ 00:18:08.183 { 00:18:08.183 "nsid": 1, 00:18:08.183 "bdev_name": "Malloc2", 00:18:08.183 "name": "Malloc2", 00:18:08.183 "nguid": "36D3F05480264045A71E625DF7E74EC0", 00:18:08.183 "uuid": "36d3f054-8026-4045-a71e-625df7e74ec0" 00:18:08.183 } 00:18:08.183 ] 00:18:08.183 } 00:18:08.183 ] 
00:18:08.440 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:08.440 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3814511 00:18:08.440 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:18:08.440 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:18:08.440 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # local i=0 00:18:08.440 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:08.440 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1274 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:08.440 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1278 -- # return 0 00:18:08.440 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:18:08.440 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:18:08.440 [2024-11-02 11:30:08.755055] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:08.698 Malloc4 00:18:08.698 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:18:08.956 [2024-11-02 11:30:09.186206] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:08.956 11:30:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:08.956 Asynchronous Event Request test 00:18:08.956 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:08.956 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:08.956 Registering asynchronous event callbacks... 00:18:08.956 Starting namespace attribute notice tests for all controllers... 00:18:08.956 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:08.956 aer_cb - Changed Namespace 00:18:08.956 Cleaning up... 
00:18:09.214 [ 00:18:09.214 { 00:18:09.214 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:09.214 "subtype": "Discovery", 00:18:09.214 "listen_addresses": [], 00:18:09.214 "allow_any_host": true, 00:18:09.214 "hosts": [] 00:18:09.214 }, 00:18:09.214 { 00:18:09.214 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:09.214 "subtype": "NVMe", 00:18:09.214 "listen_addresses": [ 00:18:09.214 { 00:18:09.214 "trtype": "VFIOUSER", 00:18:09.214 "adrfam": "IPv4", 00:18:09.214 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:09.214 "trsvcid": "0" 00:18:09.214 } 00:18:09.214 ], 00:18:09.214 "allow_any_host": true, 00:18:09.214 "hosts": [], 00:18:09.214 "serial_number": "SPDK1", 00:18:09.214 "model_number": "SPDK bdev Controller", 00:18:09.214 "max_namespaces": 32, 00:18:09.214 "min_cntlid": 1, 00:18:09.214 "max_cntlid": 65519, 00:18:09.214 "namespaces": [ 00:18:09.214 { 00:18:09.214 "nsid": 1, 00:18:09.214 "bdev_name": "Malloc1", 00:18:09.214 "name": "Malloc1", 00:18:09.214 "nguid": "E96D8518C83B46DB82F71A199127197E", 00:18:09.214 "uuid": "e96d8518-c83b-46db-82f7-1a199127197e" 00:18:09.214 }, 00:18:09.214 { 00:18:09.214 "nsid": 2, 00:18:09.214 "bdev_name": "Malloc3", 00:18:09.214 "name": "Malloc3", 00:18:09.214 "nguid": "463136F23BB54DFCACF6FDA0892D6936", 00:18:09.214 "uuid": "463136f2-3bb5-4dfc-acf6-fda0892d6936" 00:18:09.214 } 00:18:09.214 ] 00:18:09.214 }, 00:18:09.214 { 00:18:09.214 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:09.214 "subtype": "NVMe", 00:18:09.214 "listen_addresses": [ 00:18:09.214 { 00:18:09.214 "trtype": "VFIOUSER", 00:18:09.214 "adrfam": "IPv4", 00:18:09.214 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:09.214 "trsvcid": "0" 00:18:09.214 } 00:18:09.214 ], 00:18:09.214 "allow_any_host": true, 00:18:09.214 "hosts": [], 00:18:09.214 "serial_number": "SPDK2", 00:18:09.214 "model_number": "SPDK bdev Controller", 00:18:09.214 "max_namespaces": 32, 00:18:09.214 "min_cntlid": 1, 00:18:09.214 "max_cntlid": 65519, 00:18:09.214 "namespaces": [ 00:18:09.214 { 00:18:09.214 "nsid": 1, 00:18:09.214 "bdev_name": "Malloc2", 00:18:09.214 "name": "Malloc2", 00:18:09.214 "nguid": "36D3F05480264045A71E625DF7E74EC0", 00:18:09.214 "uuid": "36d3f054-8026-4045-a71e-625df7e74ec0" 00:18:09.214 }, 00:18:09.214 { 00:18:09.214 "nsid": 2, 00:18:09.214 "bdev_name": "Malloc4", 00:18:09.214 "name": "Malloc4", 00:18:09.214 "nguid": "9CD7CE28B4114831B52A62E1B294E61E", 00:18:09.214 "uuid": "9cd7ce28-b411-4831-b52a-62e1b294e61e" 00:18:09.214 } 00:18:09.214 ] 00:18:09.214 } 00:18:09.214 ] 00:18:09.214 11:30:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3814511 00:18:09.214 11:30:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:18:09.214 11:30:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3808391 00:18:09.214 11:30:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@952 -- # '[' -z 3808391 ']' 00:18:09.214 11:30:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # kill -0 3808391 00:18:09.214 11:30:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # uname 00:18:09.214 11:30:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:09.214 11:30:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3808391 00:18:09.214 11:30:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:09.214 11:30:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:09.214 11:30:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3808391' 00:18:09.214 killing process with pid 3808391 00:18:09.214 11:30:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@971 -- # kill 3808391 00:18:09.214 11:30:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@976 -- # wait 3808391 00:18:09.472 11:30:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:18:09.472 11:30:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:18:09.472 11:30:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:18:09.472 11:30:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:18:09.472 11:30:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:18:09.472 11:30:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3814759 00:18:09.472 11:30:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:18:09.472 11:30:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3814759' 00:18:09.472 Process pid: 3814759 00:18:09.472 11:30:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:09.472 11:30:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3814759 00:18:09.472 11:30:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # '[' -z 3814759 ']' 00:18:09.472 11:30:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:09.472 11:30:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:09.472 11:30:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:09.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:09.472 11:30:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:09.472 11:30:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:09.472 [2024-11-02 11:30:09.854676] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:18:09.472 [2024-11-02 11:30:09.855693] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 
00:18:09.472 [2024-11-02 11:30:09.855758] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:09.730 [2024-11-02 11:30:09.922077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:09.730 [2024-11-02 11:30:09.968804] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:09.730 [2024-11-02 11:30:09.968857] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:09.730 [2024-11-02 11:30:09.968885] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:09.731 [2024-11-02 11:30:09.968896] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:09.731 [2024-11-02 11:30:09.968905] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:09.731 [2024-11-02 11:30:09.970336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:09.731 [2024-11-02 11:30:09.970398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:09.731 [2024-11-02 11:30:09.970467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:09.731 [2024-11-02 11:30:09.970470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:09.731 [2024-11-02 11:30:10.060517] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:18:09.731 [2024-11-02 11:30:10.060748] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:18:09.731 [2024-11-02 11:30:10.061093] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:18:09.731 [2024-11-02 11:30:10.061768] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:18:09.731 [2024-11-02 11:30:10.062004] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:18:09.731 11:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:09.731 11:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@866 -- # return 0 00:18:09.731 11:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:18:11.107 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:18:11.107 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:18:11.107 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:18:11.107 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:11.107 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:18:11.107 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:11.366 Malloc1 00:18:11.366 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:18:11.624 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:18:12.189 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:18:12.446 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:12.446 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:18:12.446 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:12.705 Malloc2 00:18:12.705 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:18:12.963 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:18:13.221 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:18:13.479 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:18:13.479 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3814759 00:18:13.479 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@952 -- # '[' -z 3814759 ']' 00:18:13.479 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # kill -0 3814759 00:18:13.479 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # uname 00:18:13.479 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:13.479 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3814759 00:18:13.479 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:13.479 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:13.479 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3814759' 00:18:13.479 killing process with pid 3814759 00:18:13.479 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@971 -- # kill 3814759 00:18:13.479 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@976 -- # wait 3814759 00:18:13.737 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:18:13.737 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:18:13.737 00:18:13.737 real 0m53.656s 00:18:13.737 user 3m27.729s 00:18:13.737 sys 0m3.964s 00:18:13.737 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:13.737 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:13.737 ************************************ 00:18:13.737 END TEST nvmf_vfio_user 00:18:13.737 ************************************ 00:18:13.737 11:30:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:18:13.737 11:30:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:18:13.737 11:30:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:13.737 11:30:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:13.737 ************************************ 00:18:13.737 START TEST nvmf_vfio_user_nvme_compliance 00:18:13.737 ************************************ 00:18:13.737 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:18:13.737 * Looking for test storage... 
00:18:13.737 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:18:13.737 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:13.737 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lcov --version 00:18:13.737 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:13.996 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:13.996 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:13.996 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:13.996 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:13.996 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:18:13.996 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:18:13.996 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:18:13.996 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:18:13.996 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:18:13.997 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:18:13.997 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:18:13.997 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:13.997 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:18:13.997 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:18:13.997 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:13.997 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:13.997 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:18:13.997 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:18:13.997 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:13.997 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:18:13.997 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:18:13.997 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:18:13.997 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:18:13.997 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:13.997 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:18:13.997 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:18:13.997 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:13.997 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:13.997 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:18:13.997 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:13.997 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:13.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.997 --rc genhtml_branch_coverage=1 00:18:13.997 --rc genhtml_function_coverage=1 00:18:13.997 --rc genhtml_legend=1 00:18:13.997 --rc geninfo_all_blocks=1 00:18:13.997 --rc geninfo_unexecuted_blocks=1 00:18:13.997 00:18:13.997 ' 00:18:13.997 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:13.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.997 --rc genhtml_branch_coverage=1 00:18:13.997 --rc genhtml_function_coverage=1 00:18:13.997 --rc genhtml_legend=1 00:18:13.997 --rc geninfo_all_blocks=1 00:18:13.997 --rc geninfo_unexecuted_blocks=1 00:18:13.997 00:18:13.997 ' 00:18:13.997 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:13.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.997 --rc genhtml_branch_coverage=1 00:18:13.997 --rc genhtml_function_coverage=1 00:18:13.997 --rc genhtml_legend=1 00:18:13.997 --rc geninfo_all_blocks=1 00:18:13.997 --rc geninfo_unexecuted_blocks=1 00:18:13.997 00:18:13.997 ' 00:18:13.997 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:13.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.997 --rc genhtml_branch_coverage=1 00:18:13.997 --rc genhtml_function_coverage=1 00:18:13.997 --rc genhtml_legend=1 00:18:13.997 --rc geninfo_all_blocks=1 00:18:13.997 --rc 
geninfo_unexecuted_blocks=1 00:18:13.997 00:18:13.997 ' 00:18:13.997 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:13.997 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:18:13.997 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:13.997 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:13.997 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:13.997 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:13.997 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:13.997 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:13.997 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:13.997 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:13.997 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:13.997 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:13.997 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:13.997 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:13.997 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:13.997 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:13.997 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:13.997 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:13.997 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:13.997 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:18:13.997 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:13.997 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:13.997 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:13.997 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.997 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.997 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.997 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:18:13.997 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.997 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:18:13.997 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:13.997 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:13.997 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:13.997 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:13.997 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:18:13.997 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:13.997 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:13.997 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:13.997 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:13.997 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:13.997 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:13.997 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:13.997 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:18:13.997 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:18:13.997 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:18:13.997 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=3815361 00:18:13.998 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:18:13.998 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 3815361' 00:18:13.998 Process pid: 3815361 00:18:13.998 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:13.998 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 3815361 00:18:13.998 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # '[' -z 3815361 ']' 00:18:13.998 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:13.998 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:13.998 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:13.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:13.998 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:13.998 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:13.998 [2024-11-02 11:30:14.253837] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 
00:18:13.998 [2024-11-02 11:30:14.253942] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:13.998 [2024-11-02 11:30:14.319756] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:13.998 [2024-11-02 11:30:14.365808] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:13.998 [2024-11-02 11:30:14.365862] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:13.998 [2024-11-02 11:30:14.365890] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:13.998 [2024-11-02 11:30:14.365901] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:13.998 [2024-11-02 11:30:14.365910] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:13.998 [2024-11-02 11:30:14.367241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:13.998 [2024-11-02 11:30:14.367308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:13.998 [2024-11-02 11:30:14.367312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:14.256 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:14.256 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@866 -- # return 0 00:18:14.256 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:18:15.189 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:18:15.189 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:18:15.189 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:18:15.189 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.189 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:15.189 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.189 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:18:15.189 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:18:15.189 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.189 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:15.189 malloc0 00:18:15.189 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.189 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:18:15.189 11:30:15 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.189 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:15.189 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.189 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:18:15.189 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.189 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:15.189 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.189 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:18:15.189 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.189 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:15.189 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.189 11:30:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:18:15.479 00:18:15.479 00:18:15.479 CUnit - A unit testing framework for C - Version 2.1-3 00:18:15.479 http://cunit.sourceforge.net/ 00:18:15.479 00:18:15.479 00:18:15.479 Suite: nvme_compliance 00:18:15.479 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-02 11:30:15.746801] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:15.479 [2024-11-02 11:30:15.748198] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:18:15.479 [2024-11-02 11:30:15.748222] vfio_user.c:5507:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:18:15.479 [2024-11-02 11:30:15.748249] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:18:15.479 [2024-11-02 11:30:15.749822] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:15.479 passed 00:18:15.479 Test: admin_identify_ctrlr_verify_fused ...[2024-11-02 11:30:15.837431] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:15.479 [2024-11-02 11:30:15.840453] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:15.760 passed 00:18:15.760 Test: admin_identify_ns ...[2024-11-02 11:30:15.927853] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:15.760 [2024-11-02 11:30:15.987279] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:18:15.760 [2024-11-02 11:30:15.995271] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:18:15.760 [2024-11-02 11:30:16.016413] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:18:15.760 passed 00:18:15.760 Test: admin_get_features_mandatory_features ...[2024-11-02 11:30:16.099550] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:15.760 [2024-11-02 11:30:16.102566] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:15.760 passed 00:18:16.018 Test: admin_get_features_optional_features ...[2024-11-02 11:30:16.187102] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:16.018 [2024-11-02 11:30:16.190126] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:16.018 passed 00:18:16.018 Test: admin_set_features_number_of_queues ...[2024-11-02 11:30:16.274759] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:16.018 [2024-11-02 11:30:16.380383] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:16.018 passed 00:18:16.276 Test: admin_get_log_page_mandatory_logs ...[2024-11-02 11:30:16.461952] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:16.276 [2024-11-02 11:30:16.464978] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:16.276 passed 00:18:16.276 Test: admin_get_log_page_with_lpo ...[2024-11-02 11:30:16.550141] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:16.276 [2024-11-02 11:30:16.615272] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:18:16.276 [2024-11-02 11:30:16.628331] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:16.276 passed 00:18:16.533 Test: fabric_property_get ...[2024-11-02 11:30:16.712441] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:16.533 [2024-11-02 11:30:16.713738] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:18:16.533 [2024-11-02 11:30:16.715464] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:16.533 passed 00:18:16.533 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-02 11:30:16.800017] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:16.533 [2024-11-02 11:30:16.801321] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:18:16.533 [2024-11-02 11:30:16.803041] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:16.533 passed 00:18:16.533 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-02 11:30:16.883144] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:16.791 [2024-11-02 11:30:16.968282] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:16.791 [2024-11-02 11:30:16.984273] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:16.791 [2024-11-02 11:30:16.989385] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:16.791 passed 00:18:16.791 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-02 11:30:17.072901] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:16.791 [2024-11-02 11:30:17.074186] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:18:16.791 [2024-11-02 11:30:17.075921] vfio_user.c:2798:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:18:16.791 passed 00:18:16.791 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-02 11:30:17.156752] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:17.048 [2024-11-02 11:30:17.236274] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:18:17.048 [2024-11-02 11:30:17.260270] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:17.048 [2024-11-02 11:30:17.265388] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:17.048 passed 00:18:17.048 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-02 11:30:17.347882] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:17.048 [2024-11-02 11:30:17.349167] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:18:17.048 [2024-11-02 11:30:17.349221] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:18:17.048 [2024-11-02 11:30:17.350905] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:17.048 passed 00:18:17.048 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-02 11:30:17.432823] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:17.306 [2024-11-02 11:30:17.525277] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:18:17.306 [2024-11-02 11:30:17.533282] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:18:17.306 [2024-11-02 11:30:17.541281] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:18:17.306 [2024-11-02 11:30:17.549278] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:18:17.306 [2024-11-02 11:30:17.578375] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:17.306 passed 00:18:17.306 Test: admin_create_io_sq_verify_pc ...[2024-11-02 11:30:17.660881] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:17.306 [2024-11-02 11:30:17.677282] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:18:17.306 [2024-11-02 11:30:17.695262] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:17.563 passed 00:18:17.563 Test: admin_create_io_qp_max_qps ...[2024-11-02 11:30:17.777803] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:18.496 [2024-11-02 11:30:18.882272] nvme_ctrlr.c:5487:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:18:19.061 [2024-11-02 11:30:19.269595] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:19.061 passed 00:18:19.061 Test: admin_create_io_sq_shared_cq ...[2024-11-02 11:30:19.351853] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:19.319 [2024-11-02 11:30:19.484271] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:18:19.319 [2024-11-02 11:30:19.521373] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:19.319 passed 00:18:19.319 00:18:19.319 Run Summary: Type Total Ran Passed Failed Inactive 00:18:19.319 suites 1 1 n/a 0 0 00:18:19.319 tests 18 18 18 0 0 00:18:19.319 asserts 
360 360 360 0 n/a 00:18:19.319 00:18:19.319 Elapsed time = 1.563 seconds 00:18:19.319 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 3815361 00:18:19.319 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # '[' -z 3815361 ']' 00:18:19.319 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # kill -0 3815361 00:18:19.319 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@957 -- # uname 00:18:19.319 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:19.319 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3815361 00:18:19.319 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:19.319 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:19.319 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3815361' 00:18:19.319 killing process with pid 3815361 00:18:19.319 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@971 -- # kill 3815361 00:18:19.319 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@976 -- # wait 3815361 00:18:19.576 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:18:19.576 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:18:19.576 00:18:19.576 real 0m5.794s 00:18:19.576 user 0m16.356s 00:18:19.576 sys 0m0.541s 00:18:19.576 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:19.576 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:19.576 ************************************ 00:18:19.577 END TEST nvmf_vfio_user_nvme_compliance 00:18:19.577 ************************************ 00:18:19.577 11:30:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:18:19.577 11:30:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:18:19.577 11:30:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:19.577 11:30:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:19.577 ************************************ 00:18:19.577 START TEST nvmf_vfio_user_fuzz 00:18:19.577 ************************************ 00:18:19.577 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:18:19.577 * Looking for test storage... 
00:18:19.577 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:19.577 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:19.577 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:18:19.577 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:19.835 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:19.835 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:19.835 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:19.835 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:19.835 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:18:19.835 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:18:19.835 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:18:19.835 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:18:19.835 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:18:19.835 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:18:19.835 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:18:19.835 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:19.835 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:18:19.835 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:18:19.835 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:19.835 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:19.835 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:18:19.835 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:18:19.835 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:19.835 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:18:19.835 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:18:19.835 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:18:19.835 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:18:19.835 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:19.835 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:18:19.835 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:18:19.835 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:19.835 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:19.835 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:18:19.835 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:19.835 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:19.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.835 --rc genhtml_branch_coverage=1 00:18:19.835 --rc genhtml_function_coverage=1 00:18:19.835 --rc genhtml_legend=1 00:18:19.835 --rc geninfo_all_blocks=1 00:18:19.835 --rc geninfo_unexecuted_blocks=1 00:18:19.835 00:18:19.835 ' 00:18:19.835 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:19.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.835 --rc genhtml_branch_coverage=1 00:18:19.835 --rc genhtml_function_coverage=1 00:18:19.835 --rc genhtml_legend=1 00:18:19.835 --rc geninfo_all_blocks=1 00:18:19.835 --rc geninfo_unexecuted_blocks=1 00:18:19.835 00:18:19.835 ' 00:18:19.835 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:19.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.835 --rc genhtml_branch_coverage=1 00:18:19.835 --rc genhtml_function_coverage=1 00:18:19.835 --rc genhtml_legend=1 00:18:19.835 --rc geninfo_all_blocks=1 00:18:19.835 --rc geninfo_unexecuted_blocks=1 00:18:19.835 00:18:19.835 ' 00:18:19.835 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:19.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.835 --rc genhtml_branch_coverage=1 00:18:19.835 --rc genhtml_function_coverage=1 00:18:19.835 --rc genhtml_legend=1 00:18:19.835 --rc geninfo_all_blocks=1 00:18:19.835 --rc geninfo_unexecuted_blocks=1 00:18:19.835 00:18:19.835 ' 00:18:19.835 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:19.835 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:18:19.835 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:19.835 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:19.835 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:19.835 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:19.835 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:19.835 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:19.835 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:19.835 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:19.836 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:19.836 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:19.836 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:19.836 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:19.836 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:19.836 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:19.836 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:19.836 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:19.836 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:19.836 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:18:19.836 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:19.836 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:19.836 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:19.836 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.836 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.836 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.836 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:18:19.836 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.836 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:18:19.836 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:19.836 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:19.836 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:19.836 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:19.836 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:19.836 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:18:19.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:19.836 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:19.836 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:19.836 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:19.836 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:19.836 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:19.836 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:18:19.836 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:18:19.836 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:18:19.836 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:18:19.836 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:18:19.836 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3816095 00:18:19.836 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:19.836 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3816095' 00:18:19.836 Process pid: 3816095 00:18:19.836 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:19.836 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3816095 00:18:19.836 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # '[' -z 3816095 ']' 00:18:19.836 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:19.836 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:19.836 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:19.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:19.836 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:19.836 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:20.094 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:20.094 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@866 -- # return 0 00:18:20.094 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:18:21.028 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:18:21.028 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.028 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:21.028 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.028 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:18:21.028 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:18:21.028 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.028 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:21.028 malloc0 00:18:21.028 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.028 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:18:21.028 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.028 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:21.286 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.286 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:18:21.286 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.286 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:21.286 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.286 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:18:21.286 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.286 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:21.286 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.286 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
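The xtrace above builds the vfio-user fuzz target entirely through SPDK RPCs. A minimal consolidated sketch of that sequence follows, assuming the rpc_cmd helper from SPDK's test harness (a wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock); the NQN, malloc bdev geometry, and socket directory are the values visible in the trace.

    # sketch: stand up the same VFIOUSER subsystem by hand (values copied from the trace above)
    rpc_cmd nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user
    rpc_cmd bdev_malloc_create 64 512 -b malloc0        # 64 MB malloc bdev, 512-byte blocks
    rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
    rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0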
00:18:21.286 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:18:53.353 Fuzzing completed. Shutting down the fuzz application 00:18:53.353 00:18:53.353 Dumping successful admin opcodes: 00:18:53.353 8, 9, 10, 24, 00:18:53.353 Dumping successful io opcodes: 00:18:53.353 0, 00:18:53.353 NS: 0x20000081ef00 I/O qp, Total commands completed: 621467, total successful commands: 2405, random_seed: 1392487168 00:18:53.353 NS: 0x20000081ef00 admin qp, Total commands completed: 79994, total successful commands: 631, random_seed: 2779161024 00:18:53.353 11:30:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:18:53.353 11:30:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.353 11:30:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:53.353 11:30:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.353 11:30:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 3816095 00:18:53.353 11:30:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # '[' -z 3816095 ']' 00:18:53.353 11:30:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # kill -0 3816095 00:18:53.353 11:30:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@957 -- # uname 00:18:53.353 11:30:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:53.353 11:30:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3816095 00:18:53.353 11:30:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:53.353 11:30:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:53.353 11:30:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3816095' 00:18:53.353 killing process with pid 3816095 00:18:53.353 11:30:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@971 -- # kill 3816095 00:18:53.353 11:30:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@976 -- # wait 3816095 00:18:53.353 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:18:53.353 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:18:53.353 00:18:53.353 real 0m32.205s 00:18:53.353 user 0m31.448s 00:18:53.353 sys 0m30.206s 00:18:53.353 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:53.353 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:53.353 
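The fuzz run above drove 621467 I/O commands (2405 successful) and 79994 admin commands against the vfio-user controller in its 30-second window. A minimal sketch of repeating that invocation by hand follows; SPDK_DIR is a hypothetical stand-in for the checkout path used on the build node, and every flag is copied from the trace.

    # sketch: re-run the fuzzer against the same vfio-user endpoint (SPDK_DIR is a placeholder path)
    SPDK_DIR=/path/to/spdk
    "$SPDK_DIR"/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
        -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a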
************************************ 00:18:53.353 END TEST nvmf_vfio_user_fuzz 00:18:53.353 ************************************ 00:18:53.353 11:30:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:53.353 11:30:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:18:53.353 11:30:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:53.353 11:30:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:53.353 ************************************ 00:18:53.353 START TEST nvmf_auth_target 00:18:53.353 ************************************ 00:18:53.353 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:53.353 * Looking for test storage... 00:18:53.353 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:53.353 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:53.353 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lcov --version 00:18:53.353 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:53.353 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:53.353 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:53.353 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:53.353 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:53.353 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:18:53.353 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:18:53.353 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:18:53.353 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:18:53.353 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:18:53.353 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:18:53.353 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:18:53.353 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:53.353 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:18:53.353 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:18:53.353 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:53.353 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:53.353 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:18:53.353 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:18:53.353 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:53.353 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:18:53.353 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:18:53.353 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:18:53.353 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:18:53.353 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:53.353 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:18:53.353 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:18:53.353 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:53.353 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:53.353 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:18:53.353 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:53.353 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:53.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:53.353 --rc genhtml_branch_coverage=1 00:18:53.353 --rc genhtml_function_coverage=1 00:18:53.353 --rc genhtml_legend=1 00:18:53.353 --rc geninfo_all_blocks=1 00:18:53.353 --rc geninfo_unexecuted_blocks=1 00:18:53.353 00:18:53.353 ' 00:18:53.353 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:53.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:53.353 --rc genhtml_branch_coverage=1 00:18:53.353 --rc genhtml_function_coverage=1 00:18:53.353 --rc genhtml_legend=1 00:18:53.353 --rc geninfo_all_blocks=1 00:18:53.353 --rc geninfo_unexecuted_blocks=1 00:18:53.353 00:18:53.353 ' 00:18:53.353 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:53.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:53.353 --rc genhtml_branch_coverage=1 00:18:53.353 --rc genhtml_function_coverage=1 00:18:53.353 --rc genhtml_legend=1 00:18:53.353 --rc geninfo_all_blocks=1 00:18:53.353 --rc geninfo_unexecuted_blocks=1 00:18:53.353 00:18:53.353 ' 00:18:53.353 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:53.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:53.353 --rc genhtml_branch_coverage=1 00:18:53.353 --rc genhtml_function_coverage=1 00:18:53.353 --rc genhtml_legend=1 00:18:53.353 --rc geninfo_all_blocks=1 00:18:53.353 --rc geninfo_unexecuted_blocks=1 00:18:53.353 00:18:53.353 ' 00:18:53.353 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:53.353 11:30:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:18:53.353 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:53.353 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:53.353 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:53.354 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:53.354 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:53.354 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:53.354 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:53.354 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:53.354 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:53.354 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:53.354 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:53.354 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:53.354 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:53.354 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:53.354 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:53.354 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:53.354 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:53.354 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:18:53.354 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:53.354 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:53.354 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:53.354 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.354 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.354 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.354 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:18:53.354 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.354 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:18:53.354 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:53.354 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:53.354 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:53.354 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:53.354 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:53.354 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:53.354 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:53.354 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:53.354 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:53.354 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:53.354 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:53.354 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:53.354 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:18:53.354 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:53.354 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:18:53.354 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:18:53.354 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:18:53.354 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:18:53.354 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:53.354 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:53.354 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:53.354 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:53.354 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:53.354 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:53.354 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:53.354 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:53.354 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:53.354 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:53.354 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:18:53.354 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.921 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:53.921 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:18:53.921 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:53.921 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:53.921 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:53.921 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:53.921 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:53.921 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:18:53.921 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:53.921 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:18:53.921 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:18:53.921 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:18:53.921 
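nvmf/common.sh has just been sourced for its defaults (ports 4420-4422, the generated host NQN) and target/auth.sh has declared its test matrix: three digests, the null DH group plus five ffdhe groups, the subsystem NQN nqn.2024-03.io.spdk:cnode0 and a host-side RPC socket at /var/tmp/host.sock. Everything that follows in this log is one walk over that matrix; condensed, and assuming the script's own keys array, hostrpc wrapper and connect_authenticate helper:

  digests=("sha256" "sha384" "sha512")
  dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192")
  for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
          for keyid in "${!keys[@]}"; do
              # restrict the host app to one digest/DH-group pair, then authenticate with key$keyid
              hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
              connect_authenticate "$digest" "$dhgroup" "$keyid"
          done
      done
  done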
11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:18:53.921 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:18:53.921 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:18:53.921 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:53.921 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:53.921 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:53.921 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:53.921 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:53.921 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:53.921 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:53.921 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:53.921 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:53.921 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:53.921 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:53.921 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:53.921 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:53.921 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:53.921 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:53.921 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:53.921 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:53.921 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:53.921 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:53.921 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:53.921 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:53.921 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:53.921 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:53.921 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:53.921 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:53.921 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:53.921 11:30:54 
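gather_supported_nvmf_pci_devs matches PCI vendor/device IDs (Intel E810 0x1592/0x159b, X722 0x37d2, a list of Mellanox ConnectX IDs) against the bus and has just reported the first of two E810 ports bound to the ice driver. The interface name behind each PCI function is read from sysfs, which is what the next lines do; a stand-alone equivalent for the two addresses seen in this run:

  for pci in 0000:0a:00.0 0000:0a:00.1; do
      for netdir in /sys/bus/pci/devices/$pci/net/*; do
          [[ -e $netdir ]] || continue
          dev=${netdir##*/}
          echo "$pci -> $dev ($(cat /sys/class/net/$dev/operstate))"
      done
  done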
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:53.921 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:53.921 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:53.921 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:53.921 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:53.921 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:53.921 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:53.921 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:53.921 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:53.921 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:53.921 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:53.921 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:53.921 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:53.921 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:53.921 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:53.921 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:53.921 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:53.921 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:53.921 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:53.921 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:53.921 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:53.921 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:53.921 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:53.921 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:53.921 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:53.921 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:53.921 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:53.921 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:53.921 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:53.921 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:53.921 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:18:53.921 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:53.921 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:18:53.921 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:53.921 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:53.921 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:53.921 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:53.921 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:53.921 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:53.921 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:53.921 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:53.921 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:53.921 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:53.922 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:53.922 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:53.922 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:53.922 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:53.922 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:53.922 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:53.922 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:53.922 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:53.922 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:53.922 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:53.922 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:53.922 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:54.180 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:54.180 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:54.180 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:54.180 11:30:54 
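nvmf_tcp_init then splits the two ports across a network namespace boundary: cvl_0_0 becomes the target interface inside cvl_0_0_ns_spdk with 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator side with 10.0.0.1/24, and TCP/4420 is opened in the firewall. The same wiring written out as plain commands, with the interface names and addresses taken from this run:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT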
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:54.180 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:54.180 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.306 ms 00:18:54.180 00:18:54.180 --- 10.0.0.2 ping statistics --- 00:18:54.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:54.180 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:18:54.180 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:54.180 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:54.180 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:18:54.180 00:18:54.180 --- 10.0.0.1 ping statistics --- 00:18:54.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:54.180 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:18:54.180 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:54.180 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:18:54.180 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:54.180 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:54.180 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:54.180 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:54.180 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:54.180 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:54.180 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:54.180 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:18:54.180 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:54.180 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:54.180 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.180 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3821551 00:18:54.180 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:18:54.180 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3821551 00:18:54.180 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 3821551 ']' 00:18:54.180 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:54.180 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:54.180 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
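With both directions pinging, the target application is launched inside the namespace with the nvmf_auth debug log flag and the test waits for its default RPC socket before continuing. A simplified stand-in for nvmfappstart/waitforlisten (paths shortened; the real helpers do more bookkeeping, and polling rpc_get_methods is just one way to detect a live socket):

  modprobe nvme-tcp
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &
  nvmfpid=$!
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
      sleep 0.5
  done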
00:18:54.180 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:54.180 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.439 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:54.439 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:18:54.439 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:54.439 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:54.439 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.439 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:54.439 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=3821575 00:18:54.439 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:18:54.440 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:54.440 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:18:54.440 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:54.440 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:54.440 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:54.440 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:18:54.440 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:18:54.440 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:54.440 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=27cc6e3e90f5b36cec50ba9dfb1ea994b055085ab939eb44 00:18:54.440 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:18:54.440 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.c5V 00:18:54.440 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 27cc6e3e90f5b36cec50ba9dfb1ea994b055085ab939eb44 0 00:18:54.440 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 27cc6e3e90f5b36cec50ba9dfb1ea994b055085ab939eb44 0 00:18:54.440 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:54.440 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:54.440 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=27cc6e3e90f5b36cec50ba9dfb1ea994b055085ab939eb44 00:18:54.440 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:18:54.440 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
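gen_dhchap_key, traced above, draws the requested number of random bytes, keeps them as a hex string, and wraps that string into the DH-HMAC-CHAP secret format DHHC-1:<hmac-id>:<base64 of key plus CRC32>:, where the hmac id is 00 for a plain key and 01/02/03 for sha256/sha384/sha512. A rough stand-alone equivalent for the null/48 case; the little-endian CRC byte order and the exact framing are assumptions about what the script's python step does, not something visible in the log:

  key=$(xxd -p -c0 -l 24 /dev/urandom)    # 24 random bytes -> 48 hex characters
  secret=$(python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); print("DHHC-1:00:" + base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode() + ":")' "$key")
  keyfile=$(mktemp -t spdk.key-null.XXX)
  printf '%s\n' "$secret" > "$keyfile"
  chmod 0600 "$keyfile"                   # later RPCs take the file path, not the secret itself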
00:18:54.440 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.c5V 00:18:54.440 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.c5V 00:18:54.440 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.c5V 00:18:54.440 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:18:54.440 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:54.440 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:54.440 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:54.440 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:18:54.440 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:18:54.440 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:54.440 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=befe3ea2c455ccfede835a9e7f1cb8d04c117ce425af6e771cfadb99ad0c481a 00:18:54.440 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:18:54.440 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.7Ts 00:18:54.440 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key befe3ea2c455ccfede835a9e7f1cb8d04c117ce425af6e771cfadb99ad0c481a 3 00:18:54.440 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 befe3ea2c455ccfede835a9e7f1cb8d04c117ce425af6e771cfadb99ad0c481a 3 00:18:54.440 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:54.440 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:54.440 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=befe3ea2c455ccfede835a9e7f1cb8d04c117ce425af6e771cfadb99ad0c481a 00:18:54.440 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:18:54.440 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:54.440 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.7Ts 00:18:54.440 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.7Ts 00:18:54.440 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.7Ts 00:18:54.440 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:18:54.440 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:54.440 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:54.440 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:54.440 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
00:18:54.440 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:18:54.440 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:54.440 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a2d4174a3a8b317941c2a32a86047241 00:18:54.440 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:18:54.440 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.p9J 00:18:54.440 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a2d4174a3a8b317941c2a32a86047241 1 00:18:54.440 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a2d4174a3a8b317941c2a32a86047241 1 00:18:54.440 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:54.440 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:54.440 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a2d4174a3a8b317941c2a32a86047241 00:18:54.440 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:18:54.440 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:54.699 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.p9J 00:18:54.699 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.p9J 00:18:54.699 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.p9J 00:18:54.699 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:18:54.699 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:54.699 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:54.699 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:54.699 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:18:54.699 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:18:54.699 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:54.699 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=550e16f1f0c372cde997d0fb5368d37fca9e4daeda205abd 00:18:54.699 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:18:54.699 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.ODc 00:18:54.699 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 550e16f1f0c372cde997d0fb5368d37fca9e4daeda205abd 2 00:18:54.699 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 550e16f1f0c372cde997d0fb5368d37fca9e4daeda205abd 2 00:18:54.699 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:54.699 11:30:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:54.699 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=550e16f1f0c372cde997d0fb5368d37fca9e4daeda205abd 00:18:54.699 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:18:54.699 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:54.699 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.ODc 00:18:54.699 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.ODc 00:18:54.699 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.ODc 00:18:54.699 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:18:54.699 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:54.699 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:54.699 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:54.699 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:18:54.699 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:18:54.699 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:54.699 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=50826ea6b761dc75f24efbac3c4268a1aac66b34433cf17e 00:18:54.699 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:18:54.699 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.raq 00:18:54.699 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 50826ea6b761dc75f24efbac3c4268a1aac66b34433cf17e 2 00:18:54.699 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 50826ea6b761dc75f24efbac3c4268a1aac66b34433cf17e 2 00:18:54.699 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:54.699 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:54.699 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=50826ea6b761dc75f24efbac3c4268a1aac66b34433cf17e 00:18:54.699 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:18:54.699 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:54.699 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.raq 00:18:54.699 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.raq 00:18:54.699 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.raq 00:18:54.699 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:18:54.699 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
00:18:54.699 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:54.699 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:54.699 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:18:54.699 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:18:54.699 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:54.699 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=1c2694e2c07f9880f315bb7394fb7e6d 00:18:54.699 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:18:54.699 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.sqy 00:18:54.699 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 1c2694e2c07f9880f315bb7394fb7e6d 1 00:18:54.699 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 1c2694e2c07f9880f315bb7394fb7e6d 1 00:18:54.699 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:54.699 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:54.699 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=1c2694e2c07f9880f315bb7394fb7e6d 00:18:54.699 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:18:54.699 11:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:54.699 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.sqy 00:18:54.699 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.sqy 00:18:54.699 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.sqy 00:18:54.699 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:18:54.699 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:54.699 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:54.699 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:54.699 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:18:54.699 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:18:54.699 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:54.700 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=1d5fa67bf6ff9f292ca0898142bf1f5e74ae6d8653843a8dd828bcf09103c5f6 00:18:54.700 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:18:54.700 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.qqJ 00:18:54.700 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key 1d5fa67bf6ff9f292ca0898142bf1f5e74ae6d8653843a8dd828bcf09103c5f6 3 00:18:54.700 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 1d5fa67bf6ff9f292ca0898142bf1f5e74ae6d8653843a8dd828bcf09103c5f6 3 00:18:54.700 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:54.700 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:54.700 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=1d5fa67bf6ff9f292ca0898142bf1f5e74ae6d8653843a8dd828bcf09103c5f6 00:18:54.700 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:18:54.700 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:54.700 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.qqJ 00:18:54.700 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.qqJ 00:18:54.700 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.qqJ 00:18:54.700 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:18:54.700 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 3821551 00:18:54.700 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 3821551 ']' 00:18:54.700 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:54.700 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:54.700 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:54.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:54.700 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:54.700 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.958 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:54.958 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:18:54.958 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 3821575 /var/tmp/host.sock 00:18:54.958 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 3821575 ']' 00:18:54.958 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:18:54.958 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:54.958 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:54.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
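All eight secrets now exist under /tmp, and a second SPDK application (spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth) is up to play the host/initiator role against the target in the namespace. The script's rpc_cmd and hostrpc wrappers are, in essence, rpc.py pointed at one socket or the other; simplified stand-ins:

  ./build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth &
  hostpid=$!
  rpc_cmd() { ./scripts/rpc.py "$@"; }                         # target side, /var/tmp/spdk.sock
  hostrpc() { ./scripts/rpc.py -s /var/tmp/host.sock "$@"; }   # host application side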
00:18:54.958 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:54.958 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.216 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:55.216 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:18:55.216 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:18:55.216 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.216 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.474 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.474 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:55.474 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.c5V 00:18:55.474 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.474 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.474 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.474 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.c5V 00:18:55.474 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.c5V 00:18:55.732 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.7Ts ]] 00:18:55.732 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.7Ts 00:18:55.732 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.732 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.732 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.732 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.7Ts 00:18:55.732 11:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.7Ts 00:18:55.989 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:55.989 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.p9J 00:18:55.989 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.989 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.989 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.989 11:30:56 
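Each key file is then registered twice, once in the target's keyring and once in the host application's, under matching names (key0/ckey0, key1/ckey1, ...), so that later RPCs can refer to the secrets by name on both ends. Condensed, using the wrappers sketched above:

  for i in "${!keys[@]}"; do
      rpc_cmd keyring_file_add_key "key$i" "${keys[i]}"
      hostrpc keyring_file_add_key "key$i" "${keys[i]}"
      if [[ -n ${ckeys[i]} ]]; then
          rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[i]}"
          hostrpc keyring_file_add_key "ckey$i" "${ckeys[i]}"
      fi
  done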
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.p9J 00:18:55.989 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.p9J 00:18:56.247 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.ODc ]] 00:18:56.247 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ODc 00:18:56.247 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.247 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.247 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.247 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ODc 00:18:56.247 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ODc 00:18:56.505 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:56.505 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.raq 00:18:56.505 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.505 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.505 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.505 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.raq 00:18:56.505 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.raq 00:18:56.763 11:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.sqy ]] 00:18:56.763 11:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.sqy 00:18:56.763 11:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.763 11:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.763 11:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.763 11:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.sqy 00:18:56.763 11:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.sqy 00:18:57.021 11:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:57.021 11:30:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.qqJ 00:18:57.021 11:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.021 11:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.021 11:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.021 11:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.qqJ 00:18:57.021 11:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.qqJ 00:18:57.279 11:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:18:57.279 11:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:18:57.279 11:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:57.279 11:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:57.279 11:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:57.279 11:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:57.536 11:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:18:57.536 11:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:57.536 11:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:57.536 11:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:57.536 11:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:57.536 11:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:57.536 11:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:57.536 11:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.536 11:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.536 11:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.536 11:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:57.536 11:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:57.537 
11:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:58.103 00:18:58.103 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:58.103 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:58.103 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:58.103 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.103 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:58.103 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.103 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.103 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.103 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:58.103 { 00:18:58.103 "cntlid": 1, 00:18:58.104 "qid": 0, 00:18:58.104 "state": "enabled", 00:18:58.104 "thread": "nvmf_tgt_poll_group_000", 00:18:58.104 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:58.104 "listen_address": { 00:18:58.104 "trtype": "TCP", 00:18:58.104 "adrfam": "IPv4", 00:18:58.104 "traddr": "10.0.0.2", 00:18:58.104 "trsvcid": "4420" 00:18:58.104 }, 00:18:58.104 "peer_address": { 00:18:58.104 "trtype": "TCP", 00:18:58.104 "adrfam": "IPv4", 00:18:58.104 "traddr": "10.0.0.1", 00:18:58.104 "trsvcid": "45862" 00:18:58.104 }, 00:18:58.104 "auth": { 00:18:58.104 "state": "completed", 00:18:58.104 "digest": "sha256", 00:18:58.104 "dhgroup": "null" 00:18:58.104 } 00:18:58.104 } 00:18:58.104 ]' 00:18:58.104 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:58.362 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:58.362 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:58.362 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:58.362 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:58.362 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:58.362 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:58.362 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:58.620 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:MjdjYzZlM2U5MGY1YjM2Y2VjNTBiYTlkZmIxZWE5OTRiMDU1MDg1YWI5MzllYjQ0Z8wjlA==: --dhchap-ctrl-secret DHHC-1:03:YmVmZTNlYTJjNDU1Y2NmZWRlODM1YTllN2YxY2I4ZDA0YzExN2NlNDI1YWY2ZTc3MWNmYWRiOTlhZDBjNDgxYf8D4oQ=: 00:18:58.620 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MjdjYzZlM2U5MGY1YjM2Y2VjNTBiYTlkZmIxZWE5OTRiMDU1MDg1YWI5MzllYjQ0Z8wjlA==: --dhchap-ctrl-secret DHHC-1:03:YmVmZTNlYTJjNDU1Y2NmZWRlODM1YTllN2YxY2I4ZDA0YzExN2NlNDI1YWY2ZTc3MWNmYWRiOTlhZDBjNDgxYf8D4oQ=: 00:18:59.553 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:59.553 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:59.553 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:59.553 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.553 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.553 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.553 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:59.553 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:59.553 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:59.811 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:18:59.811 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:59.811 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:59.811 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:59.811 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:59.811 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:59.811 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:59.811 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.811 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.811 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.811 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:59.811 11:31:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:59.811 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:00.068 00:19:00.068 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:00.068 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:00.068 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:00.633 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.633 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:00.633 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.633 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.633 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.633 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:00.633 { 00:19:00.633 "cntlid": 3, 00:19:00.633 "qid": 0, 00:19:00.633 "state": "enabled", 00:19:00.633 "thread": "nvmf_tgt_poll_group_000", 00:19:00.634 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:00.634 "listen_address": { 00:19:00.634 "trtype": "TCP", 00:19:00.634 "adrfam": "IPv4", 00:19:00.634 "traddr": "10.0.0.2", 00:19:00.634 "trsvcid": "4420" 00:19:00.634 }, 00:19:00.634 "peer_address": { 00:19:00.634 "trtype": "TCP", 00:19:00.634 "adrfam": "IPv4", 00:19:00.634 "traddr": "10.0.0.1", 00:19:00.634 "trsvcid": "45878" 00:19:00.634 }, 00:19:00.634 "auth": { 00:19:00.634 "state": "completed", 00:19:00.634 "digest": "sha256", 00:19:00.634 "dhgroup": "null" 00:19:00.634 } 00:19:00.634 } 00:19:00.634 ]' 00:19:00.634 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:00.634 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:00.634 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:00.634 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:00.634 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:00.634 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:00.634 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:00.634 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:00.890 11:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTJkNDE3NGEzYThiMzE3OTQxYzJhMzJhODYwNDcyNDE6+UtJ: --dhchap-ctrl-secret DHHC-1:02:NTUwZTE2ZjFmMGMzNzJjZGU5OTdkMGZiNTM2OGQzN2ZjYTllNGRhZWRhMjA1YWJkR4NUhw==: 00:19:00.890 11:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YTJkNDE3NGEzYThiMzE3OTQxYzJhMzJhODYwNDcyNDE6+UtJ: --dhchap-ctrl-secret DHHC-1:02:NTUwZTE2ZjFmMGMzNzJjZGU5OTdkMGZiNTM2OGQzN2ZjYTllNGRhZWRhMjA1YWJkR4NUhw==: 00:19:01.822 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:01.822 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:01.822 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:01.822 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.822 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.822 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.822 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:01.822 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:01.822 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:02.080 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:19:02.080 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:02.080 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:02.080 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:02.080 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:02.080 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:02.080 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:02.080 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.080 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.080 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.080 11:31:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:02.080 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:02.080 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:02.338 00:19:02.338 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:02.338 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:02.338 11:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:02.903 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.903 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:02.903 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.903 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.903 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.903 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:02.903 { 00:19:02.904 "cntlid": 5, 00:19:02.904 "qid": 0, 00:19:02.904 "state": "enabled", 00:19:02.904 "thread": "nvmf_tgt_poll_group_000", 00:19:02.904 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:02.904 "listen_address": { 00:19:02.904 "trtype": "TCP", 00:19:02.904 "adrfam": "IPv4", 00:19:02.904 "traddr": "10.0.0.2", 00:19:02.904 "trsvcid": "4420" 00:19:02.904 }, 00:19:02.904 "peer_address": { 00:19:02.904 "trtype": "TCP", 00:19:02.904 "adrfam": "IPv4", 00:19:02.904 "traddr": "10.0.0.1", 00:19:02.904 "trsvcid": "45890" 00:19:02.904 }, 00:19:02.904 "auth": { 00:19:02.904 "state": "completed", 00:19:02.904 "digest": "sha256", 00:19:02.904 "dhgroup": "null" 00:19:02.904 } 00:19:02.904 } 00:19:02.904 ]' 00:19:02.904 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:02.904 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:02.904 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:02.904 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:02.904 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:02.904 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:02.904 11:31:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:02.904 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:03.162 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTA4MjZlYTZiNzYxZGM3NWYyNGVmYmFjM2M0MjY4YTFhYWM2NmIzNDQzM2NmMTdl50iUxg==: --dhchap-ctrl-secret DHHC-1:01:MWMyNjk0ZTJjMDdmOTg4MGYzMTViYjczOTRmYjdlNmRN/1BA: 00:19:03.162 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NTA4MjZlYTZiNzYxZGM3NWYyNGVmYmFjM2M0MjY4YTFhYWM2NmIzNDQzM2NmMTdl50iUxg==: --dhchap-ctrl-secret DHHC-1:01:MWMyNjk0ZTJjMDdmOTg4MGYzMTViYjczOTRmYjdlNmRN/1BA: 00:19:04.093 11:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:04.093 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:04.093 11:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:04.093 11:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.093 11:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.093 11:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.093 11:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:04.093 11:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:04.093 11:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:04.351 11:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:19:04.351 11:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:04.351 11:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:04.351 11:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:04.351 11:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:04.351 11:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:04.351 11:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:04.351 11:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.351 11:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:04.351 11:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.351 11:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:04.351 11:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:04.351 11:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:04.917 00:19:04.917 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:04.917 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:04.917 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:04.917 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.917 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:04.917 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.917 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.917 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.917 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:04.917 { 00:19:04.917 "cntlid": 7, 00:19:04.917 "qid": 0, 00:19:04.917 "state": "enabled", 00:19:04.917 "thread": "nvmf_tgt_poll_group_000", 00:19:04.917 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:04.917 "listen_address": { 00:19:04.917 "trtype": "TCP", 00:19:04.917 "adrfam": "IPv4", 00:19:04.917 "traddr": "10.0.0.2", 00:19:04.917 "trsvcid": "4420" 00:19:04.917 }, 00:19:04.917 "peer_address": { 00:19:04.917 "trtype": "TCP", 00:19:04.917 "adrfam": "IPv4", 00:19:04.917 "traddr": "10.0.0.1", 00:19:04.917 "trsvcid": "45916" 00:19:04.917 }, 00:19:04.917 "auth": { 00:19:04.917 "state": "completed", 00:19:04.917 "digest": "sha256", 00:19:04.917 "dhgroup": "null" 00:19:04.917 } 00:19:04.917 } 00:19:04.917 ]' 00:19:05.175 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:05.175 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:05.175 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:05.175 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:05.175 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:05.175 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:05.175 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.175 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.433 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWQ1ZmE2N2JmNmZmOWYyOTJjYTA4OTgxNDJiZjFmNWU3NGFlNmQ4NjUzODQzYThkZDgyOGJjZjA5MTAzYzVmNiitUgU=: 00:19:05.433 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MWQ1ZmE2N2JmNmZmOWYyOTJjYTA4OTgxNDJiZjFmNWU3NGFlNmQ4NjUzODQzYThkZDgyOGJjZjA5MTAzYzVmNiitUgU=: 00:19:06.366 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:06.366 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:06.366 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:06.366 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.366 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.366 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.366 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:06.366 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:06.366 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:06.366 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:06.624 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:19:06.624 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:06.624 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:06.624 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:06.624 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:06.624 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:06.624 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:06.624 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.624 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.624 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.624 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:06.624 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:06.624 11:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:06.882 00:19:06.882 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:06.882 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:06.882 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:07.140 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.398 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:07.398 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.398 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.398 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.398 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:07.398 { 00:19:07.398 "cntlid": 9, 00:19:07.398 "qid": 0, 00:19:07.398 "state": "enabled", 00:19:07.398 "thread": "nvmf_tgt_poll_group_000", 00:19:07.398 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:07.398 "listen_address": { 00:19:07.398 "trtype": "TCP", 00:19:07.398 "adrfam": "IPv4", 00:19:07.398 "traddr": "10.0.0.2", 00:19:07.398 "trsvcid": "4420" 00:19:07.398 }, 00:19:07.398 "peer_address": { 00:19:07.398 "trtype": "TCP", 00:19:07.398 "adrfam": "IPv4", 00:19:07.398 "traddr": "10.0.0.1", 00:19:07.398 "trsvcid": "50662" 00:19:07.398 }, 00:19:07.398 "auth": { 00:19:07.398 "state": "completed", 00:19:07.398 "digest": "sha256", 00:19:07.398 "dhgroup": "ffdhe2048" 00:19:07.398 } 00:19:07.398 } 00:19:07.398 ]' 00:19:07.398 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:07.398 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:07.398 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:07.398 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:19:07.398 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:07.398 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:07.398 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:07.398 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.656 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjdjYzZlM2U5MGY1YjM2Y2VjNTBiYTlkZmIxZWE5OTRiMDU1MDg1YWI5MzllYjQ0Z8wjlA==: --dhchap-ctrl-secret DHHC-1:03:YmVmZTNlYTJjNDU1Y2NmZWRlODM1YTllN2YxY2I4ZDA0YzExN2NlNDI1YWY2ZTc3MWNmYWRiOTlhZDBjNDgxYf8D4oQ=: 00:19:07.656 11:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MjdjYzZlM2U5MGY1YjM2Y2VjNTBiYTlkZmIxZWE5OTRiMDU1MDg1YWI5MzllYjQ0Z8wjlA==: --dhchap-ctrl-secret DHHC-1:03:YmVmZTNlYTJjNDU1Y2NmZWRlODM1YTllN2YxY2I4ZDA0YzExN2NlNDI1YWY2ZTc3MWNmYWRiOTlhZDBjNDgxYf8D4oQ=: 00:19:08.587 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:08.587 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:08.587 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:08.587 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.587 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.587 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.587 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:08.587 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:08.587 11:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:08.845 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:19:08.845 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:08.845 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:08.845 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:08.845 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:08.845 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:08.845 11:31:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:08.845 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.845 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.845 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.845 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:08.845 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:08.845 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:09.411 00:19:09.411 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:09.411 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:09.411 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:09.669 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.669 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:09.669 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.669 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.669 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.669 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:09.669 { 00:19:09.669 "cntlid": 11, 00:19:09.669 "qid": 0, 00:19:09.669 "state": "enabled", 00:19:09.669 "thread": "nvmf_tgt_poll_group_000", 00:19:09.669 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:09.669 "listen_address": { 00:19:09.669 "trtype": "TCP", 00:19:09.669 "adrfam": "IPv4", 00:19:09.669 "traddr": "10.0.0.2", 00:19:09.669 "trsvcid": "4420" 00:19:09.669 }, 00:19:09.669 "peer_address": { 00:19:09.669 "trtype": "TCP", 00:19:09.669 "adrfam": "IPv4", 00:19:09.669 "traddr": "10.0.0.1", 00:19:09.669 "trsvcid": "50668" 00:19:09.669 }, 00:19:09.669 "auth": { 00:19:09.669 "state": "completed", 00:19:09.669 "digest": "sha256", 00:19:09.669 "dhgroup": "ffdhe2048" 00:19:09.669 } 00:19:09.669 } 00:19:09.669 ]' 00:19:09.669 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:09.669 11:31:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:09.669 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:09.669 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:09.669 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:09.669 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:09.669 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:09.669 11:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:09.927 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTJkNDE3NGEzYThiMzE3OTQxYzJhMzJhODYwNDcyNDE6+UtJ: --dhchap-ctrl-secret DHHC-1:02:NTUwZTE2ZjFmMGMzNzJjZGU5OTdkMGZiNTM2OGQzN2ZjYTllNGRhZWRhMjA1YWJkR4NUhw==: 00:19:09.927 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YTJkNDE3NGEzYThiMzE3OTQxYzJhMzJhODYwNDcyNDE6+UtJ: --dhchap-ctrl-secret DHHC-1:02:NTUwZTE2ZjFmMGMzNzJjZGU5OTdkMGZiNTM2OGQzN2ZjYTllNGRhZWRhMjA1YWJkR4NUhw==: 00:19:10.859 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.859 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.859 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:10.859 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.859 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.859 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.859 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:10.859 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:10.859 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:11.425 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:19:11.425 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:11.425 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:11.425 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:11.425 11:31:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:11.425 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:11.425 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:11.425 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.425 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.425 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.425 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:11.425 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:11.425 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:11.685 00:19:11.685 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:11.685 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:11.685 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:11.975 11:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.975 11:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:11.975 11:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.975 11:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.975 11:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.975 11:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:11.975 { 00:19:11.975 "cntlid": 13, 00:19:11.975 "qid": 0, 00:19:11.975 "state": "enabled", 00:19:11.975 "thread": "nvmf_tgt_poll_group_000", 00:19:11.975 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:11.975 "listen_address": { 00:19:11.975 "trtype": "TCP", 00:19:11.975 "adrfam": "IPv4", 00:19:11.975 "traddr": "10.0.0.2", 00:19:11.975 "trsvcid": "4420" 00:19:11.975 }, 00:19:11.975 "peer_address": { 00:19:11.975 "trtype": "TCP", 00:19:11.975 "adrfam": "IPv4", 00:19:11.975 "traddr": "10.0.0.1", 00:19:11.975 "trsvcid": "50688" 00:19:11.975 }, 00:19:11.975 "auth": { 00:19:11.975 "state": "completed", 00:19:11.975 "digest": 
"sha256", 00:19:11.975 "dhgroup": "ffdhe2048" 00:19:11.975 } 00:19:11.975 } 00:19:11.975 ]' 00:19:11.975 11:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:11.975 11:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:11.975 11:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:11.975 11:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:11.975 11:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:11.975 11:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:11.975 11:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:11.975 11:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:12.258 11:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTA4MjZlYTZiNzYxZGM3NWYyNGVmYmFjM2M0MjY4YTFhYWM2NmIzNDQzM2NmMTdl50iUxg==: --dhchap-ctrl-secret DHHC-1:01:MWMyNjk0ZTJjMDdmOTg4MGYzMTViYjczOTRmYjdlNmRN/1BA: 00:19:12.258 11:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NTA4MjZlYTZiNzYxZGM3NWYyNGVmYmFjM2M0MjY4YTFhYWM2NmIzNDQzM2NmMTdl50iUxg==: --dhchap-ctrl-secret DHHC-1:01:MWMyNjk0ZTJjMDdmOTg4MGYzMTViYjczOTRmYjdlNmRN/1BA: 00:19:13.191 11:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:13.191 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:13.191 11:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:13.191 11:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.191 11:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.191 11:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.191 11:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:13.191 11:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:13.191 11:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:13.450 11:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:19:13.450 11:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:13.450 11:31:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:13.450 11:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:13.450 11:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:13.450 11:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:13.450 11:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:13.450 11:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.450 11:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.450 11:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.450 11:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:13.450 11:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:13.450 11:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:14.016 00:19:14.016 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:14.016 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:14.016 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:14.274 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.274 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:14.274 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.274 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.274 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.274 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:14.274 { 00:19:14.274 "cntlid": 15, 00:19:14.274 "qid": 0, 00:19:14.274 "state": "enabled", 00:19:14.274 "thread": "nvmf_tgt_poll_group_000", 00:19:14.274 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:14.274 "listen_address": { 00:19:14.274 "trtype": "TCP", 00:19:14.274 "adrfam": "IPv4", 00:19:14.274 "traddr": "10.0.0.2", 00:19:14.274 "trsvcid": "4420" 00:19:14.274 }, 00:19:14.274 "peer_address": { 00:19:14.274 "trtype": "TCP", 00:19:14.274 "adrfam": "IPv4", 00:19:14.274 "traddr": "10.0.0.1", 00:19:14.274 
"trsvcid": "50718" 00:19:14.274 }, 00:19:14.274 "auth": { 00:19:14.274 "state": "completed", 00:19:14.274 "digest": "sha256", 00:19:14.274 "dhgroup": "ffdhe2048" 00:19:14.274 } 00:19:14.274 } 00:19:14.274 ]' 00:19:14.274 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:14.274 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:14.274 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:14.274 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:14.274 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:14.274 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:14.274 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:14.274 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:14.531 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWQ1ZmE2N2JmNmZmOWYyOTJjYTA4OTgxNDJiZjFmNWU3NGFlNmQ4NjUzODQzYThkZDgyOGJjZjA5MTAzYzVmNiitUgU=: 00:19:14.532 11:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MWQ1ZmE2N2JmNmZmOWYyOTJjYTA4OTgxNDJiZjFmNWU3NGFlNmQ4NjUzODQzYThkZDgyOGJjZjA5MTAzYzVmNiitUgU=: 00:19:15.464 11:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:15.464 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:15.464 11:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:15.464 11:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.464 11:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.464 11:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.464 11:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:15.464 11:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:15.464 11:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:15.464 11:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:16.030 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:19:16.030 11:31:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:16.030 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:16.030 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:16.030 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:16.030 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:16.030 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.030 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.030 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.030 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.030 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.030 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.030 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.288 00:19:16.288 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:16.288 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:16.288 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:16.546 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.546 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:16.546 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.546 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.546 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.546 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:16.546 { 00:19:16.546 "cntlid": 17, 00:19:16.546 "qid": 0, 00:19:16.546 "state": "enabled", 00:19:16.546 "thread": "nvmf_tgt_poll_group_000", 00:19:16.546 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:16.546 "listen_address": { 00:19:16.546 "trtype": "TCP", 00:19:16.546 "adrfam": "IPv4", 
00:19:16.546 "traddr": "10.0.0.2", 00:19:16.546 "trsvcid": "4420" 00:19:16.546 }, 00:19:16.546 "peer_address": { 00:19:16.546 "trtype": "TCP", 00:19:16.546 "adrfam": "IPv4", 00:19:16.546 "traddr": "10.0.0.1", 00:19:16.546 "trsvcid": "58474" 00:19:16.546 }, 00:19:16.546 "auth": { 00:19:16.546 "state": "completed", 00:19:16.546 "digest": "sha256", 00:19:16.546 "dhgroup": "ffdhe3072" 00:19:16.546 } 00:19:16.546 } 00:19:16.546 ]' 00:19:16.546 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:16.546 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:16.546 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:16.546 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:16.546 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:16.546 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:16.546 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:16.546 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:16.804 11:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjdjYzZlM2U5MGY1YjM2Y2VjNTBiYTlkZmIxZWE5OTRiMDU1MDg1YWI5MzllYjQ0Z8wjlA==: --dhchap-ctrl-secret DHHC-1:03:YmVmZTNlYTJjNDU1Y2NmZWRlODM1YTllN2YxY2I4ZDA0YzExN2NlNDI1YWY2ZTc3MWNmYWRiOTlhZDBjNDgxYf8D4oQ=: 00:19:16.804 11:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MjdjYzZlM2U5MGY1YjM2Y2VjNTBiYTlkZmIxZWE5OTRiMDU1MDg1YWI5MzllYjQ0Z8wjlA==: --dhchap-ctrl-secret DHHC-1:03:YmVmZTNlYTJjNDU1Y2NmZWRlODM1YTllN2YxY2I4ZDA0YzExN2NlNDI1YWY2ZTc3MWNmYWRiOTlhZDBjNDgxYf8D4oQ=: 00:19:17.737 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:17.737 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:17.995 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:17.995 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.995 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.995 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.995 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:17.995 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:17.995 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:18.252 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:19:18.252 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:18.252 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:18.252 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:18.252 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:18.252 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:18.252 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.252 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.252 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.252 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.252 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.252 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.252 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.510 00:19:18.510 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:18.510 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:18.510 11:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:18.768 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.768 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:18.768 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.768 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.768 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.768 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:18.768 { 
00:19:18.768 "cntlid": 19, 00:19:18.768 "qid": 0, 00:19:18.768 "state": "enabled", 00:19:18.768 "thread": "nvmf_tgt_poll_group_000", 00:19:18.768 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:18.768 "listen_address": { 00:19:18.768 "trtype": "TCP", 00:19:18.768 "adrfam": "IPv4", 00:19:18.768 "traddr": "10.0.0.2", 00:19:18.768 "trsvcid": "4420" 00:19:18.768 }, 00:19:18.768 "peer_address": { 00:19:18.768 "trtype": "TCP", 00:19:18.768 "adrfam": "IPv4", 00:19:18.768 "traddr": "10.0.0.1", 00:19:18.768 "trsvcid": "58494" 00:19:18.768 }, 00:19:18.768 "auth": { 00:19:18.768 "state": "completed", 00:19:18.768 "digest": "sha256", 00:19:18.768 "dhgroup": "ffdhe3072" 00:19:18.768 } 00:19:18.768 } 00:19:18.768 ]' 00:19:18.768 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:18.768 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:18.768 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:18.768 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:18.768 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:19.026 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:19.026 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:19.026 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:19.283 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTJkNDE3NGEzYThiMzE3OTQxYzJhMzJhODYwNDcyNDE6+UtJ: --dhchap-ctrl-secret DHHC-1:02:NTUwZTE2ZjFmMGMzNzJjZGU5OTdkMGZiNTM2OGQzN2ZjYTllNGRhZWRhMjA1YWJkR4NUhw==: 00:19:19.283 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YTJkNDE3NGEzYThiMzE3OTQxYzJhMzJhODYwNDcyNDE6+UtJ: --dhchap-ctrl-secret DHHC-1:02:NTUwZTE2ZjFmMGMzNzJjZGU5OTdkMGZiNTM2OGQzN2ZjYTllNGRhZWRhMjA1YWJkR4NUhw==: 00:19:20.215 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:20.215 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:20.215 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:20.215 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.215 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.215 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.215 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:20.215 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:20.215 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:20.472 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:19:20.472 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:20.472 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:20.472 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:20.472 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:20.472 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:20.472 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:20.472 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.472 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.472 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.472 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:20.472 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:20.473 11:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:20.730 00:19:20.988 11:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:20.988 11:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:20.988 11:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:21.246 11:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.246 11:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:21.246 11:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.246 11:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.246 11:31:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.246 11:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:21.246 { 00:19:21.246 "cntlid": 21, 00:19:21.246 "qid": 0, 00:19:21.246 "state": "enabled", 00:19:21.246 "thread": "nvmf_tgt_poll_group_000", 00:19:21.246 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:21.246 "listen_address": { 00:19:21.246 "trtype": "TCP", 00:19:21.246 "adrfam": "IPv4", 00:19:21.246 "traddr": "10.0.0.2", 00:19:21.246 "trsvcid": "4420" 00:19:21.246 }, 00:19:21.246 "peer_address": { 00:19:21.246 "trtype": "TCP", 00:19:21.246 "adrfam": "IPv4", 00:19:21.246 "traddr": "10.0.0.1", 00:19:21.246 "trsvcid": "58530" 00:19:21.246 }, 00:19:21.246 "auth": { 00:19:21.246 "state": "completed", 00:19:21.246 "digest": "sha256", 00:19:21.246 "dhgroup": "ffdhe3072" 00:19:21.246 } 00:19:21.246 } 00:19:21.246 ]' 00:19:21.246 11:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:21.246 11:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:21.246 11:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:21.246 11:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:21.246 11:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:21.246 11:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:21.247 11:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:21.247 11:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:21.504 11:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTA4MjZlYTZiNzYxZGM3NWYyNGVmYmFjM2M0MjY4YTFhYWM2NmIzNDQzM2NmMTdl50iUxg==: --dhchap-ctrl-secret DHHC-1:01:MWMyNjk0ZTJjMDdmOTg4MGYzMTViYjczOTRmYjdlNmRN/1BA: 00:19:21.505 11:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NTA4MjZlYTZiNzYxZGM3NWYyNGVmYmFjM2M0MjY4YTFhYWM2NmIzNDQzM2NmMTdl50iUxg==: --dhchap-ctrl-secret DHHC-1:01:MWMyNjk0ZTJjMDdmOTg4MGYzMTViYjczOTRmYjdlNmRN/1BA: 00:19:22.440 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:22.440 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:22.440 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:22.440 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.440 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.698 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:19:22.698 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:22.698 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:22.698 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:22.956 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:19:22.956 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:22.956 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:22.956 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:22.956 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:22.956 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:22.956 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:22.956 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.956 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.956 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.956 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:22.956 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:22.956 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:23.214 00:19:23.214 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:23.214 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:23.214 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:23.472 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.472 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:23.472 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.472 11:31:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.472 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.472 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:23.472 { 00:19:23.472 "cntlid": 23, 00:19:23.472 "qid": 0, 00:19:23.472 "state": "enabled", 00:19:23.472 "thread": "nvmf_tgt_poll_group_000", 00:19:23.472 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:23.472 "listen_address": { 00:19:23.472 "trtype": "TCP", 00:19:23.472 "adrfam": "IPv4", 00:19:23.472 "traddr": "10.0.0.2", 00:19:23.472 "trsvcid": "4420" 00:19:23.472 }, 00:19:23.472 "peer_address": { 00:19:23.472 "trtype": "TCP", 00:19:23.472 "adrfam": "IPv4", 00:19:23.472 "traddr": "10.0.0.1", 00:19:23.472 "trsvcid": "58562" 00:19:23.472 }, 00:19:23.472 "auth": { 00:19:23.472 "state": "completed", 00:19:23.472 "digest": "sha256", 00:19:23.472 "dhgroup": "ffdhe3072" 00:19:23.472 } 00:19:23.472 } 00:19:23.472 ]' 00:19:23.472 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:23.472 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:23.472 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:23.472 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:23.472 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:23.472 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:23.472 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:23.472 11:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:24.041 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWQ1ZmE2N2JmNmZmOWYyOTJjYTA4OTgxNDJiZjFmNWU3NGFlNmQ4NjUzODQzYThkZDgyOGJjZjA5MTAzYzVmNiitUgU=: 00:19:24.041 11:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MWQ1ZmE2N2JmNmZmOWYyOTJjYTA4OTgxNDJiZjFmNWU3NGFlNmQ4NjUzODQzYThkZDgyOGJjZjA5MTAzYzVmNiitUgU=: 00:19:24.977 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:24.977 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:24.977 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:24.977 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.977 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.977 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:19:24.977 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:24.977 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:24.977 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:24.977 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:25.235 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:19:25.235 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:25.235 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:25.235 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:25.235 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:25.235 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:25.235 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:25.235 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.235 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.235 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.235 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:25.235 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:25.235 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:25.493 00:19:25.493 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:25.493 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:25.493 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:25.751 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.751 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:25.751 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.751 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.751 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.751 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:25.751 { 00:19:25.751 "cntlid": 25, 00:19:25.751 "qid": 0, 00:19:25.751 "state": "enabled", 00:19:25.751 "thread": "nvmf_tgt_poll_group_000", 00:19:25.751 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:25.751 "listen_address": { 00:19:25.751 "trtype": "TCP", 00:19:25.751 "adrfam": "IPv4", 00:19:25.751 "traddr": "10.0.0.2", 00:19:25.751 "trsvcid": "4420" 00:19:25.751 }, 00:19:25.751 "peer_address": { 00:19:25.751 "trtype": "TCP", 00:19:25.751 "adrfam": "IPv4", 00:19:25.751 "traddr": "10.0.0.1", 00:19:25.751 "trsvcid": "53172" 00:19:25.751 }, 00:19:25.751 "auth": { 00:19:25.751 "state": "completed", 00:19:25.751 "digest": "sha256", 00:19:25.751 "dhgroup": "ffdhe4096" 00:19:25.751 } 00:19:25.751 } 00:19:25.751 ]' 00:19:25.751 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:25.751 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:25.751 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:26.009 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:26.009 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:26.009 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:26.009 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.009 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:26.267 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjdjYzZlM2U5MGY1YjM2Y2VjNTBiYTlkZmIxZWE5OTRiMDU1MDg1YWI5MzllYjQ0Z8wjlA==: --dhchap-ctrl-secret DHHC-1:03:YmVmZTNlYTJjNDU1Y2NmZWRlODM1YTllN2YxY2I4ZDA0YzExN2NlNDI1YWY2ZTc3MWNmYWRiOTlhZDBjNDgxYf8D4oQ=: 00:19:26.268 11:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MjdjYzZlM2U5MGY1YjM2Y2VjNTBiYTlkZmIxZWE5OTRiMDU1MDg1YWI5MzllYjQ0Z8wjlA==: --dhchap-ctrl-secret DHHC-1:03:YmVmZTNlYTJjNDU1Y2NmZWRlODM1YTllN2YxY2I4ZDA0YzExN2NlNDI1YWY2ZTc3MWNmYWRiOTlhZDBjNDgxYf8D4oQ=: 00:19:27.204 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:27.204 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:27.204 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:27.204 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.204 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.204 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.204 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:27.204 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:27.204 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:27.461 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:19:27.461 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:27.461 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:27.461 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:27.461 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:27.461 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:27.461 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.461 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.461 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.461 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.461 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.461 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.461 11:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:28.030 00:19:28.030 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:28.030 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:28.030 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.288 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.288 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:28.288 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.288 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.288 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.288 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:28.288 { 00:19:28.288 "cntlid": 27, 00:19:28.288 "qid": 0, 00:19:28.288 "state": "enabled", 00:19:28.288 "thread": "nvmf_tgt_poll_group_000", 00:19:28.288 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:28.288 "listen_address": { 00:19:28.288 "trtype": "TCP", 00:19:28.288 "adrfam": "IPv4", 00:19:28.288 "traddr": "10.0.0.2", 00:19:28.288 "trsvcid": "4420" 00:19:28.288 }, 00:19:28.288 "peer_address": { 00:19:28.288 "trtype": "TCP", 00:19:28.288 "adrfam": "IPv4", 00:19:28.288 "traddr": "10.0.0.1", 00:19:28.288 "trsvcid": "53202" 00:19:28.288 }, 00:19:28.288 "auth": { 00:19:28.288 "state": "completed", 00:19:28.288 "digest": "sha256", 00:19:28.288 "dhgroup": "ffdhe4096" 00:19:28.288 } 00:19:28.288 } 00:19:28.288 ]' 00:19:28.288 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:28.288 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:28.288 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:28.288 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:28.288 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:28.288 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:28.288 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:28.288 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:28.547 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTJkNDE3NGEzYThiMzE3OTQxYzJhMzJhODYwNDcyNDE6+UtJ: --dhchap-ctrl-secret DHHC-1:02:NTUwZTE2ZjFmMGMzNzJjZGU5OTdkMGZiNTM2OGQzN2ZjYTllNGRhZWRhMjA1YWJkR4NUhw==: 00:19:28.547 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YTJkNDE3NGEzYThiMzE3OTQxYzJhMzJhODYwNDcyNDE6+UtJ: --dhchap-ctrl-secret DHHC-1:02:NTUwZTE2ZjFmMGMzNzJjZGU5OTdkMGZiNTM2OGQzN2ZjYTllNGRhZWRhMjA1YWJkR4NUhw==: 00:19:29.482 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:19:29.482 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:29.482 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:29.482 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.482 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.482 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.482 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:29.482 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:29.482 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:29.740 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:19:29.740 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:29.740 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:29.740 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:29.740 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:29.740 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:29.740 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:29.740 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.740 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.998 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.998 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:29.998 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:29.998 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:30.256 00:19:30.256 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
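The trace above has just completed one connect_authenticate iteration (sha256 / ffdhe4096 / key2). As a reading aid, here is a minimal bash sketch of that per-iteration flow, reconstructed only from the commands visible in this log: the hostrpc helper is rebuilt from its expansion at target/auth.sh@31, rpc_cmd is the target-side RPC wrapper the test framework defines elsewhere (assumed available here), and the digest/dhgroup/keyid variables are illustrative stand-ins rather than the literal target/auth.sh source.

# Sketch of one connect_authenticate iteration, reconstructed from this trace.
# Paths, NQNs and the 10.0.0.2:4420 listener are copied verbatim from the log;
# rpc_cmd (target-side RPC) is assumed to be provided by the surrounding suite.
hostrpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }

digest=sha256; dhgroup=ffdhe4096; keyid=2

# 1. Restrict the host-side bdev_nvme layer to the digest/dhgroup under test.
hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# 2. Allow the host on the subsystem with the key pair for this iteration.
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# 3. Attach a controller over TCP, which forces DH-HMAC-CHAP to run.
hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# 4. Verify the controller exists and the negotiated auth parameters match.
hostrpc bdev_nvme_get_controllers | jq -r '.[].name'      # expect: nvme0
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
jq -r '.[0].auth.digest'  <<< "$qpairs"                   # expect: sha256
jq -r '.[0].auth.dhgroup' <<< "$qpairs"                   # expect: ffdhe4096
jq -r '.[0].auth.state'   <<< "$qpairs"                   # expect: completed

# 5. Tear down before the next digest/dhgroup/key combination.
hostrpc bdev_nvme_detach_controller nvme0

For each secret pair the trace also exercises the kernel path, visible in this log as nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 ... --dhchap-secret DHHC-1:... --dhchap-ctrl-secret DHHC-1:... followed by nvme disconnect -n nqn.2024-03.io.spdk:cnode0, before the host is removed again with nvmf_subsystem_remove_host and the next iteration begins.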
00:19:30.256 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:30.256 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:30.514 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.514 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:30.514 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.514 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.514 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.514 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:30.514 { 00:19:30.514 "cntlid": 29, 00:19:30.514 "qid": 0, 00:19:30.514 "state": "enabled", 00:19:30.514 "thread": "nvmf_tgt_poll_group_000", 00:19:30.514 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:30.514 "listen_address": { 00:19:30.514 "trtype": "TCP", 00:19:30.514 "adrfam": "IPv4", 00:19:30.514 "traddr": "10.0.0.2", 00:19:30.514 "trsvcid": "4420" 00:19:30.514 }, 00:19:30.514 "peer_address": { 00:19:30.514 "trtype": "TCP", 00:19:30.514 "adrfam": "IPv4", 00:19:30.514 "traddr": "10.0.0.1", 00:19:30.514 "trsvcid": "53230" 00:19:30.514 }, 00:19:30.514 "auth": { 00:19:30.514 "state": "completed", 00:19:30.514 "digest": "sha256", 00:19:30.514 "dhgroup": "ffdhe4096" 00:19:30.514 } 00:19:30.514 } 00:19:30.514 ]' 00:19:30.514 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:30.514 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:30.514 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:30.772 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:30.772 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:30.772 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:30.772 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:30.772 11:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.030 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTA4MjZlYTZiNzYxZGM3NWYyNGVmYmFjM2M0MjY4YTFhYWM2NmIzNDQzM2NmMTdl50iUxg==: --dhchap-ctrl-secret DHHC-1:01:MWMyNjk0ZTJjMDdmOTg4MGYzMTViYjczOTRmYjdlNmRN/1BA: 00:19:31.030 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NTA4MjZlYTZiNzYxZGM3NWYyNGVmYmFjM2M0MjY4YTFhYWM2NmIzNDQzM2NmMTdl50iUxg==: 
--dhchap-ctrl-secret DHHC-1:01:MWMyNjk0ZTJjMDdmOTg4MGYzMTViYjczOTRmYjdlNmRN/1BA: 00:19:31.967 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:31.967 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:31.967 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:31.967 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.967 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.967 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.967 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:31.967 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:31.967 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:32.225 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:19:32.225 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:32.225 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:32.225 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:32.225 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:32.225 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:32.225 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:32.225 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.225 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.225 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.225 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:32.225 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:32.225 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:32.799 00:19:32.799 11:31:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:32.799 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:32.799 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.058 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.058 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.058 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.058 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.058 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.058 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:33.058 { 00:19:33.058 "cntlid": 31, 00:19:33.058 "qid": 0, 00:19:33.058 "state": "enabled", 00:19:33.058 "thread": "nvmf_tgt_poll_group_000", 00:19:33.058 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:33.058 "listen_address": { 00:19:33.058 "trtype": "TCP", 00:19:33.058 "adrfam": "IPv4", 00:19:33.058 "traddr": "10.0.0.2", 00:19:33.058 "trsvcid": "4420" 00:19:33.058 }, 00:19:33.058 "peer_address": { 00:19:33.058 "trtype": "TCP", 00:19:33.058 "adrfam": "IPv4", 00:19:33.058 "traddr": "10.0.0.1", 00:19:33.058 "trsvcid": "53274" 00:19:33.058 }, 00:19:33.058 "auth": { 00:19:33.058 "state": "completed", 00:19:33.058 "digest": "sha256", 00:19:33.058 "dhgroup": "ffdhe4096" 00:19:33.058 } 00:19:33.058 } 00:19:33.058 ]' 00:19:33.058 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:33.058 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:33.059 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:33.059 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:33.059 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:33.059 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.059 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.059 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.317 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWQ1ZmE2N2JmNmZmOWYyOTJjYTA4OTgxNDJiZjFmNWU3NGFlNmQ4NjUzODQzYThkZDgyOGJjZjA5MTAzYzVmNiitUgU=: 00:19:33.317 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret 
DHHC-1:03:MWQ1ZmE2N2JmNmZmOWYyOTJjYTA4OTgxNDJiZjFmNWU3NGFlNmQ4NjUzODQzYThkZDgyOGJjZjA5MTAzYzVmNiitUgU=: 00:19:34.253 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.253 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:34.253 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:34.254 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.254 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.254 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.254 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:34.254 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:34.254 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:34.254 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:34.512 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:19:34.512 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:34.512 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:34.512 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:34.512 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:34.512 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:34.512 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:34.512 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.512 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.512 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.512 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:34.512 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:34.512 11:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.080 00:19:35.080 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:35.080 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:35.080 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.647 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.647 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:35.647 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.647 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.647 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.647 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:35.647 { 00:19:35.647 "cntlid": 33, 00:19:35.647 "qid": 0, 00:19:35.647 "state": "enabled", 00:19:35.647 "thread": "nvmf_tgt_poll_group_000", 00:19:35.647 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:35.647 "listen_address": { 00:19:35.647 "trtype": "TCP", 00:19:35.647 "adrfam": "IPv4", 00:19:35.647 "traddr": "10.0.0.2", 00:19:35.647 "trsvcid": "4420" 00:19:35.647 }, 00:19:35.647 "peer_address": { 00:19:35.647 "trtype": "TCP", 00:19:35.647 "adrfam": "IPv4", 00:19:35.647 "traddr": "10.0.0.1", 00:19:35.647 "trsvcid": "43778" 00:19:35.647 }, 00:19:35.647 "auth": { 00:19:35.647 "state": "completed", 00:19:35.647 "digest": "sha256", 00:19:35.647 "dhgroup": "ffdhe6144" 00:19:35.647 } 00:19:35.647 } 00:19:35.647 ]' 00:19:35.647 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:35.647 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:35.647 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:35.647 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:35.647 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:35.647 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:35.647 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:35.647 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:35.905 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjdjYzZlM2U5MGY1YjM2Y2VjNTBiYTlkZmIxZWE5OTRiMDU1MDg1YWI5MzllYjQ0Z8wjlA==: --dhchap-ctrl-secret 
DHHC-1:03:YmVmZTNlYTJjNDU1Y2NmZWRlODM1YTllN2YxY2I4ZDA0YzExN2NlNDI1YWY2ZTc3MWNmYWRiOTlhZDBjNDgxYf8D4oQ=: 00:19:35.905 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MjdjYzZlM2U5MGY1YjM2Y2VjNTBiYTlkZmIxZWE5OTRiMDU1MDg1YWI5MzllYjQ0Z8wjlA==: --dhchap-ctrl-secret DHHC-1:03:YmVmZTNlYTJjNDU1Y2NmZWRlODM1YTllN2YxY2I4ZDA0YzExN2NlNDI1YWY2ZTc3MWNmYWRiOTlhZDBjNDgxYf8D4oQ=: 00:19:36.840 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:36.840 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:36.840 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:36.840 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.840 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.840 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.840 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:36.840 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:36.840 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:37.098 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:19:37.098 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:37.098 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:37.098 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:37.098 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:37.098 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:37.098 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.098 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.098 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.098 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.098 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.098 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.098 11:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.666 00:19:37.666 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:37.666 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:37.667 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:37.926 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.185 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.185 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.185 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.185 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.185 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:38.185 { 00:19:38.185 "cntlid": 35, 00:19:38.185 "qid": 0, 00:19:38.185 "state": "enabled", 00:19:38.185 "thread": "nvmf_tgt_poll_group_000", 00:19:38.185 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:38.185 "listen_address": { 00:19:38.185 "trtype": "TCP", 00:19:38.185 "adrfam": "IPv4", 00:19:38.185 "traddr": "10.0.0.2", 00:19:38.185 "trsvcid": "4420" 00:19:38.185 }, 00:19:38.185 "peer_address": { 00:19:38.185 "trtype": "TCP", 00:19:38.185 "adrfam": "IPv4", 00:19:38.185 "traddr": "10.0.0.1", 00:19:38.185 "trsvcid": "43810" 00:19:38.185 }, 00:19:38.185 "auth": { 00:19:38.185 "state": "completed", 00:19:38.185 "digest": "sha256", 00:19:38.185 "dhgroup": "ffdhe6144" 00:19:38.185 } 00:19:38.185 } 00:19:38.185 ]' 00:19:38.185 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:38.185 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:38.185 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:38.185 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:38.185 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:38.185 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.185 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.185 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.443 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTJkNDE3NGEzYThiMzE3OTQxYzJhMzJhODYwNDcyNDE6+UtJ: --dhchap-ctrl-secret DHHC-1:02:NTUwZTE2ZjFmMGMzNzJjZGU5OTdkMGZiNTM2OGQzN2ZjYTllNGRhZWRhMjA1YWJkR4NUhw==: 00:19:38.443 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YTJkNDE3NGEzYThiMzE3OTQxYzJhMzJhODYwNDcyNDE6+UtJ: --dhchap-ctrl-secret DHHC-1:02:NTUwZTE2ZjFmMGMzNzJjZGU5OTdkMGZiNTM2OGQzN2ZjYTllNGRhZWRhMjA1YWJkR4NUhw==: 00:19:39.386 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.386 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.386 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:39.386 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.386 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.386 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.386 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:39.386 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:39.386 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:39.645 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:19:39.645 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:39.645 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:39.645 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:39.645 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:39.645 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.645 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:39.645 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.645 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.645 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.645 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:39.645 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:39.645 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:40.211 00:19:40.211 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:40.211 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:40.211 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.469 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.469 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:40.469 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.469 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.469 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.469 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:40.469 { 00:19:40.469 "cntlid": 37, 00:19:40.469 "qid": 0, 00:19:40.469 "state": "enabled", 00:19:40.469 "thread": "nvmf_tgt_poll_group_000", 00:19:40.469 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:40.469 "listen_address": { 00:19:40.469 "trtype": "TCP", 00:19:40.469 "adrfam": "IPv4", 00:19:40.469 "traddr": "10.0.0.2", 00:19:40.469 "trsvcid": "4420" 00:19:40.469 }, 00:19:40.469 "peer_address": { 00:19:40.469 "trtype": "TCP", 00:19:40.469 "adrfam": "IPv4", 00:19:40.469 "traddr": "10.0.0.1", 00:19:40.469 "trsvcid": "43838" 00:19:40.469 }, 00:19:40.469 "auth": { 00:19:40.469 "state": "completed", 00:19:40.469 "digest": "sha256", 00:19:40.469 "dhgroup": "ffdhe6144" 00:19:40.469 } 00:19:40.469 } 00:19:40.469 ]' 00:19:40.469 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:40.727 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:40.727 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:40.727 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:40.727 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:40.727 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.727 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:19:40.727 11:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.985 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTA4MjZlYTZiNzYxZGM3NWYyNGVmYmFjM2M0MjY4YTFhYWM2NmIzNDQzM2NmMTdl50iUxg==: --dhchap-ctrl-secret DHHC-1:01:MWMyNjk0ZTJjMDdmOTg4MGYzMTViYjczOTRmYjdlNmRN/1BA: 00:19:40.985 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NTA4MjZlYTZiNzYxZGM3NWYyNGVmYmFjM2M0MjY4YTFhYWM2NmIzNDQzM2NmMTdl50iUxg==: --dhchap-ctrl-secret DHHC-1:01:MWMyNjk0ZTJjMDdmOTg4MGYzMTViYjczOTRmYjdlNmRN/1BA: 00:19:42.016 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:42.016 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:42.016 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:42.016 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.016 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.016 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.016 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:42.016 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:42.016 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:42.300 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:19:42.300 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:42.300 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:42.300 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:42.300 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:42.300 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:42.300 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:42.300 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.300 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.300 11:31:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.300 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:42.300 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:42.300 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:42.867 00:19:42.867 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:42.867 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:42.867 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.125 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.125 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:43.125 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.125 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.125 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.125 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:43.125 { 00:19:43.125 "cntlid": 39, 00:19:43.125 "qid": 0, 00:19:43.125 "state": "enabled", 00:19:43.125 "thread": "nvmf_tgt_poll_group_000", 00:19:43.125 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:43.125 "listen_address": { 00:19:43.125 "trtype": "TCP", 00:19:43.125 "adrfam": "IPv4", 00:19:43.125 "traddr": "10.0.0.2", 00:19:43.125 "trsvcid": "4420" 00:19:43.125 }, 00:19:43.125 "peer_address": { 00:19:43.125 "trtype": "TCP", 00:19:43.125 "adrfam": "IPv4", 00:19:43.125 "traddr": "10.0.0.1", 00:19:43.125 "trsvcid": "43864" 00:19:43.125 }, 00:19:43.125 "auth": { 00:19:43.125 "state": "completed", 00:19:43.125 "digest": "sha256", 00:19:43.125 "dhgroup": "ffdhe6144" 00:19:43.125 } 00:19:43.125 } 00:19:43.125 ]' 00:19:43.125 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:43.125 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:43.125 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:43.125 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:43.125 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:43.125 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:19:43.125 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.125 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.384 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWQ1ZmE2N2JmNmZmOWYyOTJjYTA4OTgxNDJiZjFmNWU3NGFlNmQ4NjUzODQzYThkZDgyOGJjZjA5MTAzYzVmNiitUgU=: 00:19:43.384 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MWQ1ZmE2N2JmNmZmOWYyOTJjYTA4OTgxNDJiZjFmNWU3NGFlNmQ4NjUzODQzYThkZDgyOGJjZjA5MTAzYzVmNiitUgU=: 00:19:44.760 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.760 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.760 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:44.760 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.760 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.761 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.761 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:44.761 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:44.761 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:44.761 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:44.761 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:19:44.761 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:44.761 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:44.761 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:44.761 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:44.761 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:44.761 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:44.761 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:19:44.761 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.761 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.761 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:44.761 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:44.761 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:45.699 00:19:45.699 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:45.699 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:45.699 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.957 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.957 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.957 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.957 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.957 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.957 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:45.957 { 00:19:45.957 "cntlid": 41, 00:19:45.957 "qid": 0, 00:19:45.957 "state": "enabled", 00:19:45.957 "thread": "nvmf_tgt_poll_group_000", 00:19:45.957 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:45.957 "listen_address": { 00:19:45.957 "trtype": "TCP", 00:19:45.957 "adrfam": "IPv4", 00:19:45.957 "traddr": "10.0.0.2", 00:19:45.957 "trsvcid": "4420" 00:19:45.957 }, 00:19:45.957 "peer_address": { 00:19:45.957 "trtype": "TCP", 00:19:45.957 "adrfam": "IPv4", 00:19:45.957 "traddr": "10.0.0.1", 00:19:45.957 "trsvcid": "51888" 00:19:45.957 }, 00:19:45.957 "auth": { 00:19:45.957 "state": "completed", 00:19:45.957 "digest": "sha256", 00:19:45.957 "dhgroup": "ffdhe8192" 00:19:45.957 } 00:19:45.957 } 00:19:45.957 ]' 00:19:45.957 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:45.957 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:45.957 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:45.957 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:45.957 11:31:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:45.957 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.957 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.957 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:46.526 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjdjYzZlM2U5MGY1YjM2Y2VjNTBiYTlkZmIxZWE5OTRiMDU1MDg1YWI5MzllYjQ0Z8wjlA==: --dhchap-ctrl-secret DHHC-1:03:YmVmZTNlYTJjNDU1Y2NmZWRlODM1YTllN2YxY2I4ZDA0YzExN2NlNDI1YWY2ZTc3MWNmYWRiOTlhZDBjNDgxYf8D4oQ=: 00:19:46.526 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MjdjYzZlM2U5MGY1YjM2Y2VjNTBiYTlkZmIxZWE5OTRiMDU1MDg1YWI5MzllYjQ0Z8wjlA==: --dhchap-ctrl-secret DHHC-1:03:YmVmZTNlYTJjNDU1Y2NmZWRlODM1YTllN2YxY2I4ZDA0YzExN2NlNDI1YWY2ZTc3MWNmYWRiOTlhZDBjNDgxYf8D4oQ=: 00:19:47.463 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:47.463 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:47.463 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:47.463 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.463 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.463 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.463 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:47.463 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:47.463 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:47.463 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:19:47.463 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:47.464 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:47.464 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:47.464 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:47.464 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:47.464 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:47.464 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.464 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.464 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.464 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:47.464 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:47.464 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:48.397 00:19:48.397 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:48.397 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:48.397 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:48.655 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.655 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:48.655 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.655 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.655 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.655 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:48.655 { 00:19:48.655 "cntlid": 43, 00:19:48.655 "qid": 0, 00:19:48.655 "state": "enabled", 00:19:48.655 "thread": "nvmf_tgt_poll_group_000", 00:19:48.655 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:48.655 "listen_address": { 00:19:48.655 "trtype": "TCP", 00:19:48.655 "adrfam": "IPv4", 00:19:48.655 "traddr": "10.0.0.2", 00:19:48.655 "trsvcid": "4420" 00:19:48.655 }, 00:19:48.655 "peer_address": { 00:19:48.655 "trtype": "TCP", 00:19:48.655 "adrfam": "IPv4", 00:19:48.655 "traddr": "10.0.0.1", 00:19:48.655 "trsvcid": "51910" 00:19:48.655 }, 00:19:48.655 "auth": { 00:19:48.655 "state": "completed", 00:19:48.655 "digest": "sha256", 00:19:48.655 "dhgroup": "ffdhe8192" 00:19:48.655 } 00:19:48.655 } 00:19:48.655 ]' 00:19:48.655 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:48.914 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:19:48.914 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:48.914 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:48.914 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:48.914 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:48.914 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:48.914 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.172 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTJkNDE3NGEzYThiMzE3OTQxYzJhMzJhODYwNDcyNDE6+UtJ: --dhchap-ctrl-secret DHHC-1:02:NTUwZTE2ZjFmMGMzNzJjZGU5OTdkMGZiNTM2OGQzN2ZjYTllNGRhZWRhMjA1YWJkR4NUhw==: 00:19:49.172 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YTJkNDE3NGEzYThiMzE3OTQxYzJhMzJhODYwNDcyNDE6+UtJ: --dhchap-ctrl-secret DHHC-1:02:NTUwZTE2ZjFmMGMzNzJjZGU5OTdkMGZiNTM2OGQzN2ZjYTllNGRhZWRhMjA1YWJkR4NUhw==: 00:19:50.108 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.108 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.108 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:50.108 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.108 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.108 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.108 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:50.108 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:50.108 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:50.366 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:19:50.366 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:50.366 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:50.366 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:50.366 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:50.366 11:31:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:50.367 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:50.367 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.367 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.367 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.367 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:50.367 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:50.367 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:51.302 00:19:51.303 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:51.303 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:51.303 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.561 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.561 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:51.561 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.561 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.561 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.561 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:51.561 { 00:19:51.561 "cntlid": 45, 00:19:51.561 "qid": 0, 00:19:51.561 "state": "enabled", 00:19:51.561 "thread": "nvmf_tgt_poll_group_000", 00:19:51.561 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:51.561 "listen_address": { 00:19:51.561 "trtype": "TCP", 00:19:51.561 "adrfam": "IPv4", 00:19:51.561 "traddr": "10.0.0.2", 00:19:51.561 "trsvcid": "4420" 00:19:51.561 }, 00:19:51.561 "peer_address": { 00:19:51.561 "trtype": "TCP", 00:19:51.561 "adrfam": "IPv4", 00:19:51.561 "traddr": "10.0.0.1", 00:19:51.561 "trsvcid": "51936" 00:19:51.561 }, 00:19:51.561 "auth": { 00:19:51.561 "state": "completed", 00:19:51.561 "digest": "sha256", 00:19:51.561 "dhgroup": "ffdhe8192" 00:19:51.561 } 00:19:51.561 } 00:19:51.561 ]' 00:19:51.561 
11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:51.561 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:51.561 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:51.819 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:51.819 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:51.819 11:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:51.819 11:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:51.819 11:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.077 11:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTA4MjZlYTZiNzYxZGM3NWYyNGVmYmFjM2M0MjY4YTFhYWM2NmIzNDQzM2NmMTdl50iUxg==: --dhchap-ctrl-secret DHHC-1:01:MWMyNjk0ZTJjMDdmOTg4MGYzMTViYjczOTRmYjdlNmRN/1BA: 00:19:52.077 11:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NTA4MjZlYTZiNzYxZGM3NWYyNGVmYmFjM2M0MjY4YTFhYWM2NmIzNDQzM2NmMTdl50iUxg==: --dhchap-ctrl-secret DHHC-1:01:MWMyNjk0ZTJjMDdmOTg4MGYzMTViYjczOTRmYjdlNmRN/1BA: 00:19:53.010 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:53.010 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:53.010 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:53.010 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.010 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.010 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.010 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:53.010 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:53.010 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:53.269 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:19:53.269 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:53.269 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:53.269 11:31:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:53.269 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:53.269 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:53.269 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:53.269 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.269 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.269 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.269 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:53.269 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:53.269 11:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:54.208 00:19:54.208 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:54.208 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:54.208 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.466 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.466 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:54.466 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.466 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.466 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.466 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:54.466 { 00:19:54.466 "cntlid": 47, 00:19:54.466 "qid": 0, 00:19:54.466 "state": "enabled", 00:19:54.466 "thread": "nvmf_tgt_poll_group_000", 00:19:54.466 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:54.466 "listen_address": { 00:19:54.466 "trtype": "TCP", 00:19:54.466 "adrfam": "IPv4", 00:19:54.466 "traddr": "10.0.0.2", 00:19:54.466 "trsvcid": "4420" 00:19:54.466 }, 00:19:54.466 "peer_address": { 00:19:54.466 "trtype": "TCP", 00:19:54.466 "adrfam": "IPv4", 00:19:54.466 "traddr": "10.0.0.1", 00:19:54.466 "trsvcid": "51974" 00:19:54.466 }, 00:19:54.466 "auth": { 00:19:54.466 "state": "completed", 00:19:54.466 
"digest": "sha256", 00:19:54.466 "dhgroup": "ffdhe8192" 00:19:54.466 } 00:19:54.466 } 00:19:54.466 ]' 00:19:54.466 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:54.466 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:54.466 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:54.724 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:54.724 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:54.724 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:54.724 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:54.724 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:54.983 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWQ1ZmE2N2JmNmZmOWYyOTJjYTA4OTgxNDJiZjFmNWU3NGFlNmQ4NjUzODQzYThkZDgyOGJjZjA5MTAzYzVmNiitUgU=: 00:19:54.983 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MWQ1ZmE2N2JmNmZmOWYyOTJjYTA4OTgxNDJiZjFmNWU3NGFlNmQ4NjUzODQzYThkZDgyOGJjZjA5MTAzYzVmNiitUgU=: 00:19:55.923 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:55.923 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:55.923 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:55.923 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.923 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.923 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.923 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:55.923 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:55.923 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:55.923 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:55.923 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:56.181 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:19:56.181 11:31:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:56.181 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:56.181 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:56.181 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:56.181 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:56.181 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:56.181 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.181 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.181 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.181 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:56.181 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:56.181 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:56.439 00:19:56.439 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:56.439 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:56.439 11:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.697 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.697 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.697 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.697 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.697 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.697 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:56.697 { 00:19:56.697 "cntlid": 49, 00:19:56.697 "qid": 0, 00:19:56.697 "state": "enabled", 00:19:56.697 "thread": "nvmf_tgt_poll_group_000", 00:19:56.697 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:56.697 "listen_address": { 00:19:56.697 "trtype": "TCP", 00:19:56.697 "adrfam": "IPv4", 
00:19:56.697 "traddr": "10.0.0.2", 00:19:56.697 "trsvcid": "4420" 00:19:56.697 }, 00:19:56.697 "peer_address": { 00:19:56.697 "trtype": "TCP", 00:19:56.697 "adrfam": "IPv4", 00:19:56.697 "traddr": "10.0.0.1", 00:19:56.697 "trsvcid": "47054" 00:19:56.697 }, 00:19:56.697 "auth": { 00:19:56.697 "state": "completed", 00:19:56.697 "digest": "sha384", 00:19:56.697 "dhgroup": "null" 00:19:56.697 } 00:19:56.697 } 00:19:56.697 ]' 00:19:56.697 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:56.697 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:56.697 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:56.955 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:56.955 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:56.955 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.955 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.955 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:57.213 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjdjYzZlM2U5MGY1YjM2Y2VjNTBiYTlkZmIxZWE5OTRiMDU1MDg1YWI5MzllYjQ0Z8wjlA==: --dhchap-ctrl-secret DHHC-1:03:YmVmZTNlYTJjNDU1Y2NmZWRlODM1YTllN2YxY2I4ZDA0YzExN2NlNDI1YWY2ZTc3MWNmYWRiOTlhZDBjNDgxYf8D4oQ=: 00:19:57.213 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MjdjYzZlM2U5MGY1YjM2Y2VjNTBiYTlkZmIxZWE5OTRiMDU1MDg1YWI5MzllYjQ0Z8wjlA==: --dhchap-ctrl-secret DHHC-1:03:YmVmZTNlYTJjNDU1Y2NmZWRlODM1YTllN2YxY2I4ZDA0YzExN2NlNDI1YWY2ZTc3MWNmYWRiOTlhZDBjNDgxYf8D4oQ=: 00:19:58.150 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:58.150 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:58.150 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:58.150 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.150 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.150 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.150 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:58.150 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:58.150 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:58.409 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:19:58.409 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:58.409 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:58.409 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:58.409 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:58.409 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:58.409 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.409 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.409 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.409 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.409 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.409 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.409 11:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.975 00:19:58.975 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:58.975 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:58.975 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:59.234 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.234 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:59.234 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.234 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.234 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.234 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:59.234 { 00:19:59.234 "cntlid": 51, 00:19:59.234 "qid": 0, 00:19:59.234 "state": "enabled", 
00:19:59.234 "thread": "nvmf_tgt_poll_group_000", 00:19:59.234 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:59.234 "listen_address": { 00:19:59.234 "trtype": "TCP", 00:19:59.234 "adrfam": "IPv4", 00:19:59.234 "traddr": "10.0.0.2", 00:19:59.234 "trsvcid": "4420" 00:19:59.234 }, 00:19:59.234 "peer_address": { 00:19:59.234 "trtype": "TCP", 00:19:59.234 "adrfam": "IPv4", 00:19:59.234 "traddr": "10.0.0.1", 00:19:59.234 "trsvcid": "47076" 00:19:59.234 }, 00:19:59.234 "auth": { 00:19:59.234 "state": "completed", 00:19:59.234 "digest": "sha384", 00:19:59.234 "dhgroup": "null" 00:19:59.234 } 00:19:59.234 } 00:19:59.234 ]' 00:19:59.234 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:59.234 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:59.234 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:59.234 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:59.234 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:59.234 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:59.234 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:59.234 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:59.492 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTJkNDE3NGEzYThiMzE3OTQxYzJhMzJhODYwNDcyNDE6+UtJ: --dhchap-ctrl-secret DHHC-1:02:NTUwZTE2ZjFmMGMzNzJjZGU5OTdkMGZiNTM2OGQzN2ZjYTllNGRhZWRhMjA1YWJkR4NUhw==: 00:19:59.492 11:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YTJkNDE3NGEzYThiMzE3OTQxYzJhMzJhODYwNDcyNDE6+UtJ: --dhchap-ctrl-secret DHHC-1:02:NTUwZTE2ZjFmMGMzNzJjZGU5OTdkMGZiNTM2OGQzN2ZjYTllNGRhZWRhMjA1YWJkR4NUhw==: 00:20:00.428 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:00.428 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:00.428 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:00.428 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.428 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.428 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.428 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:00.428 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:20:00.428 11:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:00.994 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:20:00.994 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:00.994 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:00.994 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:00.994 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:00.994 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:00.994 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:00.994 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.994 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.994 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.994 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:00.994 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:00.994 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:01.252 00:20:01.252 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:01.253 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:01.253 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:01.514 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.514 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:01.514 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.514 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.514 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.514 11:32:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:01.514 { 00:20:01.514 "cntlid": 53, 00:20:01.514 "qid": 0, 00:20:01.514 "state": "enabled", 00:20:01.514 "thread": "nvmf_tgt_poll_group_000", 00:20:01.514 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:01.514 "listen_address": { 00:20:01.514 "trtype": "TCP", 00:20:01.514 "adrfam": "IPv4", 00:20:01.514 "traddr": "10.0.0.2", 00:20:01.514 "trsvcid": "4420" 00:20:01.514 }, 00:20:01.514 "peer_address": { 00:20:01.514 "trtype": "TCP", 00:20:01.514 "adrfam": "IPv4", 00:20:01.514 "traddr": "10.0.0.1", 00:20:01.514 "trsvcid": "47104" 00:20:01.515 }, 00:20:01.515 "auth": { 00:20:01.515 "state": "completed", 00:20:01.515 "digest": "sha384", 00:20:01.515 "dhgroup": "null" 00:20:01.515 } 00:20:01.515 } 00:20:01.515 ]' 00:20:01.515 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:01.515 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:01.515 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:01.515 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:01.515 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:01.515 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:01.515 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:01.515 11:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:01.773 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTA4MjZlYTZiNzYxZGM3NWYyNGVmYmFjM2M0MjY4YTFhYWM2NmIzNDQzM2NmMTdl50iUxg==: --dhchap-ctrl-secret DHHC-1:01:MWMyNjk0ZTJjMDdmOTg4MGYzMTViYjczOTRmYjdlNmRN/1BA: 00:20:01.773 11:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NTA4MjZlYTZiNzYxZGM3NWYyNGVmYmFjM2M0MjY4YTFhYWM2NmIzNDQzM2NmMTdl50iUxg==: --dhchap-ctrl-secret DHHC-1:01:MWMyNjk0ZTJjMDdmOTg4MGYzMTViYjczOTRmYjdlNmRN/1BA: 00:20:03.146 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.146 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:03.146 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:03.146 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.146 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.147 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.147 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:20:03.147 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:03.147 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:03.147 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:20:03.147 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:03.147 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:03.147 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:03.147 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:03.147 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:03.147 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:03.147 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.147 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.147 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.147 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:03.147 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:03.147 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:03.406 00:20:03.665 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:03.665 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:03.665 11:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:03.923 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.924 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:03.924 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.924 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.924 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.924 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:03.924 { 00:20:03.924 "cntlid": 55, 00:20:03.924 "qid": 0, 00:20:03.924 "state": "enabled", 00:20:03.924 "thread": "nvmf_tgt_poll_group_000", 00:20:03.924 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:03.924 "listen_address": { 00:20:03.924 "trtype": "TCP", 00:20:03.924 "adrfam": "IPv4", 00:20:03.924 "traddr": "10.0.0.2", 00:20:03.924 "trsvcid": "4420" 00:20:03.924 }, 00:20:03.924 "peer_address": { 00:20:03.924 "trtype": "TCP", 00:20:03.924 "adrfam": "IPv4", 00:20:03.924 "traddr": "10.0.0.1", 00:20:03.924 "trsvcid": "47132" 00:20:03.924 }, 00:20:03.924 "auth": { 00:20:03.924 "state": "completed", 00:20:03.924 "digest": "sha384", 00:20:03.924 "dhgroup": "null" 00:20:03.924 } 00:20:03.924 } 00:20:03.924 ]' 00:20:03.924 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:03.924 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:03.924 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:03.924 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:03.924 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:03.924 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:03.924 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:03.924 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:04.183 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWQ1ZmE2N2JmNmZmOWYyOTJjYTA4OTgxNDJiZjFmNWU3NGFlNmQ4NjUzODQzYThkZDgyOGJjZjA5MTAzYzVmNiitUgU=: 00:20:04.183 11:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MWQ1ZmE2N2JmNmZmOWYyOTJjYTA4OTgxNDJiZjFmNWU3NGFlNmQ4NjUzODQzYThkZDgyOGJjZjA5MTAzYzVmNiitUgU=: 00:20:05.118 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:05.118 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:05.118 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:05.118 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.118 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.377 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.378 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:05.378 11:32:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:05.378 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:05.378 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:05.637 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:20:05.637 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:05.637 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:05.637 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:05.637 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:05.637 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:05.637 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:05.637 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.637 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.637 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.637 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:05.637 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:05.637 11:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:05.895 00:20:05.895 11:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:05.895 11:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:05.895 11:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:06.153 11:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.153 11:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:06.153 11:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:20:06.153 11:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.153 11:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.154 11:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:06.154 { 00:20:06.154 "cntlid": 57, 00:20:06.154 "qid": 0, 00:20:06.154 "state": "enabled", 00:20:06.154 "thread": "nvmf_tgt_poll_group_000", 00:20:06.154 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:06.154 "listen_address": { 00:20:06.154 "trtype": "TCP", 00:20:06.154 "adrfam": "IPv4", 00:20:06.154 "traddr": "10.0.0.2", 00:20:06.154 "trsvcid": "4420" 00:20:06.154 }, 00:20:06.154 "peer_address": { 00:20:06.154 "trtype": "TCP", 00:20:06.154 "adrfam": "IPv4", 00:20:06.154 "traddr": "10.0.0.1", 00:20:06.154 "trsvcid": "51644" 00:20:06.154 }, 00:20:06.154 "auth": { 00:20:06.154 "state": "completed", 00:20:06.154 "digest": "sha384", 00:20:06.154 "dhgroup": "ffdhe2048" 00:20:06.154 } 00:20:06.154 } 00:20:06.154 ]' 00:20:06.154 11:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:06.154 11:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:06.154 11:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:06.154 11:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:06.154 11:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:06.412 11:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:06.412 11:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:06.412 11:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:06.670 11:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjdjYzZlM2U5MGY1YjM2Y2VjNTBiYTlkZmIxZWE5OTRiMDU1MDg1YWI5MzllYjQ0Z8wjlA==: --dhchap-ctrl-secret DHHC-1:03:YmVmZTNlYTJjNDU1Y2NmZWRlODM1YTllN2YxY2I4ZDA0YzExN2NlNDI1YWY2ZTc3MWNmYWRiOTlhZDBjNDgxYf8D4oQ=: 00:20:06.670 11:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MjdjYzZlM2U5MGY1YjM2Y2VjNTBiYTlkZmIxZWE5OTRiMDU1MDg1YWI5MzllYjQ0Z8wjlA==: --dhchap-ctrl-secret DHHC-1:03:YmVmZTNlYTJjNDU1Y2NmZWRlODM1YTllN2YxY2I4ZDA0YzExN2NlNDI1YWY2ZTc3MWNmYWRiOTlhZDBjNDgxYf8D4oQ=: 00:20:07.615 11:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:07.615 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:07.615 11:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:07.615 11:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.615 11:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.615 11:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.615 11:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:07.615 11:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:07.615 11:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:07.873 11:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:20:07.873 11:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:07.873 11:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:07.873 11:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:07.873 11:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:07.873 11:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:07.873 11:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:07.873 11:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.873 11:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.873 11:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.873 11:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:07.873 11:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:07.873 11:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:08.441 00:20:08.441 11:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:08.441 11:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:08.441 11:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:08.441 11:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.442 11:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:08.442 11:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.442 11:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.442 11:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.442 11:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:08.442 { 00:20:08.442 "cntlid": 59, 00:20:08.442 "qid": 0, 00:20:08.442 "state": "enabled", 00:20:08.442 "thread": "nvmf_tgt_poll_group_000", 00:20:08.442 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:08.442 "listen_address": { 00:20:08.442 "trtype": "TCP", 00:20:08.442 "adrfam": "IPv4", 00:20:08.442 "traddr": "10.0.0.2", 00:20:08.442 "trsvcid": "4420" 00:20:08.442 }, 00:20:08.442 "peer_address": { 00:20:08.442 "trtype": "TCP", 00:20:08.442 "adrfam": "IPv4", 00:20:08.442 "traddr": "10.0.0.1", 00:20:08.442 "trsvcid": "51674" 00:20:08.442 }, 00:20:08.442 "auth": { 00:20:08.442 "state": "completed", 00:20:08.442 "digest": "sha384", 00:20:08.442 "dhgroup": "ffdhe2048" 00:20:08.442 } 00:20:08.442 } 00:20:08.442 ]' 00:20:08.442 11:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:08.701 11:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:08.701 11:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:08.701 11:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:08.701 11:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:08.701 11:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:08.701 11:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:08.701 11:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:08.961 11:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTJkNDE3NGEzYThiMzE3OTQxYzJhMzJhODYwNDcyNDE6+UtJ: --dhchap-ctrl-secret DHHC-1:02:NTUwZTE2ZjFmMGMzNzJjZGU5OTdkMGZiNTM2OGQzN2ZjYTllNGRhZWRhMjA1YWJkR4NUhw==: 00:20:08.961 11:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YTJkNDE3NGEzYThiMzE3OTQxYzJhMzJhODYwNDcyNDE6+UtJ: --dhchap-ctrl-secret DHHC-1:02:NTUwZTE2ZjFmMGMzNzJjZGU5OTdkMGZiNTM2OGQzN2ZjYTllNGRhZWRhMjA1YWJkR4NUhw==: 00:20:09.897 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:09.897 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:09.897 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:09.897 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.897 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.897 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.897 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:09.897 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:09.897 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:10.156 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:20:10.156 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:10.156 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:10.156 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:10.156 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:10.156 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:10.156 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:10.156 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.156 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.156 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.156 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:10.156 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:10.156 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:10.722 00:20:10.722 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:10.722 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:10.722 11:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:10.981 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.981 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:10.981 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.981 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.981 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.981 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:10.981 { 00:20:10.981 "cntlid": 61, 00:20:10.981 "qid": 0, 00:20:10.981 "state": "enabled", 00:20:10.981 "thread": "nvmf_tgt_poll_group_000", 00:20:10.981 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:10.981 "listen_address": { 00:20:10.981 "trtype": "TCP", 00:20:10.981 "adrfam": "IPv4", 00:20:10.981 "traddr": "10.0.0.2", 00:20:10.981 "trsvcid": "4420" 00:20:10.981 }, 00:20:10.981 "peer_address": { 00:20:10.981 "trtype": "TCP", 00:20:10.981 "adrfam": "IPv4", 00:20:10.981 "traddr": "10.0.0.1", 00:20:10.981 "trsvcid": "51714" 00:20:10.981 }, 00:20:10.981 "auth": { 00:20:10.981 "state": "completed", 00:20:10.981 "digest": "sha384", 00:20:10.981 "dhgroup": "ffdhe2048" 00:20:10.981 } 00:20:10.981 } 00:20:10.981 ]' 00:20:10.981 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:10.981 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:10.981 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:10.981 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:10.981 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:10.981 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:10.981 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:10.981 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:11.241 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTA4MjZlYTZiNzYxZGM3NWYyNGVmYmFjM2M0MjY4YTFhYWM2NmIzNDQzM2NmMTdl50iUxg==: --dhchap-ctrl-secret DHHC-1:01:MWMyNjk0ZTJjMDdmOTg4MGYzMTViYjczOTRmYjdlNmRN/1BA: 00:20:11.241 11:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NTA4MjZlYTZiNzYxZGM3NWYyNGVmYmFjM2M0MjY4YTFhYWM2NmIzNDQzM2NmMTdl50iUxg==: --dhchap-ctrl-secret DHHC-1:01:MWMyNjk0ZTJjMDdmOTg4MGYzMTViYjczOTRmYjdlNmRN/1BA: 00:20:12.651 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:12.651 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:12.651 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:12.651 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.651 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.651 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.651 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:12.651 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:12.651 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:12.651 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:20:12.651 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:12.651 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:12.651 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:12.651 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:12.651 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:12.651 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:12.651 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.651 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.651 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.651 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:12.651 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:12.652 11:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:12.939 00:20:13.199 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:13.199 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:13.199 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:13.459 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.459 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:13.459 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.459 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.459 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.459 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:13.459 { 00:20:13.459 "cntlid": 63, 00:20:13.459 "qid": 0, 00:20:13.459 "state": "enabled", 00:20:13.459 "thread": "nvmf_tgt_poll_group_000", 00:20:13.459 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:13.459 "listen_address": { 00:20:13.459 "trtype": "TCP", 00:20:13.459 "adrfam": "IPv4", 00:20:13.459 "traddr": "10.0.0.2", 00:20:13.459 "trsvcid": "4420" 00:20:13.459 }, 00:20:13.459 "peer_address": { 00:20:13.459 "trtype": "TCP", 00:20:13.459 "adrfam": "IPv4", 00:20:13.459 "traddr": "10.0.0.1", 00:20:13.459 "trsvcid": "51742" 00:20:13.459 }, 00:20:13.459 "auth": { 00:20:13.459 "state": "completed", 00:20:13.459 "digest": "sha384", 00:20:13.459 "dhgroup": "ffdhe2048" 00:20:13.459 } 00:20:13.459 } 00:20:13.459 ]' 00:20:13.459 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:13.459 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:13.459 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:13.459 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:13.459 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:13.459 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:13.459 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:13.459 11:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.718 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWQ1ZmE2N2JmNmZmOWYyOTJjYTA4OTgxNDJiZjFmNWU3NGFlNmQ4NjUzODQzYThkZDgyOGJjZjA5MTAzYzVmNiitUgU=: 00:20:13.718 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MWQ1ZmE2N2JmNmZmOWYyOTJjYTA4OTgxNDJiZjFmNWU3NGFlNmQ4NjUzODQzYThkZDgyOGJjZjA5MTAzYzVmNiitUgU=: 00:20:14.652 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:20:14.652 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:14.652 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:14.652 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.652 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.652 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.652 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:14.652 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:14.652 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:14.652 11:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:14.910 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:20:14.910 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:14.910 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:14.910 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:14.910 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:14.910 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:14.910 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:14.910 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.910 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.910 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.910 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:14.910 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:14.910 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:15.475 
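The qpair dumps interleaved with these entries are the actual pass/fail check: after each attach, the test asserts that the host-side controller exists and that the target reports the negotiated digest, dhgroup and auth state on the new queue pair. A minimal sketch of that verification step, using the same jq filters that appear in this log:

  # Sketch only: RPC path, socket and NQN are taken from the surrounding entries.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
  qpairs=$($rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  echo "$qpairs" | jq -r '.[0].auth.digest'    # expect sha384
  echo "$qpairs" | jq -r '.[0].auth.dhgroup'   # expect null, ffdhe2048 or ffdhe3072
  echo "$qpairs" | jq -r '.[0].auth.state'     # expect completed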
00:20:15.475 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:15.475 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:15.475 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:15.733 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.733 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:15.733 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.733 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.733 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.733 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:15.733 { 00:20:15.733 "cntlid": 65, 00:20:15.733 "qid": 0, 00:20:15.733 "state": "enabled", 00:20:15.733 "thread": "nvmf_tgt_poll_group_000", 00:20:15.733 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:15.733 "listen_address": { 00:20:15.733 "trtype": "TCP", 00:20:15.733 "adrfam": "IPv4", 00:20:15.733 "traddr": "10.0.0.2", 00:20:15.733 "trsvcid": "4420" 00:20:15.733 }, 00:20:15.733 "peer_address": { 00:20:15.733 "trtype": "TCP", 00:20:15.733 "adrfam": "IPv4", 00:20:15.733 "traddr": "10.0.0.1", 00:20:15.733 "trsvcid": "42630" 00:20:15.733 }, 00:20:15.733 "auth": { 00:20:15.733 "state": "completed", 00:20:15.733 "digest": "sha384", 00:20:15.733 "dhgroup": "ffdhe3072" 00:20:15.733 } 00:20:15.733 } 00:20:15.733 ]' 00:20:15.733 11:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:15.733 11:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:15.733 11:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:15.733 11:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:15.733 11:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:15.733 11:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:15.733 11:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:15.733 11:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.991 11:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjdjYzZlM2U5MGY1YjM2Y2VjNTBiYTlkZmIxZWE5OTRiMDU1MDg1YWI5MzllYjQ0Z8wjlA==: --dhchap-ctrl-secret DHHC-1:03:YmVmZTNlYTJjNDU1Y2NmZWRlODM1YTllN2YxY2I4ZDA0YzExN2NlNDI1YWY2ZTc3MWNmYWRiOTlhZDBjNDgxYf8D4oQ=: 00:20:15.991 11:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MjdjYzZlM2U5MGY1YjM2Y2VjNTBiYTlkZmIxZWE5OTRiMDU1MDg1YWI5MzllYjQ0Z8wjlA==: --dhchap-ctrl-secret DHHC-1:03:YmVmZTNlYTJjNDU1Y2NmZWRlODM1YTllN2YxY2I4ZDA0YzExN2NlNDI1YWY2ZTc3MWNmYWRiOTlhZDBjNDgxYf8D4oQ=: 00:20:17.365 11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:17.365 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:17.365 11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:17.365 11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.365 11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.365 11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.365 11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:17.365 11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:17.365 11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:17.365 11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:20:17.365 11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:17.365 11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:17.365 11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:17.365 11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:17.365 11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:17.365 11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:17.365 11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.365 11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.365 11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.365 11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:17.365 11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:17.365 11:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:17.931 00:20:17.931 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:17.931 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:17.931 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:17.931 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.189 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:18.189 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.189 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.189 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.189 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:18.189 { 00:20:18.189 "cntlid": 67, 00:20:18.189 "qid": 0, 00:20:18.189 "state": "enabled", 00:20:18.189 "thread": "nvmf_tgt_poll_group_000", 00:20:18.189 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:18.189 "listen_address": { 00:20:18.189 "trtype": "TCP", 00:20:18.189 "adrfam": "IPv4", 00:20:18.189 "traddr": "10.0.0.2", 00:20:18.189 "trsvcid": "4420" 00:20:18.189 }, 00:20:18.189 "peer_address": { 00:20:18.189 "trtype": "TCP", 00:20:18.189 "adrfam": "IPv4", 00:20:18.189 "traddr": "10.0.0.1", 00:20:18.189 "trsvcid": "42648" 00:20:18.189 }, 00:20:18.189 "auth": { 00:20:18.189 "state": "completed", 00:20:18.189 "digest": "sha384", 00:20:18.189 "dhgroup": "ffdhe3072" 00:20:18.189 } 00:20:18.189 } 00:20:18.189 ]' 00:20:18.189 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:18.189 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:18.189 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:18.189 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:18.189 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:18.189 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:18.189 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:18.189 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.447 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTJkNDE3NGEzYThiMzE3OTQxYzJhMzJhODYwNDcyNDE6+UtJ: --dhchap-ctrl-secret 
DHHC-1:02:NTUwZTE2ZjFmMGMzNzJjZGU5OTdkMGZiNTM2OGQzN2ZjYTllNGRhZWRhMjA1YWJkR4NUhw==: 00:20:18.447 11:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YTJkNDE3NGEzYThiMzE3OTQxYzJhMzJhODYwNDcyNDE6+UtJ: --dhchap-ctrl-secret DHHC-1:02:NTUwZTE2ZjFmMGMzNzJjZGU5OTdkMGZiNTM2OGQzN2ZjYTllNGRhZWRhMjA1YWJkR4NUhw==: 00:20:19.380 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:19.380 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:19.380 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:19.380 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.380 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.380 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.380 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:19.380 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:19.380 11:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:19.638 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:20:19.638 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:19.638 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:19.638 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:19.638 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:19.638 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:19.638 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.638 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.638 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.895 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.895 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.895 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.895 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:20.153 00:20:20.153 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:20.153 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:20.153 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:20.411 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.411 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:20.411 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.411 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.411 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.411 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:20.411 { 00:20:20.411 "cntlid": 69, 00:20:20.411 "qid": 0, 00:20:20.411 "state": "enabled", 00:20:20.411 "thread": "nvmf_tgt_poll_group_000", 00:20:20.411 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:20.411 "listen_address": { 00:20:20.411 "trtype": "TCP", 00:20:20.411 "adrfam": "IPv4", 00:20:20.411 "traddr": "10.0.0.2", 00:20:20.411 "trsvcid": "4420" 00:20:20.411 }, 00:20:20.411 "peer_address": { 00:20:20.411 "trtype": "TCP", 00:20:20.411 "adrfam": "IPv4", 00:20:20.411 "traddr": "10.0.0.1", 00:20:20.411 "trsvcid": "42660" 00:20:20.411 }, 00:20:20.411 "auth": { 00:20:20.411 "state": "completed", 00:20:20.411 "digest": "sha384", 00:20:20.411 "dhgroup": "ffdhe3072" 00:20:20.411 } 00:20:20.411 } 00:20:20.411 ]' 00:20:20.411 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:20.411 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:20.411 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:20.411 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:20.411 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:20.411 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:20.411 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:20.411 11:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:20:20.977 11:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTA4MjZlYTZiNzYxZGM3NWYyNGVmYmFjM2M0MjY4YTFhYWM2NmIzNDQzM2NmMTdl50iUxg==: --dhchap-ctrl-secret DHHC-1:01:MWMyNjk0ZTJjMDdmOTg4MGYzMTViYjczOTRmYjdlNmRN/1BA: 00:20:20.977 11:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NTA4MjZlYTZiNzYxZGM3NWYyNGVmYmFjM2M0MjY4YTFhYWM2NmIzNDQzM2NmMTdl50iUxg==: --dhchap-ctrl-secret DHHC-1:01:MWMyNjk0ZTJjMDdmOTg4MGYzMTViYjczOTRmYjdlNmRN/1BA: 00:20:21.910 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:21.911 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:21.911 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:21.911 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.911 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.911 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.911 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:21.911 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:21.911 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:21.911 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:20:21.911 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:21.911 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:21.911 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:21.911 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:21.911 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:21.911 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:21.911 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.911 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.169 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.169 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
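[editor's note] The trace above is the host-side half of each DH-HMAC-CHAP iteration: bdev_nvme_set_options pins the digest/dhgroup pair under test, nvmf_subsystem_add_host registers the host key on the target, and bdev_connect wraps bdev_nvme_attach_controller with the matching --dhchap-key (plus --dhchap-ctrlr-key when a controller key exists). A minimal standalone sketch of that sequence follows; it assumes the DH-HMAC-CHAP keys (key0..key3, ckey0..ckey2) were already registered with the host RPC server earlier in auth.sh, and reuses the paths and NQNs visible in the trace — it is an illustration, not the script itself.

# Sketch: host-side attach for the sha384/ffdhe3072 + key3 case shown above.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/host.sock
hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
subnqn=nqn.2024-03.io.spdk:cnode0

# Restrict the initiator to the digest/dhgroup pair under test.
"$rpc" -s "$sock" bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

# Attach with key3 only; no --dhchap-ctrlr-key is passed because ckeys[3] is
# empty in this run, so the controller is not authenticated back to the host.
"$rpc" -s "$sock" bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key3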
00:20:22.169 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:22.169 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:22.427 00:20:22.427 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:22.427 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:22.427 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:22.684 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.684 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:22.684 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.684 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.684 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.684 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:22.684 { 00:20:22.684 "cntlid": 71, 00:20:22.684 "qid": 0, 00:20:22.684 "state": "enabled", 00:20:22.684 "thread": "nvmf_tgt_poll_group_000", 00:20:22.684 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:22.684 "listen_address": { 00:20:22.684 "trtype": "TCP", 00:20:22.684 "adrfam": "IPv4", 00:20:22.684 "traddr": "10.0.0.2", 00:20:22.684 "trsvcid": "4420" 00:20:22.684 }, 00:20:22.684 "peer_address": { 00:20:22.684 "trtype": "TCP", 00:20:22.684 "adrfam": "IPv4", 00:20:22.684 "traddr": "10.0.0.1", 00:20:22.684 "trsvcid": "42686" 00:20:22.684 }, 00:20:22.684 "auth": { 00:20:22.684 "state": "completed", 00:20:22.684 "digest": "sha384", 00:20:22.684 "dhgroup": "ffdhe3072" 00:20:22.684 } 00:20:22.684 } 00:20:22.684 ]' 00:20:22.684 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:22.684 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:22.684 11:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:22.684 11:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:22.684 11:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:22.684 11:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:22.684 11:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:22.684 11:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.248 11:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWQ1ZmE2N2JmNmZmOWYyOTJjYTA4OTgxNDJiZjFmNWU3NGFlNmQ4NjUzODQzYThkZDgyOGJjZjA5MTAzYzVmNiitUgU=: 00:20:23.248 11:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MWQ1ZmE2N2JmNmZmOWYyOTJjYTA4OTgxNDJiZjFmNWU3NGFlNmQ4NjUzODQzYThkZDgyOGJjZjA5MTAzYzVmNiitUgU=: 00:20:24.180 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.180 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.180 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:24.180 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.180 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.180 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.180 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:24.180 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:24.180 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:24.180 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:24.438 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:20:24.438 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:24.438 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:24.438 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:24.438 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:24.438 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:24.438 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:24.438 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.438 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.438 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
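[editor's note] Every iteration in this section ends with the same verification (auth.sh@73-77 above and below): the controller name is read back with bdev_nvme_get_controllers, then the target's qpair list is fetched and its auth block checked field by field. A standalone sketch of that check is below; the target-side RPC socket path (/var/tmp/spdk.sock, the SPDK default) is an assumption, since rpc_cmd hides it in the trace.

# Sketch: confirm the qpair authenticated with the expected parameters.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
qpairs=$("$rpc" -s /var/tmp/spdk.sock nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

jq -r '.[0].auth.digest'  <<< "$qpairs"   # expect: sha384
jq -r '.[0].auth.dhgroup' <<< "$qpairs"   # expect: the dhgroup of this iteration (ffdhe4096 here)
jq -r '.[0].auth.state'   <<< "$qpairs"   # expect: completed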
00:20:24.438 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:24.438 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:24.438 11:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:24.696 00:20:24.696 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:24.696 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:24.696 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:24.954 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.954 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:24.954 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.954 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.954 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.954 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:24.954 { 00:20:24.954 "cntlid": 73, 00:20:24.954 "qid": 0, 00:20:24.954 "state": "enabled", 00:20:24.954 "thread": "nvmf_tgt_poll_group_000", 00:20:24.954 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:24.954 "listen_address": { 00:20:24.954 "trtype": "TCP", 00:20:24.954 "adrfam": "IPv4", 00:20:24.954 "traddr": "10.0.0.2", 00:20:24.954 "trsvcid": "4420" 00:20:24.954 }, 00:20:24.954 "peer_address": { 00:20:24.954 "trtype": "TCP", 00:20:24.954 "adrfam": "IPv4", 00:20:24.954 "traddr": "10.0.0.1", 00:20:24.954 "trsvcid": "42700" 00:20:24.954 }, 00:20:24.954 "auth": { 00:20:24.954 "state": "completed", 00:20:24.954 "digest": "sha384", 00:20:24.954 "dhgroup": "ffdhe4096" 00:20:24.954 } 00:20:24.954 } 00:20:24.954 ]' 00:20:24.954 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:24.954 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:24.954 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:25.211 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:25.211 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:25.211 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:25.211 
11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:25.211 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:25.469 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjdjYzZlM2U5MGY1YjM2Y2VjNTBiYTlkZmIxZWE5OTRiMDU1MDg1YWI5MzllYjQ0Z8wjlA==: --dhchap-ctrl-secret DHHC-1:03:YmVmZTNlYTJjNDU1Y2NmZWRlODM1YTllN2YxY2I4ZDA0YzExN2NlNDI1YWY2ZTc3MWNmYWRiOTlhZDBjNDgxYf8D4oQ=: 00:20:25.469 11:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MjdjYzZlM2U5MGY1YjM2Y2VjNTBiYTlkZmIxZWE5OTRiMDU1MDg1YWI5MzllYjQ0Z8wjlA==: --dhchap-ctrl-secret DHHC-1:03:YmVmZTNlYTJjNDU1Y2NmZWRlODM1YTllN2YxY2I4ZDA0YzExN2NlNDI1YWY2ZTc3MWNmYWRiOTlhZDBjNDgxYf8D4oQ=: 00:20:26.403 11:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:26.403 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:26.403 11:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:26.403 11:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.403 11:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.403 11:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.403 11:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:26.403 11:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:26.403 11:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:26.661 11:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:20:26.661 11:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:26.661 11:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:26.661 11:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:26.661 11:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:26.661 11:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:26.661 11:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:26.661 11:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.661 11:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.661 11:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.661 11:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:26.661 11:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:26.661 11:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:27.227 00:20:27.227 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:27.227 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:27.227 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.485 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.485 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:27.485 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.485 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.485 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.485 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:27.485 { 00:20:27.485 "cntlid": 75, 00:20:27.485 "qid": 0, 00:20:27.485 "state": "enabled", 00:20:27.485 "thread": "nvmf_tgt_poll_group_000", 00:20:27.485 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:27.485 "listen_address": { 00:20:27.485 "trtype": "TCP", 00:20:27.485 "adrfam": "IPv4", 00:20:27.485 "traddr": "10.0.0.2", 00:20:27.485 "trsvcid": "4420" 00:20:27.485 }, 00:20:27.485 "peer_address": { 00:20:27.485 "trtype": "TCP", 00:20:27.485 "adrfam": "IPv4", 00:20:27.485 "traddr": "10.0.0.1", 00:20:27.485 "trsvcid": "55094" 00:20:27.485 }, 00:20:27.485 "auth": { 00:20:27.485 "state": "completed", 00:20:27.485 "digest": "sha384", 00:20:27.485 "dhgroup": "ffdhe4096" 00:20:27.485 } 00:20:27.485 } 00:20:27.485 ]' 00:20:27.485 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:27.485 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:27.485 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:27.485 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:20:27.485 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:27.485 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:27.485 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:27.485 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:27.743 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTJkNDE3NGEzYThiMzE3OTQxYzJhMzJhODYwNDcyNDE6+UtJ: --dhchap-ctrl-secret DHHC-1:02:NTUwZTE2ZjFmMGMzNzJjZGU5OTdkMGZiNTM2OGQzN2ZjYTllNGRhZWRhMjA1YWJkR4NUhw==: 00:20:27.743 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YTJkNDE3NGEzYThiMzE3OTQxYzJhMzJhODYwNDcyNDE6+UtJ: --dhchap-ctrl-secret DHHC-1:02:NTUwZTE2ZjFmMGMzNzJjZGU5OTdkMGZiNTM2OGQzN2ZjYTllNGRhZWRhMjA1YWJkR4NUhw==: 00:20:28.677 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:28.677 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.677 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:28.677 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.677 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.677 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.677 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:28.677 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:28.677 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:29.243 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:20:29.243 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:29.243 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:29.243 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:29.243 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:29.243 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:29.243 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:29.243 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.243 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.243 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.243 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:29.243 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:29.243 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:29.500 00:20:29.500 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:29.500 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:29.500 11:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:29.758 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.758 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.758 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.758 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.758 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.758 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:29.758 { 00:20:29.758 "cntlid": 77, 00:20:29.758 "qid": 0, 00:20:29.758 "state": "enabled", 00:20:29.758 "thread": "nvmf_tgt_poll_group_000", 00:20:29.758 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:29.758 "listen_address": { 00:20:29.758 "trtype": "TCP", 00:20:29.758 "adrfam": "IPv4", 00:20:29.758 "traddr": "10.0.0.2", 00:20:29.758 "trsvcid": "4420" 00:20:29.758 }, 00:20:29.758 "peer_address": { 00:20:29.758 "trtype": "TCP", 00:20:29.758 "adrfam": "IPv4", 00:20:29.758 "traddr": "10.0.0.1", 00:20:29.758 "trsvcid": "55120" 00:20:29.758 }, 00:20:29.758 "auth": { 00:20:29.758 "state": "completed", 00:20:29.758 "digest": "sha384", 00:20:29.758 "dhgroup": "ffdhe4096" 00:20:29.758 } 00:20:29.758 } 00:20:29.758 ]' 00:20:29.758 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:29.758 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:29.758 11:32:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:30.016 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:30.016 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:30.016 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.016 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.016 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.274 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTA4MjZlYTZiNzYxZGM3NWYyNGVmYmFjM2M0MjY4YTFhYWM2NmIzNDQzM2NmMTdl50iUxg==: --dhchap-ctrl-secret DHHC-1:01:MWMyNjk0ZTJjMDdmOTg4MGYzMTViYjczOTRmYjdlNmRN/1BA: 00:20:30.274 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NTA4MjZlYTZiNzYxZGM3NWYyNGVmYmFjM2M0MjY4YTFhYWM2NmIzNDQzM2NmMTdl50iUxg==: --dhchap-ctrl-secret DHHC-1:01:MWMyNjk0ZTJjMDdmOTg4MGYzMTViYjczOTRmYjdlNmRN/1BA: 00:20:31.206 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.206 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.206 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:31.206 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.206 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.206 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.206 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:31.206 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:31.206 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:31.464 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:20:31.464 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:31.464 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:31.464 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:31.464 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:31.464 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:31.464 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:31.464 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.464 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.464 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.464 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:31.464 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:31.464 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:32.029 00:20:32.029 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:32.029 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:32.029 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.287 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.287 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.287 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.287 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.287 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.287 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:32.287 { 00:20:32.287 "cntlid": 79, 00:20:32.287 "qid": 0, 00:20:32.287 "state": "enabled", 00:20:32.287 "thread": "nvmf_tgt_poll_group_000", 00:20:32.287 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:32.287 "listen_address": { 00:20:32.287 "trtype": "TCP", 00:20:32.287 "adrfam": "IPv4", 00:20:32.287 "traddr": "10.0.0.2", 00:20:32.287 "trsvcid": "4420" 00:20:32.287 }, 00:20:32.287 "peer_address": { 00:20:32.287 "trtype": "TCP", 00:20:32.287 "adrfam": "IPv4", 00:20:32.287 "traddr": "10.0.0.1", 00:20:32.287 "trsvcid": "55148" 00:20:32.287 }, 00:20:32.287 "auth": { 00:20:32.287 "state": "completed", 00:20:32.287 "digest": "sha384", 00:20:32.287 "dhgroup": "ffdhe4096" 00:20:32.287 } 00:20:32.287 } 00:20:32.287 ]' 00:20:32.287 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:32.287 11:32:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:32.287 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:32.287 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:32.287 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:32.287 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.287 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.287 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:32.544 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWQ1ZmE2N2JmNmZmOWYyOTJjYTA4OTgxNDJiZjFmNWU3NGFlNmQ4NjUzODQzYThkZDgyOGJjZjA5MTAzYzVmNiitUgU=: 00:20:32.544 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MWQ1ZmE2N2JmNmZmOWYyOTJjYTA4OTgxNDJiZjFmNWU3NGFlNmQ4NjUzODQzYThkZDgyOGJjZjA5MTAzYzVmNiitUgU=: 00:20:33.478 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.478 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.478 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:33.478 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.478 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.478 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.478 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:33.478 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:33.478 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:33.478 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:33.736 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:20:33.736 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:33.736 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:33.736 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:33.736 11:32:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:33.736 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:33.736 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:33.736 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.736 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.736 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.736 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:33.736 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:33.736 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:34.302 00:20:34.302 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:34.302 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:34.302 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.560 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.560 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.560 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.560 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.560 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.560 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:34.560 { 00:20:34.560 "cntlid": 81, 00:20:34.560 "qid": 0, 00:20:34.560 "state": "enabled", 00:20:34.560 "thread": "nvmf_tgt_poll_group_000", 00:20:34.560 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:34.560 "listen_address": { 00:20:34.560 "trtype": "TCP", 00:20:34.560 "adrfam": "IPv4", 00:20:34.560 "traddr": "10.0.0.2", 00:20:34.560 "trsvcid": "4420" 00:20:34.560 }, 00:20:34.560 "peer_address": { 00:20:34.560 "trtype": "TCP", 00:20:34.560 "adrfam": "IPv4", 00:20:34.560 "traddr": "10.0.0.1", 00:20:34.560 "trsvcid": "55176" 00:20:34.560 }, 00:20:34.560 "auth": { 00:20:34.560 "state": "completed", 00:20:34.560 "digest": 
"sha384", 00:20:34.560 "dhgroup": "ffdhe6144" 00:20:34.560 } 00:20:34.560 } 00:20:34.560 ]' 00:20:34.560 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:34.818 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:34.818 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:34.818 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:34.818 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:34.818 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.818 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.818 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:35.108 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjdjYzZlM2U5MGY1YjM2Y2VjNTBiYTlkZmIxZWE5OTRiMDU1MDg1YWI5MzllYjQ0Z8wjlA==: --dhchap-ctrl-secret DHHC-1:03:YmVmZTNlYTJjNDU1Y2NmZWRlODM1YTllN2YxY2I4ZDA0YzExN2NlNDI1YWY2ZTc3MWNmYWRiOTlhZDBjNDgxYf8D4oQ=: 00:20:35.108 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MjdjYzZlM2U5MGY1YjM2Y2VjNTBiYTlkZmIxZWE5OTRiMDU1MDg1YWI5MzllYjQ0Z8wjlA==: --dhchap-ctrl-secret DHHC-1:03:YmVmZTNlYTJjNDU1Y2NmZWRlODM1YTllN2YxY2I4ZDA0YzExN2NlNDI1YWY2ZTc3MWNmYWRiOTlhZDBjNDgxYf8D4oQ=: 00:20:36.068 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:36.068 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:36.068 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:36.068 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.068 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.068 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.068 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:36.068 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:36.068 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:36.325 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:20:36.325 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:36.325 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:36.325 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:36.325 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:36.325 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:36.325 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:36.325 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.325 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.325 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.325 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:36.325 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:36.325 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:36.890 00:20:36.890 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:36.890 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:36.890 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.456 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.456 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.456 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.456 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.456 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.456 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:37.456 { 00:20:37.456 "cntlid": 83, 00:20:37.456 "qid": 0, 00:20:37.456 "state": "enabled", 00:20:37.456 "thread": "nvmf_tgt_poll_group_000", 00:20:37.456 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:37.456 "listen_address": { 00:20:37.456 "trtype": "TCP", 00:20:37.456 "adrfam": "IPv4", 00:20:37.456 "traddr": "10.0.0.2", 00:20:37.456 
"trsvcid": "4420" 00:20:37.456 }, 00:20:37.456 "peer_address": { 00:20:37.456 "trtype": "TCP", 00:20:37.456 "adrfam": "IPv4", 00:20:37.456 "traddr": "10.0.0.1", 00:20:37.456 "trsvcid": "47068" 00:20:37.456 }, 00:20:37.456 "auth": { 00:20:37.456 "state": "completed", 00:20:37.456 "digest": "sha384", 00:20:37.456 "dhgroup": "ffdhe6144" 00:20:37.456 } 00:20:37.456 } 00:20:37.456 ]' 00:20:37.456 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:37.456 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:37.456 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:37.456 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:37.456 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:37.456 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.456 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.456 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.715 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTJkNDE3NGEzYThiMzE3OTQxYzJhMzJhODYwNDcyNDE6+UtJ: --dhchap-ctrl-secret DHHC-1:02:NTUwZTE2ZjFmMGMzNzJjZGU5OTdkMGZiNTM2OGQzN2ZjYTllNGRhZWRhMjA1YWJkR4NUhw==: 00:20:37.715 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YTJkNDE3NGEzYThiMzE3OTQxYzJhMzJhODYwNDcyNDE6+UtJ: --dhchap-ctrl-secret DHHC-1:02:NTUwZTE2ZjFmMGMzNzJjZGU5OTdkMGZiNTM2OGQzN2ZjYTllNGRhZWRhMjA1YWJkR4NUhw==: 00:20:38.647 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.647 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.647 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:38.647 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.647 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.647 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.647 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:38.647 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:38.647 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:39.213 
11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:20:39.213 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:39.213 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:39.213 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:39.213 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:39.213 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:39.213 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:39.213 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.213 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.213 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.213 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:39.213 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:39.213 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:39.779 00:20:39.779 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:39.779 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:39.779 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:40.037 11:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.037 11:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:40.037 11:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.037 11:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.037 11:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.037 11:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:40.037 { 00:20:40.037 "cntlid": 85, 00:20:40.037 "qid": 0, 00:20:40.037 "state": "enabled", 00:20:40.037 "thread": "nvmf_tgt_poll_group_000", 00:20:40.037 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:40.037 "listen_address": { 00:20:40.037 "trtype": "TCP", 00:20:40.037 "adrfam": "IPv4", 00:20:40.037 "traddr": "10.0.0.2", 00:20:40.037 "trsvcid": "4420" 00:20:40.037 }, 00:20:40.037 "peer_address": { 00:20:40.037 "trtype": "TCP", 00:20:40.037 "adrfam": "IPv4", 00:20:40.037 "traddr": "10.0.0.1", 00:20:40.037 "trsvcid": "47096" 00:20:40.037 }, 00:20:40.037 "auth": { 00:20:40.037 "state": "completed", 00:20:40.037 "digest": "sha384", 00:20:40.037 "dhgroup": "ffdhe6144" 00:20:40.037 } 00:20:40.037 } 00:20:40.037 ]' 00:20:40.037 11:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:40.037 11:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:40.037 11:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:40.037 11:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:40.037 11:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:40.037 11:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:40.037 11:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:40.037 11:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:40.603 11:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTA4MjZlYTZiNzYxZGM3NWYyNGVmYmFjM2M0MjY4YTFhYWM2NmIzNDQzM2NmMTdl50iUxg==: --dhchap-ctrl-secret DHHC-1:01:MWMyNjk0ZTJjMDdmOTg4MGYzMTViYjczOTRmYjdlNmRN/1BA: 00:20:40.603 11:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NTA4MjZlYTZiNzYxZGM3NWYyNGVmYmFjM2M0MjY4YTFhYWM2NmIzNDQzM2NmMTdl50iUxg==: --dhchap-ctrl-secret DHHC-1:01:MWMyNjk0ZTJjMDdmOTg4MGYzMTViYjczOTRmYjdlNmRN/1BA: 00:20:41.545 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:41.545 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:41.545 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:41.545 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.545 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.545 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.545 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:41.545 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:41.545 11:32:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:41.545 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:20:41.545 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:41.545 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:41.545 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:41.545 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:41.545 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:41.545 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:41.545 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.545 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.545 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.545 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:41.545 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:41.545 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:42.112 00:20:42.370 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:42.370 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:42.370 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:42.626 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.626 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:42.626 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.626 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.626 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.626 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:42.626 { 00:20:42.626 "cntlid": 87, 
00:20:42.626 "qid": 0, 00:20:42.626 "state": "enabled", 00:20:42.626 "thread": "nvmf_tgt_poll_group_000", 00:20:42.626 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:42.626 "listen_address": { 00:20:42.626 "trtype": "TCP", 00:20:42.626 "adrfam": "IPv4", 00:20:42.626 "traddr": "10.0.0.2", 00:20:42.626 "trsvcid": "4420" 00:20:42.626 }, 00:20:42.626 "peer_address": { 00:20:42.626 "trtype": "TCP", 00:20:42.626 "adrfam": "IPv4", 00:20:42.626 "traddr": "10.0.0.1", 00:20:42.626 "trsvcid": "47122" 00:20:42.626 }, 00:20:42.626 "auth": { 00:20:42.626 "state": "completed", 00:20:42.626 "digest": "sha384", 00:20:42.626 "dhgroup": "ffdhe6144" 00:20:42.626 } 00:20:42.626 } 00:20:42.626 ]' 00:20:42.626 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:42.626 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:42.627 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:42.627 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:42.627 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:42.627 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:42.627 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:42.627 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:42.884 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWQ1ZmE2N2JmNmZmOWYyOTJjYTA4OTgxNDJiZjFmNWU3NGFlNmQ4NjUzODQzYThkZDgyOGJjZjA5MTAzYzVmNiitUgU=: 00:20:42.884 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MWQ1ZmE2N2JmNmZmOWYyOTJjYTA4OTgxNDJiZjFmNWU3NGFlNmQ4NjUzODQzYThkZDgyOGJjZjA5MTAzYzVmNiitUgU=: 00:20:43.818 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.818 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.818 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:43.818 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.818 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.818 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.818 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:43.818 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:43.818 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:43.818 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:44.384 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:20:44.384 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:44.384 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:44.384 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:44.384 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:44.384 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:44.384 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:44.384 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.384 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.384 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.384 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:44.384 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:44.384 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:45.318 00:20:45.318 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:45.318 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:45.318 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.318 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.318 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.318 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.318 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.318 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.318 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:45.318 { 00:20:45.318 "cntlid": 89, 00:20:45.318 "qid": 0, 00:20:45.318 "state": "enabled", 00:20:45.318 "thread": "nvmf_tgt_poll_group_000", 00:20:45.318 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:45.318 "listen_address": { 00:20:45.318 "trtype": "TCP", 00:20:45.318 "adrfam": "IPv4", 00:20:45.318 "traddr": "10.0.0.2", 00:20:45.318 "trsvcid": "4420" 00:20:45.318 }, 00:20:45.318 "peer_address": { 00:20:45.318 "trtype": "TCP", 00:20:45.318 "adrfam": "IPv4", 00:20:45.318 "traddr": "10.0.0.1", 00:20:45.318 "trsvcid": "47156" 00:20:45.318 }, 00:20:45.318 "auth": { 00:20:45.318 "state": "completed", 00:20:45.318 "digest": "sha384", 00:20:45.318 "dhgroup": "ffdhe8192" 00:20:45.318 } 00:20:45.318 } 00:20:45.318 ]' 00:20:45.318 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:45.576 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:45.576 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:45.576 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:45.576 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:45.576 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:45.576 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:45.576 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.834 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjdjYzZlM2U5MGY1YjM2Y2VjNTBiYTlkZmIxZWE5OTRiMDU1MDg1YWI5MzllYjQ0Z8wjlA==: --dhchap-ctrl-secret DHHC-1:03:YmVmZTNlYTJjNDU1Y2NmZWRlODM1YTllN2YxY2I4ZDA0YzExN2NlNDI1YWY2ZTc3MWNmYWRiOTlhZDBjNDgxYf8D4oQ=: 00:20:45.834 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MjdjYzZlM2U5MGY1YjM2Y2VjNTBiYTlkZmIxZWE5OTRiMDU1MDg1YWI5MzllYjQ0Z8wjlA==: --dhchap-ctrl-secret DHHC-1:03:YmVmZTNlYTJjNDU1Y2NmZWRlODM1YTllN2YxY2I4ZDA0YzExN2NlNDI1YWY2ZTc3MWNmYWRiOTlhZDBjNDgxYf8D4oQ=: 00:20:46.768 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:46.768 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:46.768 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:46.768 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.768 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.768 11:32:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.768 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:46.768 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:46.768 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:47.026 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:20:47.026 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:47.026 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:47.026 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:47.026 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:47.026 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:47.026 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.026 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.026 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.026 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.026 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.026 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.026 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.960 00:20:47.960 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:47.960 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:47.960 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:48.218 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.218 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:20:48.218 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.218 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.218 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.218 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:48.218 { 00:20:48.218 "cntlid": 91, 00:20:48.218 "qid": 0, 00:20:48.218 "state": "enabled", 00:20:48.218 "thread": "nvmf_tgt_poll_group_000", 00:20:48.218 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:48.218 "listen_address": { 00:20:48.218 "trtype": "TCP", 00:20:48.218 "adrfam": "IPv4", 00:20:48.218 "traddr": "10.0.0.2", 00:20:48.218 "trsvcid": "4420" 00:20:48.218 }, 00:20:48.218 "peer_address": { 00:20:48.218 "trtype": "TCP", 00:20:48.218 "adrfam": "IPv4", 00:20:48.218 "traddr": "10.0.0.1", 00:20:48.218 "trsvcid": "47332" 00:20:48.218 }, 00:20:48.218 "auth": { 00:20:48.218 "state": "completed", 00:20:48.218 "digest": "sha384", 00:20:48.218 "dhgroup": "ffdhe8192" 00:20:48.218 } 00:20:48.218 } 00:20:48.218 ]' 00:20:48.218 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:48.218 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:48.218 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:48.476 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:48.476 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:48.476 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:48.476 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:48.476 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:48.733 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTJkNDE3NGEzYThiMzE3OTQxYzJhMzJhODYwNDcyNDE6+UtJ: --dhchap-ctrl-secret DHHC-1:02:NTUwZTE2ZjFmMGMzNzJjZGU5OTdkMGZiNTM2OGQzN2ZjYTllNGRhZWRhMjA1YWJkR4NUhw==: 00:20:48.733 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YTJkNDE3NGEzYThiMzE3OTQxYzJhMzJhODYwNDcyNDE6+UtJ: --dhchap-ctrl-secret DHHC-1:02:NTUwZTE2ZjFmMGMzNzJjZGU5OTdkMGZiNTM2OGQzN2ZjYTllNGRhZWRhMjA1YWJkR4NUhw==: 00:20:49.667 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:49.667 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:49.667 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:49.667 11:32:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.667 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.667 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.667 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:49.667 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:49.667 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:49.925 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:20:49.925 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:49.925 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:49.925 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:49.925 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:49.925 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:49.925 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.925 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.925 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.925 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.925 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.925 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.925 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:50.858 00:20:50.858 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:50.858 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:50.858 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:51.117 11:32:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.117 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:51.117 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.117 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.117 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.117 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:51.117 { 00:20:51.117 "cntlid": 93, 00:20:51.117 "qid": 0, 00:20:51.117 "state": "enabled", 00:20:51.117 "thread": "nvmf_tgt_poll_group_000", 00:20:51.117 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:51.117 "listen_address": { 00:20:51.117 "trtype": "TCP", 00:20:51.117 "adrfam": "IPv4", 00:20:51.117 "traddr": "10.0.0.2", 00:20:51.117 "trsvcid": "4420" 00:20:51.117 }, 00:20:51.117 "peer_address": { 00:20:51.117 "trtype": "TCP", 00:20:51.117 "adrfam": "IPv4", 00:20:51.117 "traddr": "10.0.0.1", 00:20:51.117 "trsvcid": "47354" 00:20:51.117 }, 00:20:51.117 "auth": { 00:20:51.117 "state": "completed", 00:20:51.117 "digest": "sha384", 00:20:51.117 "dhgroup": "ffdhe8192" 00:20:51.117 } 00:20:51.117 } 00:20:51.117 ]' 00:20:51.117 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:51.117 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:51.117 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:51.117 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:51.117 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:51.375 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:51.375 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:51.375 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:51.632 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTA4MjZlYTZiNzYxZGM3NWYyNGVmYmFjM2M0MjY4YTFhYWM2NmIzNDQzM2NmMTdl50iUxg==: --dhchap-ctrl-secret DHHC-1:01:MWMyNjk0ZTJjMDdmOTg4MGYzMTViYjczOTRmYjdlNmRN/1BA: 00:20:51.632 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NTA4MjZlYTZiNzYxZGM3NWYyNGVmYmFjM2M0MjY4YTFhYWM2NmIzNDQzM2NmMTdl50iUxg==: --dhchap-ctrl-secret DHHC-1:01:MWMyNjk0ZTJjMDdmOTg4MGYzMTViYjczOTRmYjdlNmRN/1BA: 00:20:52.566 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:52.566 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:52.566 11:32:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:52.566 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.566 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.566 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.566 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:52.566 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:52.566 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:52.823 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:20:52.824 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:52.824 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:52.824 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:52.824 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:52.824 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:52.824 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:52.824 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.824 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.824 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.824 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:52.824 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:52.824 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:53.756 00:20:53.756 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:53.756 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:53.756 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:54.014 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.014 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:54.014 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.014 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.014 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.014 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:54.014 { 00:20:54.014 "cntlid": 95, 00:20:54.014 "qid": 0, 00:20:54.014 "state": "enabled", 00:20:54.014 "thread": "nvmf_tgt_poll_group_000", 00:20:54.014 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:54.014 "listen_address": { 00:20:54.014 "trtype": "TCP", 00:20:54.014 "adrfam": "IPv4", 00:20:54.014 "traddr": "10.0.0.2", 00:20:54.014 "trsvcid": "4420" 00:20:54.014 }, 00:20:54.014 "peer_address": { 00:20:54.014 "trtype": "TCP", 00:20:54.014 "adrfam": "IPv4", 00:20:54.014 "traddr": "10.0.0.1", 00:20:54.014 "trsvcid": "47382" 00:20:54.014 }, 00:20:54.014 "auth": { 00:20:54.014 "state": "completed", 00:20:54.014 "digest": "sha384", 00:20:54.014 "dhgroup": "ffdhe8192" 00:20:54.015 } 00:20:54.015 } 00:20:54.015 ]' 00:20:54.015 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:54.015 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:54.015 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:54.015 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:54.015 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:54.272 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:54.272 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:54.272 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:54.530 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWQ1ZmE2N2JmNmZmOWYyOTJjYTA4OTgxNDJiZjFmNWU3NGFlNmQ4NjUzODQzYThkZDgyOGJjZjA5MTAzYzVmNiitUgU=: 00:20:54.530 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MWQ1ZmE2N2JmNmZmOWYyOTJjYTA4OTgxNDJiZjFmNWU3NGFlNmQ4NjUzODQzYThkZDgyOGJjZjA5MTAzYzVmNiitUgU=: 00:20:55.463 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:55.463 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:55.463 11:32:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:55.463 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.463 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.463 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.463 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:55.463 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:55.463 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:55.463 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:55.463 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:55.721 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:20:55.721 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:55.721 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:55.721 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:55.721 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:55.721 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:55.721 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:55.721 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.721 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.721 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.721 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:55.721 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:55.721 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:56.286 00:20:56.286 
11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:56.286 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:56.286 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:56.544 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.544 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:56.544 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.544 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.544 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.544 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:56.544 { 00:20:56.544 "cntlid": 97, 00:20:56.544 "qid": 0, 00:20:56.544 "state": "enabled", 00:20:56.544 "thread": "nvmf_tgt_poll_group_000", 00:20:56.544 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:56.544 "listen_address": { 00:20:56.544 "trtype": "TCP", 00:20:56.544 "adrfam": "IPv4", 00:20:56.544 "traddr": "10.0.0.2", 00:20:56.544 "trsvcid": "4420" 00:20:56.544 }, 00:20:56.544 "peer_address": { 00:20:56.544 "trtype": "TCP", 00:20:56.544 "adrfam": "IPv4", 00:20:56.544 "traddr": "10.0.0.1", 00:20:56.544 "trsvcid": "38798" 00:20:56.544 }, 00:20:56.544 "auth": { 00:20:56.544 "state": "completed", 00:20:56.544 "digest": "sha512", 00:20:56.544 "dhgroup": "null" 00:20:56.544 } 00:20:56.544 } 00:20:56.544 ]' 00:20:56.544 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:56.544 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:56.544 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:56.544 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:56.544 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:56.544 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:56.544 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:56.544 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.802 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjdjYzZlM2U5MGY1YjM2Y2VjNTBiYTlkZmIxZWE5OTRiMDU1MDg1YWI5MzllYjQ0Z8wjlA==: --dhchap-ctrl-secret DHHC-1:03:YmVmZTNlYTJjNDU1Y2NmZWRlODM1YTllN2YxY2I4ZDA0YzExN2NlNDI1YWY2ZTc3MWNmYWRiOTlhZDBjNDgxYf8D4oQ=: 00:20:56.802 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MjdjYzZlM2U5MGY1YjM2Y2VjNTBiYTlkZmIxZWE5OTRiMDU1MDg1YWI5MzllYjQ0Z8wjlA==: --dhchap-ctrl-secret DHHC-1:03:YmVmZTNlYTJjNDU1Y2NmZWRlODM1YTllN2YxY2I4ZDA0YzExN2NlNDI1YWY2ZTc3MWNmYWRiOTlhZDBjNDgxYf8D4oQ=: 00:20:57.736 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:57.736 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:57.736 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:57.736 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.736 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.993 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.993 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:57.993 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:57.993 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:58.251 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:20:58.251 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:58.251 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:58.251 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:58.251 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:58.251 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:58.251 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:58.251 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.251 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.251 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.251 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:58.251 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:58.251 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:58.509 00:20:58.509 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:58.509 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:58.509 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:58.768 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.768 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:58.768 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.768 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.768 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.768 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:58.768 { 00:20:58.768 "cntlid": 99, 00:20:58.768 "qid": 0, 00:20:58.768 "state": "enabled", 00:20:58.768 "thread": "nvmf_tgt_poll_group_000", 00:20:58.768 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:58.768 "listen_address": { 00:20:58.768 "trtype": "TCP", 00:20:58.768 "adrfam": "IPv4", 00:20:58.768 "traddr": "10.0.0.2", 00:20:58.768 "trsvcid": "4420" 00:20:58.768 }, 00:20:58.768 "peer_address": { 00:20:58.768 "trtype": "TCP", 00:20:58.768 "adrfam": "IPv4", 00:20:58.768 "traddr": "10.0.0.1", 00:20:58.768 "trsvcid": "38818" 00:20:58.768 }, 00:20:58.768 "auth": { 00:20:58.768 "state": "completed", 00:20:58.768 "digest": "sha512", 00:20:58.768 "dhgroup": "null" 00:20:58.768 } 00:20:58.768 } 00:20:58.768 ]' 00:20:58.768 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:58.768 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:58.768 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:58.768 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:58.768 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:58.768 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:58.768 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:58.768 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:59.334 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTJkNDE3NGEzYThiMzE3OTQxYzJhMzJhODYwNDcyNDE6+UtJ: --dhchap-ctrl-secret DHHC-1:02:NTUwZTE2ZjFmMGMzNzJjZGU5OTdkMGZiNTM2OGQzN2ZjYTllNGRhZWRhMjA1YWJkR4NUhw==: 00:20:59.334 11:32:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YTJkNDE3NGEzYThiMzE3OTQxYzJhMzJhODYwNDcyNDE6+UtJ: --dhchap-ctrl-secret DHHC-1:02:NTUwZTE2ZjFmMGMzNzJjZGU5OTdkMGZiNTM2OGQzN2ZjYTllNGRhZWRhMjA1YWJkR4NUhw==: 00:21:00.268 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:00.268 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:00.268 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:00.268 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.268 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.268 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.268 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:00.268 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:00.268 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:00.527 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:21:00.527 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:00.527 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:00.527 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:00.527 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:00.527 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:00.527 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:00.527 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.527 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.527 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.527 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:00.527 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:21:00.527 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:00.785 00:21:00.785 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:00.785 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:00.785 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:01.043 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.043 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:01.043 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.043 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.043 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.043 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:01.043 { 00:21:01.043 "cntlid": 101, 00:21:01.043 "qid": 0, 00:21:01.043 "state": "enabled", 00:21:01.043 "thread": "nvmf_tgt_poll_group_000", 00:21:01.043 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:01.043 "listen_address": { 00:21:01.043 "trtype": "TCP", 00:21:01.043 "adrfam": "IPv4", 00:21:01.043 "traddr": "10.0.0.2", 00:21:01.043 "trsvcid": "4420" 00:21:01.043 }, 00:21:01.043 "peer_address": { 00:21:01.043 "trtype": "TCP", 00:21:01.043 "adrfam": "IPv4", 00:21:01.043 "traddr": "10.0.0.1", 00:21:01.043 "trsvcid": "38840" 00:21:01.043 }, 00:21:01.043 "auth": { 00:21:01.043 "state": "completed", 00:21:01.043 "digest": "sha512", 00:21:01.043 "dhgroup": "null" 00:21:01.043 } 00:21:01.043 } 00:21:01.043 ]' 00:21:01.043 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:01.043 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:01.043 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:01.309 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:01.309 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:01.309 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:01.309 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:01.309 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:01.633 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:NTA4MjZlYTZiNzYxZGM3NWYyNGVmYmFjM2M0MjY4YTFhYWM2NmIzNDQzM2NmMTdl50iUxg==: --dhchap-ctrl-secret DHHC-1:01:MWMyNjk0ZTJjMDdmOTg4MGYzMTViYjczOTRmYjdlNmRN/1BA: 00:21:01.633 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NTA4MjZlYTZiNzYxZGM3NWYyNGVmYmFjM2M0MjY4YTFhYWM2NmIzNDQzM2NmMTdl50iUxg==: --dhchap-ctrl-secret DHHC-1:01:MWMyNjk0ZTJjMDdmOTg4MGYzMTViYjczOTRmYjdlNmRN/1BA: 00:21:02.566 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:02.566 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:02.566 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:02.566 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.567 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.567 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.567 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:02.567 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:02.567 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:02.824 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:21:02.824 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:02.824 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:02.824 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:02.824 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:02.824 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:02.824 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:02.824 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.824 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.824 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.824 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:02.824 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:02.824 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:03.083 00:21:03.083 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:03.083 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:03.083 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:03.341 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.341 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:03.341 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.341 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.341 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.341 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:03.341 { 00:21:03.341 "cntlid": 103, 00:21:03.341 "qid": 0, 00:21:03.341 "state": "enabled", 00:21:03.341 "thread": "nvmf_tgt_poll_group_000", 00:21:03.341 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:03.341 "listen_address": { 00:21:03.341 "trtype": "TCP", 00:21:03.341 "adrfam": "IPv4", 00:21:03.341 "traddr": "10.0.0.2", 00:21:03.341 "trsvcid": "4420" 00:21:03.341 }, 00:21:03.341 "peer_address": { 00:21:03.341 "trtype": "TCP", 00:21:03.341 "adrfam": "IPv4", 00:21:03.341 "traddr": "10.0.0.1", 00:21:03.341 "trsvcid": "38870" 00:21:03.341 }, 00:21:03.341 "auth": { 00:21:03.341 "state": "completed", 00:21:03.341 "digest": "sha512", 00:21:03.341 "dhgroup": "null" 00:21:03.341 } 00:21:03.341 } 00:21:03.341 ]' 00:21:03.341 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:03.341 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:03.341 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:03.599 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:03.599 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:03.599 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:03.599 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:03.599 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:03.857 11:33:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWQ1ZmE2N2JmNmZmOWYyOTJjYTA4OTgxNDJiZjFmNWU3NGFlNmQ4NjUzODQzYThkZDgyOGJjZjA5MTAzYzVmNiitUgU=: 00:21:03.857 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MWQ1ZmE2N2JmNmZmOWYyOTJjYTA4OTgxNDJiZjFmNWU3NGFlNmQ4NjUzODQzYThkZDgyOGJjZjA5MTAzYzVmNiitUgU=: 00:21:04.789 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:04.789 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:04.789 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:04.789 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.789 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.789 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.789 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:04.789 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:04.789 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:04.789 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:05.047 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:21:05.047 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:05.047 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:05.047 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:05.047 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:05.047 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:05.047 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:05.047 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.047 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.047 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.047 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
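[editor's note] After each attach the script verifies on both sides that authentication actually completed, as the entries around this point show: bdev_nvme_get_controllers on the host socket must report nvme0, and nvmf_subsystem_get_qpairs on the target must return a qpair whose auth object carries the expected state, digest and dhgroup, which the script checks with jq (the '[ { ... "auth": { "state": "completed", ... } } ]' blocks). The sketch below reproduces that check for the sha512/ffdhe2048/key0 iteration that starts just above; variable names and the [[ ]] assertions are illustrative, the RPC names and jq filters are taken verbatim from the log.

  # Sketch of the post-attach verification (not the original helper).
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  SUBNQN=nqn.2024-03.io.spdk:cnode0

  # Host side: the attached bdev controller must exist.
  name=$($RPC -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == nvme0 ]]

  # Target side: the admin qpair (qid 0) must have finished DH-HMAC-CHAP
  # with the parameters configured for this iteration.
  qpairs=$($RPC nvmf_subsystem_get_qpairs "$SUBNQN")
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]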
00:21:05.047 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:05.047 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:05.613 00:21:05.613 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:05.613 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:05.613 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:05.613 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.871 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:05.871 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.871 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.871 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.871 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:05.871 { 00:21:05.871 "cntlid": 105, 00:21:05.871 "qid": 0, 00:21:05.871 "state": "enabled", 00:21:05.871 "thread": "nvmf_tgt_poll_group_000", 00:21:05.871 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:05.871 "listen_address": { 00:21:05.871 "trtype": "TCP", 00:21:05.871 "adrfam": "IPv4", 00:21:05.871 "traddr": "10.0.0.2", 00:21:05.871 "trsvcid": "4420" 00:21:05.871 }, 00:21:05.871 "peer_address": { 00:21:05.871 "trtype": "TCP", 00:21:05.871 "adrfam": "IPv4", 00:21:05.871 "traddr": "10.0.0.1", 00:21:05.871 "trsvcid": "45240" 00:21:05.871 }, 00:21:05.871 "auth": { 00:21:05.871 "state": "completed", 00:21:05.871 "digest": "sha512", 00:21:05.871 "dhgroup": "ffdhe2048" 00:21:05.871 } 00:21:05.871 } 00:21:05.871 ]' 00:21:05.871 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:05.871 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:05.871 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:05.871 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:05.871 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:05.871 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:05.871 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:05.871 11:33:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:06.129 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjdjYzZlM2U5MGY1YjM2Y2VjNTBiYTlkZmIxZWE5OTRiMDU1MDg1YWI5MzllYjQ0Z8wjlA==: --dhchap-ctrl-secret DHHC-1:03:YmVmZTNlYTJjNDU1Y2NmZWRlODM1YTllN2YxY2I4ZDA0YzExN2NlNDI1YWY2ZTc3MWNmYWRiOTlhZDBjNDgxYf8D4oQ=: 00:21:06.129 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MjdjYzZlM2U5MGY1YjM2Y2VjNTBiYTlkZmIxZWE5OTRiMDU1MDg1YWI5MzllYjQ0Z8wjlA==: --dhchap-ctrl-secret DHHC-1:03:YmVmZTNlYTJjNDU1Y2NmZWRlODM1YTllN2YxY2I4ZDA0YzExN2NlNDI1YWY2ZTc3MWNmYWRiOTlhZDBjNDgxYf8D4oQ=: 00:21:07.063 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:07.063 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:07.063 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:07.063 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.063 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.063 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.063 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:07.063 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:07.063 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:07.321 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:21:07.321 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:07.321 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:07.321 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:07.321 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:07.321 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:07.321 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:07.321 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.321 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:07.321 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.321 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:07.321 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:07.321 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:07.887 00:21:07.887 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:07.887 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:07.887 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:08.145 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.145 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:08.145 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.145 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.145 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.145 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:08.145 { 00:21:08.145 "cntlid": 107, 00:21:08.145 "qid": 0, 00:21:08.145 "state": "enabled", 00:21:08.145 "thread": "nvmf_tgt_poll_group_000", 00:21:08.145 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:08.145 "listen_address": { 00:21:08.145 "trtype": "TCP", 00:21:08.145 "adrfam": "IPv4", 00:21:08.145 "traddr": "10.0.0.2", 00:21:08.145 "trsvcid": "4420" 00:21:08.145 }, 00:21:08.145 "peer_address": { 00:21:08.145 "trtype": "TCP", 00:21:08.145 "adrfam": "IPv4", 00:21:08.145 "traddr": "10.0.0.1", 00:21:08.145 "trsvcid": "45258" 00:21:08.145 }, 00:21:08.145 "auth": { 00:21:08.145 "state": "completed", 00:21:08.145 "digest": "sha512", 00:21:08.145 "dhgroup": "ffdhe2048" 00:21:08.145 } 00:21:08.145 } 00:21:08.145 ]' 00:21:08.145 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:08.145 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:08.145 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:08.145 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:08.145 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:21:08.145 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:08.145 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:08.145 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:08.403 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTJkNDE3NGEzYThiMzE3OTQxYzJhMzJhODYwNDcyNDE6+UtJ: --dhchap-ctrl-secret DHHC-1:02:NTUwZTE2ZjFmMGMzNzJjZGU5OTdkMGZiNTM2OGQzN2ZjYTllNGRhZWRhMjA1YWJkR4NUhw==: 00:21:08.403 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YTJkNDE3NGEzYThiMzE3OTQxYzJhMzJhODYwNDcyNDE6+UtJ: --dhchap-ctrl-secret DHHC-1:02:NTUwZTE2ZjFmMGMzNzJjZGU5OTdkMGZiNTM2OGQzN2ZjYTllNGRhZWRhMjA1YWJkR4NUhw==: 00:21:09.342 11:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:09.342 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:09.342 11:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:09.342 11:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.342 11:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.602 11:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.602 11:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:09.602 11:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:09.602 11:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:09.860 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:21:09.860 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:09.860 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:09.860 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:09.860 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:09.860 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:09.860 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
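[editor's note] Each iteration also exercises the same key material through the kernel initiator, as in the nvme connect / nvme disconnect pair a few entries above: the DHHC-1 secrets are passed directly on the command line (--dhchap-secret for the host key, --dhchap-ctrl-secret for the controller key), the connection is torn down, and the host entry is removed from the subsystem before the next combination is configured; the ffdhe2048/key2 iteration then continues below. A minimal sketch of that round trip, assuming the nvme-cli in the CI image supports these flags (it does in this log) and reusing the secrets printed above:

  # Kernel-initiator round trip (secrets copied verbatim from the log entries above).
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

  nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
      --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 \
      --dhchap-secret 'DHHC-1:01:YTJkNDE3NGEzYThiMzE3OTQxYzJhMzJhODYwNDcyNDE6+UtJ:' \
      --dhchap-ctrl-secret 'DHHC-1:02:NTUwZTE2ZjFmMGMzNzJjZGU5OTdkMGZiNTM2OGQzN2ZjYTllNGRhZWRhMjA1YWJkR4NUhw==:'

  nvme disconnect -n "$SUBNQN"   # the log expects: disconnected 1 controller(s)

  # Target cleanup before the next key/dhgroup combination.
  $RPC nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"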
00:21:09.860 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.860 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.860 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.860 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:09.860 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:09.860 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:10.118 00:21:10.118 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:10.118 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:10.118 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:10.376 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.376 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:10.376 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.376 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.376 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.376 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:10.376 { 00:21:10.376 "cntlid": 109, 00:21:10.376 "qid": 0, 00:21:10.376 "state": "enabled", 00:21:10.376 "thread": "nvmf_tgt_poll_group_000", 00:21:10.376 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:10.376 "listen_address": { 00:21:10.376 "trtype": "TCP", 00:21:10.376 "adrfam": "IPv4", 00:21:10.376 "traddr": "10.0.0.2", 00:21:10.376 "trsvcid": "4420" 00:21:10.376 }, 00:21:10.376 "peer_address": { 00:21:10.376 "trtype": "TCP", 00:21:10.376 "adrfam": "IPv4", 00:21:10.376 "traddr": "10.0.0.1", 00:21:10.376 "trsvcid": "45284" 00:21:10.376 }, 00:21:10.376 "auth": { 00:21:10.376 "state": "completed", 00:21:10.376 "digest": "sha512", 00:21:10.376 "dhgroup": "ffdhe2048" 00:21:10.376 } 00:21:10.376 } 00:21:10.376 ]' 00:21:10.377 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:10.377 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:10.377 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:10.377 11:33:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:10.377 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:10.377 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:10.377 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:10.377 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:10.944 11:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTA4MjZlYTZiNzYxZGM3NWYyNGVmYmFjM2M0MjY4YTFhYWM2NmIzNDQzM2NmMTdl50iUxg==: --dhchap-ctrl-secret DHHC-1:01:MWMyNjk0ZTJjMDdmOTg4MGYzMTViYjczOTRmYjdlNmRN/1BA: 00:21:10.944 11:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NTA4MjZlYTZiNzYxZGM3NWYyNGVmYmFjM2M0MjY4YTFhYWM2NmIzNDQzM2NmMTdl50iUxg==: --dhchap-ctrl-secret DHHC-1:01:MWMyNjk0ZTJjMDdmOTg4MGYzMTViYjczOTRmYjdlNmRN/1BA: 00:21:11.877 11:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:11.877 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:11.877 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:11.877 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.877 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.877 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.877 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:11.877 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:11.877 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:12.135 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:21:12.135 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:12.135 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:12.135 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:12.135 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:12.135 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:12.135 11:33:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:12.135 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.135 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.136 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.136 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:12.136 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:12.136 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:12.393 00:21:12.393 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:12.393 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:12.393 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:12.652 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.652 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:12.652 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.652 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.652 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.652 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:12.652 { 00:21:12.652 "cntlid": 111, 00:21:12.652 "qid": 0, 00:21:12.652 "state": "enabled", 00:21:12.652 "thread": "nvmf_tgt_poll_group_000", 00:21:12.652 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:12.652 "listen_address": { 00:21:12.652 "trtype": "TCP", 00:21:12.652 "adrfam": "IPv4", 00:21:12.652 "traddr": "10.0.0.2", 00:21:12.652 "trsvcid": "4420" 00:21:12.652 }, 00:21:12.652 "peer_address": { 00:21:12.652 "trtype": "TCP", 00:21:12.652 "adrfam": "IPv4", 00:21:12.652 "traddr": "10.0.0.1", 00:21:12.652 "trsvcid": "45300" 00:21:12.652 }, 00:21:12.652 "auth": { 00:21:12.652 "state": "completed", 00:21:12.652 "digest": "sha512", 00:21:12.652 "dhgroup": "ffdhe2048" 00:21:12.652 } 00:21:12.652 } 00:21:12.652 ]' 00:21:12.652 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:12.652 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:12.652 
11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:12.652 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:12.652 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:12.652 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:12.652 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:12.652 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:13.218 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWQ1ZmE2N2JmNmZmOWYyOTJjYTA4OTgxNDJiZjFmNWU3NGFlNmQ4NjUzODQzYThkZDgyOGJjZjA5MTAzYzVmNiitUgU=: 00:21:13.218 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MWQ1ZmE2N2JmNmZmOWYyOTJjYTA4OTgxNDJiZjFmNWU3NGFlNmQ4NjUzODQzYThkZDgyOGJjZjA5MTAzYzVmNiitUgU=: 00:21:14.151 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:14.151 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:14.151 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:14.151 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.151 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.151 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.151 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:14.151 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:14.151 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:14.151 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:14.409 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:21:14.409 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:14.409 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:14.409 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:14.409 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:14.409 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:14.409 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:14.409 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.409 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.409 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.409 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:14.409 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:14.409 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:14.667 00:21:14.667 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:14.667 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:14.667 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:14.925 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.925 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:14.925 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.925 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.925 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.925 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:14.925 { 00:21:14.925 "cntlid": 113, 00:21:14.925 "qid": 0, 00:21:14.925 "state": "enabled", 00:21:14.925 "thread": "nvmf_tgt_poll_group_000", 00:21:14.925 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:14.925 "listen_address": { 00:21:14.925 "trtype": "TCP", 00:21:14.925 "adrfam": "IPv4", 00:21:14.925 "traddr": "10.0.0.2", 00:21:14.925 "trsvcid": "4420" 00:21:14.925 }, 00:21:14.925 "peer_address": { 00:21:14.925 "trtype": "TCP", 00:21:14.925 "adrfam": "IPv4", 00:21:14.925 "traddr": "10.0.0.1", 00:21:14.925 "trsvcid": "45316" 00:21:14.925 }, 00:21:14.925 "auth": { 00:21:14.925 "state": "completed", 00:21:14.925 "digest": "sha512", 00:21:14.925 "dhgroup": "ffdhe3072" 00:21:14.925 } 00:21:14.925 } 00:21:14.925 ]' 00:21:14.925 11:33:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:14.925 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:14.925 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:15.183 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:15.183 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:15.183 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:15.183 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:15.183 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:15.441 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjdjYzZlM2U5MGY1YjM2Y2VjNTBiYTlkZmIxZWE5OTRiMDU1MDg1YWI5MzllYjQ0Z8wjlA==: --dhchap-ctrl-secret DHHC-1:03:YmVmZTNlYTJjNDU1Y2NmZWRlODM1YTllN2YxY2I4ZDA0YzExN2NlNDI1YWY2ZTc3MWNmYWRiOTlhZDBjNDgxYf8D4oQ=: 00:21:15.441 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MjdjYzZlM2U5MGY1YjM2Y2VjNTBiYTlkZmIxZWE5OTRiMDU1MDg1YWI5MzllYjQ0Z8wjlA==: --dhchap-ctrl-secret DHHC-1:03:YmVmZTNlYTJjNDU1Y2NmZWRlODM1YTllN2YxY2I4ZDA0YzExN2NlNDI1YWY2ZTc3MWNmYWRiOTlhZDBjNDgxYf8D4oQ=: 00:21:16.375 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:16.375 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:16.375 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:16.375 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.375 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.375 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.375 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:16.375 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:16.375 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:16.633 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:21:16.633 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:16.633 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:21:16.633 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:16.633 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:16.633 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:16.633 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:16.633 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.633 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.633 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.633 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:16.633 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:16.633 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:16.891 00:21:16.891 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:16.891 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:16.891 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:17.149 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.149 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:17.149 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.149 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.408 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.408 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:17.408 { 00:21:17.408 "cntlid": 115, 00:21:17.408 "qid": 0, 00:21:17.408 "state": "enabled", 00:21:17.408 "thread": "nvmf_tgt_poll_group_000", 00:21:17.408 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:17.408 "listen_address": { 00:21:17.408 "trtype": "TCP", 00:21:17.408 "adrfam": "IPv4", 00:21:17.408 "traddr": "10.0.0.2", 00:21:17.408 "trsvcid": "4420" 00:21:17.408 }, 00:21:17.408 "peer_address": { 00:21:17.408 "trtype": "TCP", 00:21:17.408 "adrfam": "IPv4", 
00:21:17.408 "traddr": "10.0.0.1", 00:21:17.408 "trsvcid": "54928" 00:21:17.408 }, 00:21:17.408 "auth": { 00:21:17.408 "state": "completed", 00:21:17.408 "digest": "sha512", 00:21:17.408 "dhgroup": "ffdhe3072" 00:21:17.408 } 00:21:17.408 } 00:21:17.408 ]' 00:21:17.408 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:17.408 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:17.408 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:17.408 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:17.408 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:17.408 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:17.408 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:17.408 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:17.666 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTJkNDE3NGEzYThiMzE3OTQxYzJhMzJhODYwNDcyNDE6+UtJ: --dhchap-ctrl-secret DHHC-1:02:NTUwZTE2ZjFmMGMzNzJjZGU5OTdkMGZiNTM2OGQzN2ZjYTllNGRhZWRhMjA1YWJkR4NUhw==: 00:21:17.666 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YTJkNDE3NGEzYThiMzE3OTQxYzJhMzJhODYwNDcyNDE6+UtJ: --dhchap-ctrl-secret DHHC-1:02:NTUwZTE2ZjFmMGMzNzJjZGU5OTdkMGZiNTM2OGQzN2ZjYTllNGRhZWRhMjA1YWJkR4NUhw==: 00:21:18.599 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:18.599 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:18.599 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:18.599 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.599 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.599 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.599 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:18.599 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:18.599 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:18.857 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:21:18.857 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:18.857 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:18.857 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:18.857 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:18.857 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:18.857 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:18.857 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.857 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.857 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.857 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:18.857 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:18.857 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:19.423 00:21:19.423 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:19.423 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:19.423 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:19.681 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.681 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:19.681 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.681 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.681 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.681 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:19.681 { 00:21:19.681 "cntlid": 117, 00:21:19.681 "qid": 0, 00:21:19.681 "state": "enabled", 00:21:19.681 "thread": "nvmf_tgt_poll_group_000", 00:21:19.681 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:19.681 "listen_address": { 00:21:19.681 "trtype": "TCP", 
00:21:19.681 "adrfam": "IPv4", 00:21:19.681 "traddr": "10.0.0.2", 00:21:19.681 "trsvcid": "4420" 00:21:19.681 }, 00:21:19.681 "peer_address": { 00:21:19.681 "trtype": "TCP", 00:21:19.681 "adrfam": "IPv4", 00:21:19.681 "traddr": "10.0.0.1", 00:21:19.681 "trsvcid": "54952" 00:21:19.681 }, 00:21:19.681 "auth": { 00:21:19.681 "state": "completed", 00:21:19.681 "digest": "sha512", 00:21:19.681 "dhgroup": "ffdhe3072" 00:21:19.681 } 00:21:19.681 } 00:21:19.681 ]' 00:21:19.681 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:19.681 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:19.681 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:19.681 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:19.681 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:19.681 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:19.681 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:19.681 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:19.939 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTA4MjZlYTZiNzYxZGM3NWYyNGVmYmFjM2M0MjY4YTFhYWM2NmIzNDQzM2NmMTdl50iUxg==: --dhchap-ctrl-secret DHHC-1:01:MWMyNjk0ZTJjMDdmOTg4MGYzMTViYjczOTRmYjdlNmRN/1BA: 00:21:19.939 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NTA4MjZlYTZiNzYxZGM3NWYyNGVmYmFjM2M0MjY4YTFhYWM2NmIzNDQzM2NmMTdl50iUxg==: --dhchap-ctrl-secret DHHC-1:01:MWMyNjk0ZTJjMDdmOTg4MGYzMTViYjczOTRmYjdlNmRN/1BA: 00:21:21.313 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:21.313 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:21.313 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:21.313 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.313 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.313 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.313 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:21.313 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:21.313 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:21.313 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:21:21.313 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:21.313 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:21.313 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:21.313 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:21.313 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:21.313 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:21.313 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.313 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.313 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.313 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:21.313 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:21.313 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:21.572 00:21:21.572 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:21.572 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:21.572 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:21.830 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.830 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:21.830 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.830 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.088 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.088 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:22.088 { 00:21:22.088 "cntlid": 119, 00:21:22.088 "qid": 0, 00:21:22.088 "state": "enabled", 00:21:22.088 "thread": "nvmf_tgt_poll_group_000", 00:21:22.088 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:22.088 "listen_address": { 00:21:22.088 "trtype": "TCP", 00:21:22.088 "adrfam": "IPv4", 00:21:22.088 "traddr": "10.0.0.2", 00:21:22.088 "trsvcid": "4420" 00:21:22.088 }, 00:21:22.088 "peer_address": { 00:21:22.088 "trtype": "TCP", 00:21:22.088 "adrfam": "IPv4", 00:21:22.088 "traddr": "10.0.0.1", 00:21:22.088 "trsvcid": "54984" 00:21:22.088 }, 00:21:22.088 "auth": { 00:21:22.088 "state": "completed", 00:21:22.088 "digest": "sha512", 00:21:22.088 "dhgroup": "ffdhe3072" 00:21:22.088 } 00:21:22.088 } 00:21:22.088 ]' 00:21:22.088 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:22.088 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:22.088 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:22.088 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:22.088 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:22.088 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:22.088 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:22.088 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:22.346 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWQ1ZmE2N2JmNmZmOWYyOTJjYTA4OTgxNDJiZjFmNWU3NGFlNmQ4NjUzODQzYThkZDgyOGJjZjA5MTAzYzVmNiitUgU=: 00:21:22.346 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MWQ1ZmE2N2JmNmZmOWYyOTJjYTA4OTgxNDJiZjFmNWU3NGFlNmQ4NjUzODQzYThkZDgyOGJjZjA5MTAzYzVmNiitUgU=: 00:21:23.277 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:23.277 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:23.277 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:23.277 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.277 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.278 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.278 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:23.278 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:23.278 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:23.278 11:33:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:23.536 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:21:23.536 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:23.536 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:23.536 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:23.536 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:23.536 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:23.536 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:23.536 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.536 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.536 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.536 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:23.536 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:23.536 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:24.101 00:21:24.101 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:24.101 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:24.101 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.359 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.359 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:24.359 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.359 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.359 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.359 11:33:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:24.360 { 00:21:24.360 "cntlid": 121, 00:21:24.360 "qid": 0, 00:21:24.360 "state": "enabled", 00:21:24.360 "thread": "nvmf_tgt_poll_group_000", 00:21:24.360 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:24.360 "listen_address": { 00:21:24.360 "trtype": "TCP", 00:21:24.360 "adrfam": "IPv4", 00:21:24.360 "traddr": "10.0.0.2", 00:21:24.360 "trsvcid": "4420" 00:21:24.360 }, 00:21:24.360 "peer_address": { 00:21:24.360 "trtype": "TCP", 00:21:24.360 "adrfam": "IPv4", 00:21:24.360 "traddr": "10.0.0.1", 00:21:24.360 "trsvcid": "55026" 00:21:24.360 }, 00:21:24.360 "auth": { 00:21:24.360 "state": "completed", 00:21:24.360 "digest": "sha512", 00:21:24.360 "dhgroup": "ffdhe4096" 00:21:24.360 } 00:21:24.360 } 00:21:24.360 ]' 00:21:24.360 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:24.360 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:24.360 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:24.360 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:24.360 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:24.360 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:24.360 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:24.360 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:24.647 11:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjdjYzZlM2U5MGY1YjM2Y2VjNTBiYTlkZmIxZWE5OTRiMDU1MDg1YWI5MzllYjQ0Z8wjlA==: --dhchap-ctrl-secret DHHC-1:03:YmVmZTNlYTJjNDU1Y2NmZWRlODM1YTllN2YxY2I4ZDA0YzExN2NlNDI1YWY2ZTc3MWNmYWRiOTlhZDBjNDgxYf8D4oQ=: 00:21:24.647 11:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MjdjYzZlM2U5MGY1YjM2Y2VjNTBiYTlkZmIxZWE5OTRiMDU1MDg1YWI5MzllYjQ0Z8wjlA==: --dhchap-ctrl-secret DHHC-1:03:YmVmZTNlYTJjNDU1Y2NmZWRlODM1YTllN2YxY2I4ZDA0YzExN2NlNDI1YWY2ZTc3MWNmYWRiOTlhZDBjNDgxYf8D4oQ=: 00:21:25.581 11:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:25.581 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:25.581 11:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:25.581 11:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.581 11:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.581 11:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
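The iteration that ends here exercises one digest/dhgroup/key combination (sha512, ffdhe4096, key0) end to end: the host restricts its DH-HMAC-CHAP options, the target registers the host NQN with the matching key pair, a controller is attached and its qpair is checked for auth.state == "completed", and the same handshake is then repeated through the kernel initiator before the host entry is removed again. A minimal sketch of that flow, condensed from the commands visible in this log: it assumes an SPDK target already listening on 10.0.0.2:4420, a host-side RPC server on /var/tmp/host.sock, "rpc.py" as shorthand for SPDK's scripts/rpc.py, and placeholder values for HOSTNQN, HOSTID and the two DHHC-1 secret strings (the test derives the real ones from its generated key files).

# Host side: accept only the digest/dhgroup combination under test.
rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

# Target side: register the host NQN with the key pair it must present
# (key0/ckey0 are the key names used by the test; HOSTNQN is a placeholder).
rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Host side: attach a controller with the same keys, confirm the qpair authenticated, detach.
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq '.[0].auth'
rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

# Repeat the handshake through the kernel initiator using the DHHC-1 secret strings,
# then remove the host entry again (DHCHAP_KEY/DHCHAP_CTRL_KEY are placeholders).
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q "$HOSTNQN" \
    --hostid "$HOSTID" -l 0 --dhchap-secret "$DHCHAP_KEY" --dhchap-ctrl-secret "$DHCHAP_CTRL_KEY"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"

The later iterations in this log repeat the same sequence for keys 1-3 and for the ffdhe6144 and ffdhe8192 groups; only the key index, the dhgroup passed to bdev_nvme_set_options, and the DHHC-1 secrets change, and the key3 iterations drop the controller key entirely.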
00:21:25.581 11:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:25.581 11:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:25.581 11:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:26.180 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:21:26.180 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:26.180 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:26.180 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:26.180 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:26.180 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:26.180 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:26.180 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.180 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.180 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.180 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:26.180 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:26.180 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:26.462 00:21:26.462 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:26.462 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:26.462 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:26.720 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.720 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:26.720 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.720 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.720 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.720 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:26.720 { 00:21:26.720 "cntlid": 123, 00:21:26.720 "qid": 0, 00:21:26.720 "state": "enabled", 00:21:26.720 "thread": "nvmf_tgt_poll_group_000", 00:21:26.720 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:26.720 "listen_address": { 00:21:26.720 "trtype": "TCP", 00:21:26.720 "adrfam": "IPv4", 00:21:26.720 "traddr": "10.0.0.2", 00:21:26.720 "trsvcid": "4420" 00:21:26.720 }, 00:21:26.720 "peer_address": { 00:21:26.720 "trtype": "TCP", 00:21:26.720 "adrfam": "IPv4", 00:21:26.720 "traddr": "10.0.0.1", 00:21:26.720 "trsvcid": "53738" 00:21:26.720 }, 00:21:26.720 "auth": { 00:21:26.720 "state": "completed", 00:21:26.720 "digest": "sha512", 00:21:26.720 "dhgroup": "ffdhe4096" 00:21:26.720 } 00:21:26.720 } 00:21:26.720 ]' 00:21:26.720 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:26.720 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:26.720 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:26.720 11:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:26.720 11:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:26.720 11:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:26.720 11:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:26.720 11:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:26.978 11:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTJkNDE3NGEzYThiMzE3OTQxYzJhMzJhODYwNDcyNDE6+UtJ: --dhchap-ctrl-secret DHHC-1:02:NTUwZTE2ZjFmMGMzNzJjZGU5OTdkMGZiNTM2OGQzN2ZjYTllNGRhZWRhMjA1YWJkR4NUhw==: 00:21:26.978 11:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YTJkNDE3NGEzYThiMzE3OTQxYzJhMzJhODYwNDcyNDE6+UtJ: --dhchap-ctrl-secret DHHC-1:02:NTUwZTE2ZjFmMGMzNzJjZGU5OTdkMGZiNTM2OGQzN2ZjYTllNGRhZWRhMjA1YWJkR4NUhw==: 00:21:27.911 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:27.911 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:27.911 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:27.911 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.169 11:33:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.169 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.169 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:28.169 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:28.169 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:28.428 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:21:28.428 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:28.428 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:28.428 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:28.428 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:28.428 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:28.428 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:28.428 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.428 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.428 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.428 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:28.428 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:28.428 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:28.994 00:21:28.994 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:28.994 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:28.994 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:28.994 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:28.994 11:33:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:28.994 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.994 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.252 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.252 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:29.252 { 00:21:29.252 "cntlid": 125, 00:21:29.252 "qid": 0, 00:21:29.252 "state": "enabled", 00:21:29.252 "thread": "nvmf_tgt_poll_group_000", 00:21:29.252 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:29.252 "listen_address": { 00:21:29.252 "trtype": "TCP", 00:21:29.252 "adrfam": "IPv4", 00:21:29.252 "traddr": "10.0.0.2", 00:21:29.252 "trsvcid": "4420" 00:21:29.252 }, 00:21:29.252 "peer_address": { 00:21:29.252 "trtype": "TCP", 00:21:29.252 "adrfam": "IPv4", 00:21:29.252 "traddr": "10.0.0.1", 00:21:29.252 "trsvcid": "53766" 00:21:29.252 }, 00:21:29.252 "auth": { 00:21:29.252 "state": "completed", 00:21:29.252 "digest": "sha512", 00:21:29.252 "dhgroup": "ffdhe4096" 00:21:29.252 } 00:21:29.252 } 00:21:29.252 ]' 00:21:29.252 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:29.252 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:29.252 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:29.252 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:29.252 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:29.252 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:29.252 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.252 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:29.509 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTA4MjZlYTZiNzYxZGM3NWYyNGVmYmFjM2M0MjY4YTFhYWM2NmIzNDQzM2NmMTdl50iUxg==: --dhchap-ctrl-secret DHHC-1:01:MWMyNjk0ZTJjMDdmOTg4MGYzMTViYjczOTRmYjdlNmRN/1BA: 00:21:29.509 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NTA4MjZlYTZiNzYxZGM3NWYyNGVmYmFjM2M0MjY4YTFhYWM2NmIzNDQzM2NmMTdl50iUxg==: --dhchap-ctrl-secret DHHC-1:01:MWMyNjk0ZTJjMDdmOTg4MGYzMTViYjczOTRmYjdlNmRN/1BA: 00:21:30.442 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:30.442 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:30.442 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:30.442 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.442 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.442 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.442 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:30.442 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:30.442 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:30.700 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:21:30.700 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:30.700 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:30.700 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:30.700 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:30.700 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:30.700 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:30.700 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.700 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.700 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.700 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:30.700 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:30.700 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:31.266 00:21:31.266 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:31.266 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:31.266 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.524 11:33:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.524 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:31.524 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.524 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.524 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.524 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:31.524 { 00:21:31.524 "cntlid": 127, 00:21:31.524 "qid": 0, 00:21:31.524 "state": "enabled", 00:21:31.524 "thread": "nvmf_tgt_poll_group_000", 00:21:31.524 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:31.524 "listen_address": { 00:21:31.524 "trtype": "TCP", 00:21:31.524 "adrfam": "IPv4", 00:21:31.524 "traddr": "10.0.0.2", 00:21:31.524 "trsvcid": "4420" 00:21:31.524 }, 00:21:31.524 "peer_address": { 00:21:31.524 "trtype": "TCP", 00:21:31.524 "adrfam": "IPv4", 00:21:31.524 "traddr": "10.0.0.1", 00:21:31.524 "trsvcid": "53804" 00:21:31.524 }, 00:21:31.524 "auth": { 00:21:31.524 "state": "completed", 00:21:31.524 "digest": "sha512", 00:21:31.524 "dhgroup": "ffdhe4096" 00:21:31.524 } 00:21:31.524 } 00:21:31.524 ]' 00:21:31.524 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:31.524 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:31.524 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:31.524 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:31.524 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:31.524 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:31.524 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:31.524 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:31.782 11:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWQ1ZmE2N2JmNmZmOWYyOTJjYTA4OTgxNDJiZjFmNWU3NGFlNmQ4NjUzODQzYThkZDgyOGJjZjA5MTAzYzVmNiitUgU=: 00:21:31.782 11:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MWQ1ZmE2N2JmNmZmOWYyOTJjYTA4OTgxNDJiZjFmNWU3NGFlNmQ4NjUzODQzYThkZDgyOGJjZjA5MTAzYzVmNiitUgU=: 00:21:32.715 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:32.715 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:32.715 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:32.715 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.715 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.973 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.973 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:32.973 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:32.973 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:32.973 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:33.231 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:21:33.231 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:33.231 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:33.231 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:33.231 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:33.231 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:33.231 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:33.231 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.231 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.231 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.231 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:33.231 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:33.231 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:33.796 00:21:33.796 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:33.796 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:33.796 
11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:34.054 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.054 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:34.054 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.054 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.054 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.054 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:34.054 { 00:21:34.054 "cntlid": 129, 00:21:34.054 "qid": 0, 00:21:34.054 "state": "enabled", 00:21:34.054 "thread": "nvmf_tgt_poll_group_000", 00:21:34.054 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:34.054 "listen_address": { 00:21:34.054 "trtype": "TCP", 00:21:34.054 "adrfam": "IPv4", 00:21:34.054 "traddr": "10.0.0.2", 00:21:34.054 "trsvcid": "4420" 00:21:34.054 }, 00:21:34.054 "peer_address": { 00:21:34.054 "trtype": "TCP", 00:21:34.054 "adrfam": "IPv4", 00:21:34.054 "traddr": "10.0.0.1", 00:21:34.054 "trsvcid": "53838" 00:21:34.054 }, 00:21:34.054 "auth": { 00:21:34.054 "state": "completed", 00:21:34.054 "digest": "sha512", 00:21:34.054 "dhgroup": "ffdhe6144" 00:21:34.054 } 00:21:34.054 } 00:21:34.054 ]' 00:21:34.054 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:34.054 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:34.054 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:34.054 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:34.054 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:34.054 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:34.054 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:34.054 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:34.620 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjdjYzZlM2U5MGY1YjM2Y2VjNTBiYTlkZmIxZWE5OTRiMDU1MDg1YWI5MzllYjQ0Z8wjlA==: --dhchap-ctrl-secret DHHC-1:03:YmVmZTNlYTJjNDU1Y2NmZWRlODM1YTllN2YxY2I4ZDA0YzExN2NlNDI1YWY2ZTc3MWNmYWRiOTlhZDBjNDgxYf8D4oQ=: 00:21:34.620 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MjdjYzZlM2U5MGY1YjM2Y2VjNTBiYTlkZmIxZWE5OTRiMDU1MDg1YWI5MzllYjQ0Z8wjlA==: --dhchap-ctrl-secret 
DHHC-1:03:YmVmZTNlYTJjNDU1Y2NmZWRlODM1YTllN2YxY2I4ZDA0YzExN2NlNDI1YWY2ZTc3MWNmYWRiOTlhZDBjNDgxYf8D4oQ=: 00:21:35.552 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:35.552 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:35.552 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:35.552 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.552 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.552 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.552 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:35.552 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:35.552 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:35.809 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:21:35.809 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:35.809 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:35.809 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:35.809 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:35.809 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:35.809 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:35.809 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.809 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.809 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.809 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:35.809 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:35.809 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:36.374 00:21:36.374 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:36.374 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:36.374 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:36.632 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.632 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:36.632 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.632 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.632 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.632 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:36.632 { 00:21:36.632 "cntlid": 131, 00:21:36.632 "qid": 0, 00:21:36.632 "state": "enabled", 00:21:36.632 "thread": "nvmf_tgt_poll_group_000", 00:21:36.632 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:36.632 "listen_address": { 00:21:36.632 "trtype": "TCP", 00:21:36.632 "adrfam": "IPv4", 00:21:36.632 "traddr": "10.0.0.2", 00:21:36.632 "trsvcid": "4420" 00:21:36.632 }, 00:21:36.632 "peer_address": { 00:21:36.632 "trtype": "TCP", 00:21:36.632 "adrfam": "IPv4", 00:21:36.632 "traddr": "10.0.0.1", 00:21:36.632 "trsvcid": "46884" 00:21:36.632 }, 00:21:36.632 "auth": { 00:21:36.632 "state": "completed", 00:21:36.632 "digest": "sha512", 00:21:36.632 "dhgroup": "ffdhe6144" 00:21:36.632 } 00:21:36.632 } 00:21:36.632 ]' 00:21:36.632 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:36.632 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:36.632 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:36.632 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:36.632 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:36.890 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:36.890 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:36.891 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:37.148 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTJkNDE3NGEzYThiMzE3OTQxYzJhMzJhODYwNDcyNDE6+UtJ: --dhchap-ctrl-secret DHHC-1:02:NTUwZTE2ZjFmMGMzNzJjZGU5OTdkMGZiNTM2OGQzN2ZjYTllNGRhZWRhMjA1YWJkR4NUhw==: 00:21:37.148 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YTJkNDE3NGEzYThiMzE3OTQxYzJhMzJhODYwNDcyNDE6+UtJ: --dhchap-ctrl-secret DHHC-1:02:NTUwZTE2ZjFmMGMzNzJjZGU5OTdkMGZiNTM2OGQzN2ZjYTllNGRhZWRhMjA1YWJkR4NUhw==: 00:21:38.081 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:38.081 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:38.081 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:38.081 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.081 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.081 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.081 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:38.081 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:38.081 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:38.339 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:21:38.339 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:38.339 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:38.339 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:38.339 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:38.339 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:38.339 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:38.339 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.339 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.339 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.339 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:38.339 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:38.339 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:38.904 00:21:38.904 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:38.904 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:38.904 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:39.161 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:39.161 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:39.161 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.161 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.418 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.418 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:39.418 { 00:21:39.418 "cntlid": 133, 00:21:39.418 "qid": 0, 00:21:39.418 "state": "enabled", 00:21:39.418 "thread": "nvmf_tgt_poll_group_000", 00:21:39.418 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:39.418 "listen_address": { 00:21:39.418 "trtype": "TCP", 00:21:39.418 "adrfam": "IPv4", 00:21:39.418 "traddr": "10.0.0.2", 00:21:39.418 "trsvcid": "4420" 00:21:39.418 }, 00:21:39.418 "peer_address": { 00:21:39.418 "trtype": "TCP", 00:21:39.418 "adrfam": "IPv4", 00:21:39.418 "traddr": "10.0.0.1", 00:21:39.418 "trsvcid": "46922" 00:21:39.418 }, 00:21:39.418 "auth": { 00:21:39.418 "state": "completed", 00:21:39.418 "digest": "sha512", 00:21:39.418 "dhgroup": "ffdhe6144" 00:21:39.418 } 00:21:39.418 } 00:21:39.418 ]' 00:21:39.418 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:39.418 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:39.418 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:39.418 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:39.418 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:39.418 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:39.418 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:39.418 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:39.675 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTA4MjZlYTZiNzYxZGM3NWYyNGVmYmFjM2M0MjY4YTFhYWM2NmIzNDQzM2NmMTdl50iUxg==: --dhchap-ctrl-secret 
DHHC-1:01:MWMyNjk0ZTJjMDdmOTg4MGYzMTViYjczOTRmYjdlNmRN/1BA: 00:21:39.675 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NTA4MjZlYTZiNzYxZGM3NWYyNGVmYmFjM2M0MjY4YTFhYWM2NmIzNDQzM2NmMTdl50iUxg==: --dhchap-ctrl-secret DHHC-1:01:MWMyNjk0ZTJjMDdmOTg4MGYzMTViYjczOTRmYjdlNmRN/1BA: 00:21:40.609 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:40.609 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:40.609 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:40.609 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.609 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.609 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.609 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:40.609 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:40.609 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:40.866 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:21:40.866 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:40.866 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:40.866 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:40.866 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:40.866 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:40.867 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:40.867 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.867 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.867 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.867 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:40.867 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:21:40.867 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:41.799 00:21:41.799 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:41.799 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:41.799 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:41.799 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:41.799 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:41.799 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.799 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.799 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.799 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:41.799 { 00:21:41.799 "cntlid": 135, 00:21:41.799 "qid": 0, 00:21:41.799 "state": "enabled", 00:21:41.799 "thread": "nvmf_tgt_poll_group_000", 00:21:41.799 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:41.799 "listen_address": { 00:21:41.799 "trtype": "TCP", 00:21:41.799 "adrfam": "IPv4", 00:21:41.799 "traddr": "10.0.0.2", 00:21:41.799 "trsvcid": "4420" 00:21:41.799 }, 00:21:41.799 "peer_address": { 00:21:41.799 "trtype": "TCP", 00:21:41.799 "adrfam": "IPv4", 00:21:41.799 "traddr": "10.0.0.1", 00:21:41.799 "trsvcid": "46948" 00:21:41.799 }, 00:21:41.799 "auth": { 00:21:41.799 "state": "completed", 00:21:41.799 "digest": "sha512", 00:21:41.799 "dhgroup": "ffdhe6144" 00:21:41.799 } 00:21:41.799 } 00:21:41.799 ]' 00:21:41.799 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:41.799 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:41.799 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:42.057 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:42.057 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:42.057 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:42.057 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:42.057 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:42.314 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MWQ1ZmE2N2JmNmZmOWYyOTJjYTA4OTgxNDJiZjFmNWU3NGFlNmQ4NjUzODQzYThkZDgyOGJjZjA5MTAzYzVmNiitUgU=: 00:21:42.314 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MWQ1ZmE2N2JmNmZmOWYyOTJjYTA4OTgxNDJiZjFmNWU3NGFlNmQ4NjUzODQzYThkZDgyOGJjZjA5MTAzYzVmNiitUgU=: 00:21:43.247 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:43.247 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:43.247 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:43.247 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.247 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.247 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.247 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:43.247 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:43.247 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:43.247 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:43.504 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:21:43.504 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:43.504 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:43.504 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:43.504 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:43.504 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:43.504 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:43.504 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.504 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.504 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.504 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:43.504 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:43.504 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:44.436 00:21:44.436 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:44.436 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:44.436 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:44.693 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.693 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:44.693 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.693 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.951 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.951 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:44.951 { 00:21:44.951 "cntlid": 137, 00:21:44.951 "qid": 0, 00:21:44.951 "state": "enabled", 00:21:44.951 "thread": "nvmf_tgt_poll_group_000", 00:21:44.951 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:44.951 "listen_address": { 00:21:44.951 "trtype": "TCP", 00:21:44.951 "adrfam": "IPv4", 00:21:44.951 "traddr": "10.0.0.2", 00:21:44.951 "trsvcid": "4420" 00:21:44.951 }, 00:21:44.951 "peer_address": { 00:21:44.951 "trtype": "TCP", 00:21:44.951 "adrfam": "IPv4", 00:21:44.951 "traddr": "10.0.0.1", 00:21:44.951 "trsvcid": "46974" 00:21:44.951 }, 00:21:44.951 "auth": { 00:21:44.951 "state": "completed", 00:21:44.951 "digest": "sha512", 00:21:44.951 "dhgroup": "ffdhe8192" 00:21:44.951 } 00:21:44.951 } 00:21:44.952 ]' 00:21:44.952 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:44.952 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:44.952 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:44.952 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:44.952 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:44.952 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:44.952 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:44.952 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:45.209 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjdjYzZlM2U5MGY1YjM2Y2VjNTBiYTlkZmIxZWE5OTRiMDU1MDg1YWI5MzllYjQ0Z8wjlA==: --dhchap-ctrl-secret DHHC-1:03:YmVmZTNlYTJjNDU1Y2NmZWRlODM1YTllN2YxY2I4ZDA0YzExN2NlNDI1YWY2ZTc3MWNmYWRiOTlhZDBjNDgxYf8D4oQ=: 00:21:45.209 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MjdjYzZlM2U5MGY1YjM2Y2VjNTBiYTlkZmIxZWE5OTRiMDU1MDg1YWI5MzllYjQ0Z8wjlA==: --dhchap-ctrl-secret DHHC-1:03:YmVmZTNlYTJjNDU1Y2NmZWRlODM1YTllN2YxY2I4ZDA0YzExN2NlNDI1YWY2ZTc3MWNmYWRiOTlhZDBjNDgxYf8D4oQ=: 00:21:46.142 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:46.142 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:46.142 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:46.142 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.142 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.142 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.142 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:46.142 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:46.142 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:46.400 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:21:46.400 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:46.400 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:46.400 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:46.400 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:46.400 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:46.400 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:46.400 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.400 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.400 11:33:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.400 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:46.400 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:46.400 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:47.333 00:21:47.333 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:47.333 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:47.333 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:47.591 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:47.591 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:47.591 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.591 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.591 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.591 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:47.591 { 00:21:47.591 "cntlid": 139, 00:21:47.591 "qid": 0, 00:21:47.591 "state": "enabled", 00:21:47.591 "thread": "nvmf_tgt_poll_group_000", 00:21:47.591 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:47.591 "listen_address": { 00:21:47.591 "trtype": "TCP", 00:21:47.591 "adrfam": "IPv4", 00:21:47.591 "traddr": "10.0.0.2", 00:21:47.591 "trsvcid": "4420" 00:21:47.591 }, 00:21:47.591 "peer_address": { 00:21:47.591 "trtype": "TCP", 00:21:47.591 "adrfam": "IPv4", 00:21:47.591 "traddr": "10.0.0.1", 00:21:47.591 "trsvcid": "59354" 00:21:47.591 }, 00:21:47.591 "auth": { 00:21:47.591 "state": "completed", 00:21:47.591 "digest": "sha512", 00:21:47.591 "dhgroup": "ffdhe8192" 00:21:47.591 } 00:21:47.591 } 00:21:47.591 ]' 00:21:47.591 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:47.591 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:47.591 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:47.591 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:47.591 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:47.849 11:33:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:47.849 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:47.849 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:48.107 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTJkNDE3NGEzYThiMzE3OTQxYzJhMzJhODYwNDcyNDE6+UtJ: --dhchap-ctrl-secret DHHC-1:02:NTUwZTE2ZjFmMGMzNzJjZGU5OTdkMGZiNTM2OGQzN2ZjYTllNGRhZWRhMjA1YWJkR4NUhw==: 00:21:48.107 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YTJkNDE3NGEzYThiMzE3OTQxYzJhMzJhODYwNDcyNDE6+UtJ: --dhchap-ctrl-secret DHHC-1:02:NTUwZTE2ZjFmMGMzNzJjZGU5OTdkMGZiNTM2OGQzN2ZjYTllNGRhZWRhMjA1YWJkR4NUhw==: 00:21:49.040 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:49.040 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:49.040 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:49.040 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.040 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.040 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.040 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:49.040 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:49.040 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:49.298 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:21:49.298 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:49.298 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:49.298 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:49.298 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:49.298 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:49.298 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:49.298 11:33:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.298 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.298 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.298 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:49.298 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:49.298 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:50.230 00:21:50.230 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:50.230 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:50.230 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:50.499 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:50.499 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:50.499 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.500 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.500 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.500 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:50.500 { 00:21:50.500 "cntlid": 141, 00:21:50.500 "qid": 0, 00:21:50.500 "state": "enabled", 00:21:50.500 "thread": "nvmf_tgt_poll_group_000", 00:21:50.500 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:50.500 "listen_address": { 00:21:50.500 "trtype": "TCP", 00:21:50.500 "adrfam": "IPv4", 00:21:50.500 "traddr": "10.0.0.2", 00:21:50.500 "trsvcid": "4420" 00:21:50.500 }, 00:21:50.500 "peer_address": { 00:21:50.500 "trtype": "TCP", 00:21:50.500 "adrfam": "IPv4", 00:21:50.500 "traddr": "10.0.0.1", 00:21:50.500 "trsvcid": "59384" 00:21:50.500 }, 00:21:50.500 "auth": { 00:21:50.500 "state": "completed", 00:21:50.500 "digest": "sha512", 00:21:50.500 "dhgroup": "ffdhe8192" 00:21:50.500 } 00:21:50.500 } 00:21:50.500 ]' 00:21:50.500 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:50.500 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:50.500 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:50.500 11:33:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:50.500 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:50.500 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:50.500 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:50.500 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:50.767 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTA4MjZlYTZiNzYxZGM3NWYyNGVmYmFjM2M0MjY4YTFhYWM2NmIzNDQzM2NmMTdl50iUxg==: --dhchap-ctrl-secret DHHC-1:01:MWMyNjk0ZTJjMDdmOTg4MGYzMTViYjczOTRmYjdlNmRN/1BA: 00:21:50.767 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NTA4MjZlYTZiNzYxZGM3NWYyNGVmYmFjM2M0MjY4YTFhYWM2NmIzNDQzM2NmMTdl50iUxg==: --dhchap-ctrl-secret DHHC-1:01:MWMyNjk0ZTJjMDdmOTg4MGYzMTViYjczOTRmYjdlNmRN/1BA: 00:21:51.778 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:51.778 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:51.778 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:51.778 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.778 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.778 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.778 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:51.778 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:51.778 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:52.055 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:21:52.055 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:52.055 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:52.055 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:52.055 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:52.055 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:52.055 11:33:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:52.055 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.055 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.055 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.055 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:52.055 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:52.055 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:52.988 00:21:52.988 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:52.988 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:52.988 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:53.246 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:53.246 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:53.246 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.246 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.246 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.246 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:53.246 { 00:21:53.246 "cntlid": 143, 00:21:53.246 "qid": 0, 00:21:53.246 "state": "enabled", 00:21:53.246 "thread": "nvmf_tgt_poll_group_000", 00:21:53.246 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:53.246 "listen_address": { 00:21:53.246 "trtype": "TCP", 00:21:53.246 "adrfam": "IPv4", 00:21:53.246 "traddr": "10.0.0.2", 00:21:53.246 "trsvcid": "4420" 00:21:53.246 }, 00:21:53.246 "peer_address": { 00:21:53.246 "trtype": "TCP", 00:21:53.246 "adrfam": "IPv4", 00:21:53.246 "traddr": "10.0.0.1", 00:21:53.246 "trsvcid": "59416" 00:21:53.246 }, 00:21:53.246 "auth": { 00:21:53.246 "state": "completed", 00:21:53.246 "digest": "sha512", 00:21:53.246 "dhgroup": "ffdhe8192" 00:21:53.246 } 00:21:53.246 } 00:21:53.246 ]' 00:21:53.246 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:53.504 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:53.504 
11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:53.504 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:53.504 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:53.504 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:53.504 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:53.504 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:53.762 11:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWQ1ZmE2N2JmNmZmOWYyOTJjYTA4OTgxNDJiZjFmNWU3NGFlNmQ4NjUzODQzYThkZDgyOGJjZjA5MTAzYzVmNiitUgU=: 00:21:53.762 11:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MWQ1ZmE2N2JmNmZmOWYyOTJjYTA4OTgxNDJiZjFmNWU3NGFlNmQ4NjUzODQzYThkZDgyOGJjZjA5MTAzYzVmNiitUgU=: 00:21:54.695 11:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:54.695 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:54.695 11:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:54.695 11:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.695 11:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.695 11:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.695 11:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:54.695 11:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:21:54.695 11:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:54.695 11:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:54.695 11:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:54.695 11:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:54.953 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:21:54.953 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:54.953 11:33:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:54.953 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:54.953 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:54.953 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:54.953 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:54.953 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.953 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.953 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.953 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:54.953 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:54.954 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:55.887 00:21:55.887 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:55.887 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:55.887 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:56.145 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:56.145 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:56.145 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.145 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.145 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.145 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:56.145 { 00:21:56.145 "cntlid": 145, 00:21:56.145 "qid": 0, 00:21:56.145 "state": "enabled", 00:21:56.145 "thread": "nvmf_tgt_poll_group_000", 00:21:56.145 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:56.145 "listen_address": { 00:21:56.145 "trtype": "TCP", 00:21:56.145 "adrfam": "IPv4", 00:21:56.145 "traddr": "10.0.0.2", 00:21:56.145 "trsvcid": "4420" 00:21:56.145 }, 00:21:56.145 "peer_address": { 00:21:56.145 
"trtype": "TCP", 00:21:56.145 "adrfam": "IPv4", 00:21:56.145 "traddr": "10.0.0.1", 00:21:56.145 "trsvcid": "37348" 00:21:56.145 }, 00:21:56.145 "auth": { 00:21:56.145 "state": "completed", 00:21:56.145 "digest": "sha512", 00:21:56.145 "dhgroup": "ffdhe8192" 00:21:56.145 } 00:21:56.145 } 00:21:56.145 ]' 00:21:56.145 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:56.145 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:56.145 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:56.145 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:56.145 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:56.145 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:56.145 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:56.145 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:56.710 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjdjYzZlM2U5MGY1YjM2Y2VjNTBiYTlkZmIxZWE5OTRiMDU1MDg1YWI5MzllYjQ0Z8wjlA==: --dhchap-ctrl-secret DHHC-1:03:YmVmZTNlYTJjNDU1Y2NmZWRlODM1YTllN2YxY2I4ZDA0YzExN2NlNDI1YWY2ZTc3MWNmYWRiOTlhZDBjNDgxYf8D4oQ=: 00:21:56.710 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MjdjYzZlM2U5MGY1YjM2Y2VjNTBiYTlkZmIxZWE5OTRiMDU1MDg1YWI5MzllYjQ0Z8wjlA==: --dhchap-ctrl-secret DHHC-1:03:YmVmZTNlYTJjNDU1Y2NmZWRlODM1YTllN2YxY2I4ZDA0YzExN2NlNDI1YWY2ZTc3MWNmYWRiOTlhZDBjNDgxYf8D4oQ=: 00:21:57.643 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:57.643 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:57.643 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:57.643 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.643 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.643 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.643 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:21:57.643 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.643 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.643 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.643 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:21:57.643 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:57.643 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:21:57.643 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:21:57.643 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:57.643 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:21:57.643 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:57.643 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:21:57.643 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:21:57.643 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:21:58.609 request: 00:21:58.609 { 00:21:58.609 "name": "nvme0", 00:21:58.609 "trtype": "tcp", 00:21:58.609 "traddr": "10.0.0.2", 00:21:58.609 "adrfam": "ipv4", 00:21:58.609 "trsvcid": "4420", 00:21:58.609 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:58.609 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:58.609 "prchk_reftag": false, 00:21:58.609 "prchk_guard": false, 00:21:58.609 "hdgst": false, 00:21:58.609 "ddgst": false, 00:21:58.609 "dhchap_key": "key2", 00:21:58.609 "allow_unrecognized_csi": false, 00:21:58.609 "method": "bdev_nvme_attach_controller", 00:21:58.609 "req_id": 1 00:21:58.609 } 00:21:58.609 Got JSON-RPC error response 00:21:58.609 response: 00:21:58.609 { 00:21:58.609 "code": -5, 00:21:58.609 "message": "Input/output error" 00:21:58.609 } 00:21:58.609 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:58.609 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:58.609 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:58.609 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:58.609 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:58.609 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.609 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.609 11:33:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.609 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:58.609 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.609 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.609 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.609 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:58.609 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:58.609 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:58.609 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:21:58.609 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:58.609 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:21:58.609 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:58.609 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:58.609 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:58.609 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:59.175 request: 00:21:59.175 { 00:21:59.175 "name": "nvme0", 00:21:59.175 "trtype": "tcp", 00:21:59.175 "traddr": "10.0.0.2", 00:21:59.175 "adrfam": "ipv4", 00:21:59.175 "trsvcid": "4420", 00:21:59.175 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:59.175 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:59.175 "prchk_reftag": false, 00:21:59.175 "prchk_guard": false, 00:21:59.175 "hdgst": false, 00:21:59.175 "ddgst": false, 00:21:59.175 "dhchap_key": "key1", 00:21:59.175 "dhchap_ctrlr_key": "ckey2", 00:21:59.175 "allow_unrecognized_csi": false, 00:21:59.175 "method": "bdev_nvme_attach_controller", 00:21:59.175 "req_id": 1 00:21:59.175 } 00:21:59.175 Got JSON-RPC error response 00:21:59.175 response: 00:21:59.175 { 00:21:59.175 "code": -5, 00:21:59.175 "message": "Input/output error" 00:21:59.175 } 00:21:59.175 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:59.175 11:33:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:59.175 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:59.175 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:59.175 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:59.175 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.175 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.175 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.175 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:21:59.175 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.175 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.175 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.175 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:59.175 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:59.175 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:59.175 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:21:59.175 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:59.175 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:21:59.175 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:59.175 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:59.175 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:59.175 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:00.107 request: 00:22:00.107 { 00:22:00.107 "name": "nvme0", 00:22:00.107 "trtype": "tcp", 00:22:00.107 "traddr": "10.0.0.2", 00:22:00.107 "adrfam": "ipv4", 00:22:00.107 "trsvcid": "4420", 00:22:00.107 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:00.107 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:00.107 "prchk_reftag": false, 00:22:00.107 "prchk_guard": false, 00:22:00.107 "hdgst": false, 00:22:00.107 "ddgst": false, 00:22:00.107 "dhchap_key": "key1", 00:22:00.107 "dhchap_ctrlr_key": "ckey1", 00:22:00.107 "allow_unrecognized_csi": false, 00:22:00.107 "method": "bdev_nvme_attach_controller", 00:22:00.107 "req_id": 1 00:22:00.107 } 00:22:00.107 Got JSON-RPC error response 00:22:00.107 response: 00:22:00.107 { 00:22:00.107 "code": -5, 00:22:00.107 "message": "Input/output error" 00:22:00.107 } 00:22:00.107 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:00.107 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:00.107 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:00.107 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:00.107 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:00.107 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.107 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.107 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.107 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 3821551 00:22:00.107 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 3821551 ']' 00:22:00.107 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 3821551 00:22:00.107 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:22:00.107 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:00.107 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3821551 00:22:00.107 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:00.107 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:00.107 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3821551' 00:22:00.107 killing process with pid 3821551 00:22:00.107 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 3821551 00:22:00.107 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 3821551 00:22:00.365 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:22:00.365 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:00.365 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:00.365 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:00.365 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3844755 00:22:00.365 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:22:00.365 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3844755 00:22:00.365 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 3844755 ']' 00:22:00.365 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:00.365 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:00.365 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:00.365 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:00.365 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.623 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:00.623 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:22:00.623 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:00.623 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:00.623 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.623 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:00.623 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:00.623 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 3844755 00:22:00.623 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 3844755 ']' 00:22:00.623 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:00.623 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:00.623 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:00.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:00.623 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:00.623 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.881 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:00.881 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:22:00.881 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:22:00.881 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.881 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.140 null0 00:22:01.140 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.140 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:01.140 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.c5V 00:22:01.140 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.140 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.140 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.140 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.7Ts ]] 00:22:01.140 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.7Ts 00:22:01.140 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.140 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.140 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.140 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:01.140 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.p9J 00:22:01.140 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.140 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.140 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.140 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.ODc ]] 00:22:01.140 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ODc 00:22:01.140 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.140 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.140 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.140 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:01.140 11:34:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.raq 00:22:01.140 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.140 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.140 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.140 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.sqy ]] 00:22:01.140 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.sqy 00:22:01.140 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.140 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.140 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.140 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:01.140 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.qqJ 00:22:01.140 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.140 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.140 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.140 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:22:01.140 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:22:01.140 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:01.140 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:01.140 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:01.140 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:01.140 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:01.140 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:01.140 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.140 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.140 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.140 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:01.140 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
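The connect_authenticate step traced next can be read as one host-side attach plus two verification calls, distilled from the very commands in this trace (rpc.py path shortened; /var/tmp/host.sock is the per-test host RPC socket):

    # Host side: attach an NVMe-oF/TCP controller, authenticating with key3.
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3

    # Verify: the controller shows up on the host, and the target's qpair
    # listing reports auth state "completed" with the expected digest/dhgroup.
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
    scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0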
00:22:01.140 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:03.038 nvme0n1 00:22:03.038 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:03.038 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:03.038 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:03.038 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:03.038 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:03.038 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.038 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.038 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.038 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:03.038 { 00:22:03.038 "cntlid": 1, 00:22:03.038 "qid": 0, 00:22:03.038 "state": "enabled", 00:22:03.038 "thread": "nvmf_tgt_poll_group_000", 00:22:03.038 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:03.038 "listen_address": { 00:22:03.038 "trtype": "TCP", 00:22:03.038 "adrfam": "IPv4", 00:22:03.038 "traddr": "10.0.0.2", 00:22:03.038 "trsvcid": "4420" 00:22:03.038 }, 00:22:03.038 "peer_address": { 00:22:03.038 "trtype": "TCP", 00:22:03.038 "adrfam": "IPv4", 00:22:03.038 "traddr": "10.0.0.1", 00:22:03.038 "trsvcid": "37376" 00:22:03.038 }, 00:22:03.038 "auth": { 00:22:03.038 "state": "completed", 00:22:03.038 "digest": "sha512", 00:22:03.038 "dhgroup": "ffdhe8192" 00:22:03.038 } 00:22:03.038 } 00:22:03.038 ]' 00:22:03.038 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:03.038 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:03.038 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:03.038 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:03.038 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:03.038 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:03.038 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:03.038 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:03.296 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MWQ1ZmE2N2JmNmZmOWYyOTJjYTA4OTgxNDJiZjFmNWU3NGFlNmQ4NjUzODQzYThkZDgyOGJjZjA5MTAzYzVmNiitUgU=: 00:22:03.296 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MWQ1ZmE2N2JmNmZmOWYyOTJjYTA4OTgxNDJiZjFmNWU3NGFlNmQ4NjUzODQzYThkZDgyOGJjZjA5MTAzYzVmNiitUgU=: 00:22:04.228 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:04.228 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:04.228 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:04.228 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.228 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.485 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.486 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:04.486 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.486 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.486 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.486 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:22:04.486 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:22:04.743 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:04.743 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:04.743 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:04.743 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:04.743 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:04.743 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:04.743 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:04.743 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:04.743 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:04.743 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:05.001 request: 00:22:05.001 { 00:22:05.001 "name": "nvme0", 00:22:05.001 "trtype": "tcp", 00:22:05.001 "traddr": "10.0.0.2", 00:22:05.001 "adrfam": "ipv4", 00:22:05.001 "trsvcid": "4420", 00:22:05.001 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:05.001 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:05.001 "prchk_reftag": false, 00:22:05.001 "prchk_guard": false, 00:22:05.001 "hdgst": false, 00:22:05.001 "ddgst": false, 00:22:05.001 "dhchap_key": "key3", 00:22:05.001 "allow_unrecognized_csi": false, 00:22:05.001 "method": "bdev_nvme_attach_controller", 00:22:05.001 "req_id": 1 00:22:05.001 } 00:22:05.001 Got JSON-RPC error response 00:22:05.001 response: 00:22:05.001 { 00:22:05.001 "code": -5, 00:22:05.001 "message": "Input/output error" 00:22:05.001 } 00:22:05.001 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:05.001 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:05.001 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:05.001 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:05.001 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:22:05.001 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:22:05.001 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:05.001 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:05.258 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:05.258 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:05.258 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:05.258 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:05.258 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:05.258 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:05.258 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:05.258 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:05.258 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:05.258 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:05.516 request: 00:22:05.516 { 00:22:05.516 "name": "nvme0", 00:22:05.516 "trtype": "tcp", 00:22:05.516 "traddr": "10.0.0.2", 00:22:05.516 "adrfam": "ipv4", 00:22:05.516 "trsvcid": "4420", 00:22:05.516 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:05.516 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:05.516 "prchk_reftag": false, 00:22:05.516 "prchk_guard": false, 00:22:05.516 "hdgst": false, 00:22:05.516 "ddgst": false, 00:22:05.516 "dhchap_key": "key3", 00:22:05.516 "allow_unrecognized_csi": false, 00:22:05.516 "method": "bdev_nvme_attach_controller", 00:22:05.516 "req_id": 1 00:22:05.516 } 00:22:05.516 Got JSON-RPC error response 00:22:05.516 response: 00:22:05.516 { 00:22:05.516 "code": -5, 00:22:05.516 "message": "Input/output error" 00:22:05.516 } 00:22:05.516 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:05.516 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:05.516 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:05.516 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:05.516 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:05.516 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:22:05.516 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:05.516 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:05.516 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:05.516 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:05.773 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:05.773 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.773 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.773 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.773 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:05.773 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.773 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.773 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.773 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:05.773 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:05.773 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:05.773 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:05.773 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:05.773 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:05.773 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:05.773 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:05.773 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:05.773 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:06.338 request: 00:22:06.338 { 00:22:06.338 "name": "nvme0", 00:22:06.338 "trtype": "tcp", 00:22:06.338 "traddr": "10.0.0.2", 00:22:06.338 "adrfam": "ipv4", 00:22:06.338 "trsvcid": "4420", 00:22:06.338 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:06.338 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:06.338 "prchk_reftag": false, 00:22:06.338 "prchk_guard": false, 00:22:06.338 "hdgst": false, 00:22:06.338 "ddgst": false, 00:22:06.338 "dhchap_key": "key0", 00:22:06.338 "dhchap_ctrlr_key": "key1", 00:22:06.338 "allow_unrecognized_csi": false, 00:22:06.338 "method": "bdev_nvme_attach_controller", 00:22:06.338 "req_id": 1 00:22:06.338 } 00:22:06.338 Got JSON-RPC error response 00:22:06.338 response: 00:22:06.338 { 00:22:06.338 "code": -5, 00:22:06.338 "message": "Input/output error" 00:22:06.338 } 00:22:06.338 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:06.338 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:06.338 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:06.338 11:34:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:06.338 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:22:06.338 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:06.338 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:06.595 nvme0n1 00:22:06.595 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:22:06.595 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:22:06.595 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:06.853 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.853 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:06.853 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:07.418 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:07.418 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.418 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.418 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.418 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:07.418 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:07.418 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:08.788 nvme0n1 00:22:08.788 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:22:08.788 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:22:08.788 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:09.045 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:09.045 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:09.045 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.045 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.045 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.045 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:22:09.045 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:09.045 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:22:09.302 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:09.302 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NTA4MjZlYTZiNzYxZGM3NWYyNGVmYmFjM2M0MjY4YTFhYWM2NmIzNDQzM2NmMTdl50iUxg==: --dhchap-ctrl-secret DHHC-1:03:MWQ1ZmE2N2JmNmZmOWYyOTJjYTA4OTgxNDJiZjFmNWU3NGFlNmQ4NjUzODQzYThkZDgyOGJjZjA5MTAzYzVmNiitUgU=: 00:22:09.302 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NTA4MjZlYTZiNzYxZGM3NWYyNGVmYmFjM2M0MjY4YTFhYWM2NmIzNDQzM2NmMTdl50iUxg==: --dhchap-ctrl-secret DHHC-1:03:MWQ1ZmE2N2JmNmZmOWYyOTJjYTA4OTgxNDJiZjFmNWU3NGFlNmQ4NjUzODQzYThkZDgyOGJjZjA5MTAzYzVmNiitUgU=: 00:22:10.235 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:22:10.235 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:22:10.235 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:22:10.235 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:22:10.235 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:22:10.235 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:22:10.235 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:22:10.235 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:10.235 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:10.492 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:22:10.492 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:10.492 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:22:10.492 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:10.492 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:10.492 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:10.492 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:10.492 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:10.492 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:10.492 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:11.424 request: 00:22:11.424 { 00:22:11.424 "name": "nvme0", 00:22:11.424 "trtype": "tcp", 00:22:11.424 "traddr": "10.0.0.2", 00:22:11.424 "adrfam": "ipv4", 00:22:11.424 "trsvcid": "4420", 00:22:11.424 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:11.424 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:11.424 "prchk_reftag": false, 00:22:11.424 "prchk_guard": false, 00:22:11.424 "hdgst": false, 00:22:11.424 "ddgst": false, 00:22:11.424 "dhchap_key": "key1", 00:22:11.424 "allow_unrecognized_csi": false, 00:22:11.424 "method": "bdev_nvme_attach_controller", 00:22:11.424 "req_id": 1 00:22:11.424 } 00:22:11.424 Got JSON-RPC error response 00:22:11.424 response: 00:22:11.424 { 00:22:11.424 "code": -5, 00:22:11.424 "message": "Input/output error" 00:22:11.424 } 00:22:11.424 11:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:11.424 11:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:11.424 11:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:11.424 11:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:11.424 11:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:11.424 11:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:11.424 11:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:12.796 nvme0n1 00:22:12.796 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:22:12.796 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:22:12.796 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:13.055 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:13.055 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:13.055 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:13.620 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:13.620 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.620 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.620 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.620 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:22:13.620 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:13.620 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:13.878 nvme0n1 00:22:13.878 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:22:13.878 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:22:13.878 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:14.136 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:14.136 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:14.136 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:14.394 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:14.394 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.394 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.394 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.394 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:YTJkNDE3NGEzYThiMzE3OTQxYzJhMzJhODYwNDcyNDE6+UtJ: '' 2s 00:22:14.394 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:14.394 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:14.394 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:YTJkNDE3NGEzYThiMzE3OTQxYzJhMzJhODYwNDcyNDE6+UtJ: 00:22:14.394 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:22:14.394 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:14.394 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:14.394 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:YTJkNDE3NGEzYThiMzE3OTQxYzJhMzJhODYwNDcyNDE6+UtJ: ]] 00:22:14.394 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:YTJkNDE3NGEzYThiMzE3OTQxYzJhMzJhODYwNDcyNDE6+UtJ: 00:22:14.394 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:22:14.394 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:14.394 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:16.918 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:22:16.918 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:22:16.918 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:22:16.918 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:22:16.918 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:22:16.918 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:22:16.918 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:22:16.918 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key key2 00:22:16.918 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.918 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.918 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.918 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:NTA4MjZlYTZiNzYxZGM3NWYyNGVmYmFjM2M0MjY4YTFhYWM2NmIzNDQzM2NmMTdl50iUxg==: 2s 00:22:16.918 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:16.918 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:16.918 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:22:16.918 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NTA4MjZlYTZiNzYxZGM3NWYyNGVmYmFjM2M0MjY4YTFhYWM2NmIzNDQzM2NmMTdl50iUxg==: 00:22:16.918 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:16.918 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:16.918 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:22:16.918 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NTA4MjZlYTZiNzYxZGM3NWYyNGVmYmFjM2M0MjY4YTFhYWM2NmIzNDQzM2NmMTdl50iUxg==: ]] 00:22:16.918 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NTA4MjZlYTZiNzYxZGM3NWYyNGVmYmFjM2M0MjY4YTFhYWM2NmIzNDQzM2NmMTdl50iUxg==: 00:22:16.918 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:16.918 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:18.815 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:22:18.815 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:22:18.815 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:22:18.815 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:22:18.815 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:22:18.815 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:22:18.815 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:22:18.815 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:18.815 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:18.815 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:18.815 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.815 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.815 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.815 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:18.815 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:18.815 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:20.244 nvme0n1 00:22:20.244 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:20.244 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.244 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.244 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.244 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:20.244 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:21.176 11:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:22:21.177 11:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:22:21.177 11:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:21.177 11:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.177 11:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:21.177 11:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.177 11:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.177 11:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.177 11:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:22:21.177 11:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:22:21.742 11:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:22:21.742 11:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:22:21.742 11:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:22:21.999 11:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.999 11:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:21.999 11:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.999 11:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.999 11:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.999 11:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:21.999 11:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:21.999 11:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:21.999 11:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:22:21.999 11:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:21.999 11:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:22:21.999 11:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:21.999 11:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:21.999 11:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:22.931 request: 00:22:22.931 { 00:22:22.931 "name": "nvme0", 00:22:22.931 "dhchap_key": "key1", 00:22:22.931 "dhchap_ctrlr_key": "key3", 00:22:22.931 "method": "bdev_nvme_set_keys", 00:22:22.931 "req_id": 1 00:22:22.931 } 00:22:22.931 Got JSON-RPC error response 00:22:22.931 response: 00:22:22.931 { 00:22:22.931 "code": -13, 00:22:22.931 "message": "Permission denied" 00:22:22.931 } 00:22:22.931 11:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:22.931 11:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:22.931 11:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:22.931 11:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:22.931 11:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:22.931 11:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:22.931 11:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:23.189 11:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:22:23.189 11:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:22:24.119 11:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:24.119 11:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:24.119 11:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:24.376 11:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:22:24.376 11:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:24.376 11:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.376 11:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.376 11:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.376 11:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:24.376 11:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:24.376 11:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:26.272 nvme0n1 00:22:26.272 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:26.272 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.272 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.272 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.272 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:26.272 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:26.272 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:26.272 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 
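The key-rotation check whose xtrace continues directly below amounts to: rotate the keys the target will accept for this host, re-key the live host controller to match, then confirm that a mismatched pair is refused. The sketch is built only from commands appearing in this trace, with paths shortened as before:

    # Target: change the keys accepted for this host on the subsystem.
    scripts/rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --dhchap-key key2 --dhchap-ctrlr-key key3

    # Host: re-key the existing controller with the matching pair.
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key key3

    # Negative case (traced below): a pair the target was not told to accept,
    # key2/key0, is rejected with JSON-RPC error -13 "Permission denied".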
00:22:26.272 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:26.272 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:22:26.272 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:26.272 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:26.272 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:26.836 request: 00:22:26.836 { 00:22:26.836 "name": "nvme0", 00:22:26.836 "dhchap_key": "key2", 00:22:26.836 "dhchap_ctrlr_key": "key0", 00:22:26.836 "method": "bdev_nvme_set_keys", 00:22:26.836 "req_id": 1 00:22:26.836 } 00:22:26.836 Got JSON-RPC error response 00:22:26.836 response: 00:22:26.836 { 00:22:26.836 "code": -13, 00:22:26.836 "message": "Permission denied" 00:22:26.836 } 00:22:26.836 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:26.836 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:26.836 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:26.836 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:26.836 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:26.836 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:26.836 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:27.093 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:22:27.093 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:22:28.024 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:28.024 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:28.024 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:28.281 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:22:28.281 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:22:29.651 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:29.651 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:29.651 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:29.651 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:22:29.651 11:34:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:22:29.651 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:22:29.651 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3821575 00:22:29.651 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 3821575 ']' 00:22:29.651 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 3821575 00:22:29.651 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:22:29.651 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:29.651 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3821575 00:22:29.651 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:22:29.651 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:22:29.651 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3821575' 00:22:29.651 killing process with pid 3821575 00:22:29.651 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 3821575 00:22:29.651 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 3821575 00:22:30.216 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:30.216 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:30.216 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:22:30.216 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:30.216 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:22:30.216 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:30.216 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:30.216 rmmod nvme_tcp 00:22:30.216 rmmod nvme_fabrics 00:22:30.216 rmmod nvme_keyring 00:22:30.216 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:30.216 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:22:30.216 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:22:30.216 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 3844755 ']' 00:22:30.216 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 3844755 00:22:30.216 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 3844755 ']' 00:22:30.216 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 3844755 00:22:30.216 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:22:30.216 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:30.216 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3844755 00:22:30.216 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:30.216 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:30.216 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3844755' 00:22:30.216 killing process with pid 3844755 00:22:30.216 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 3844755 00:22:30.216 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 3844755 00:22:30.474 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:30.474 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:30.474 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:30.474 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:22:30.474 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:22:30.474 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:30.474 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:22:30.474 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:30.474 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:30.474 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:30.474 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:30.474 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:32.377 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:32.377 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.c5V /tmp/spdk.key-sha256.p9J /tmp/spdk.key-sha384.raq /tmp/spdk.key-sha512.qqJ /tmp/spdk.key-sha512.7Ts /tmp/spdk.key-sha384.ODc /tmp/spdk.key-sha256.sqy '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:22:32.377 00:22:32.377 real 3m40.543s 00:22:32.377 user 8m35.984s 00:22:32.377 sys 0m27.337s 00:22:32.377 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:32.377 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.377 ************************************ 00:22:32.377 END TEST nvmf_auth_target 00:22:32.377 ************************************ 00:22:32.377 11:34:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:22:32.377 11:34:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:32.377 11:34:32 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:22:32.377 11:34:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:32.377 11:34:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:32.377 ************************************ 00:22:32.377 START TEST nvmf_bdevio_no_huge 00:22:32.377 ************************************ 00:22:32.377 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:32.636 * Looking for test storage... 00:22:32.636 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:32.636 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:32.636 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lcov --version 00:22:32.636 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:32.636 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:32.636 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:32.636 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:32.636 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:32.636 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:22:32.636 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:22:32.636 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:22:32.636 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:22:32.636 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:22:32.636 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:22:32.636 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:22:32.636 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:32.636 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:22:32.636 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:22:32.636 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:32.636 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:32.636 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:22:32.636 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:22:32.636 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:32.636 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:22:32.636 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:22:32.636 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:22:32.637 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:22:32.637 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:32.637 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:22:32.637 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:22:32.637 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:32.637 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:32.637 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:22:32.637 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:32.637 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:32.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:32.637 --rc genhtml_branch_coverage=1 00:22:32.637 --rc genhtml_function_coverage=1 00:22:32.637 --rc genhtml_legend=1 00:22:32.637 --rc geninfo_all_blocks=1 00:22:32.637 --rc geninfo_unexecuted_blocks=1 00:22:32.637 00:22:32.637 ' 00:22:32.637 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:32.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:32.637 --rc genhtml_branch_coverage=1 00:22:32.637 --rc genhtml_function_coverage=1 00:22:32.637 --rc genhtml_legend=1 00:22:32.637 --rc geninfo_all_blocks=1 00:22:32.637 --rc geninfo_unexecuted_blocks=1 00:22:32.637 00:22:32.637 ' 00:22:32.637 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:32.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:32.637 --rc genhtml_branch_coverage=1 00:22:32.637 --rc genhtml_function_coverage=1 00:22:32.637 --rc genhtml_legend=1 00:22:32.637 --rc geninfo_all_blocks=1 00:22:32.637 --rc geninfo_unexecuted_blocks=1 00:22:32.637 00:22:32.637 ' 00:22:32.637 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:32.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:32.637 --rc genhtml_branch_coverage=1 00:22:32.637 --rc genhtml_function_coverage=1 00:22:32.637 --rc genhtml_legend=1 00:22:32.637 --rc geninfo_all_blocks=1 00:22:32.637 --rc geninfo_unexecuted_blocks=1 00:22:32.637 00:22:32.637 ' 00:22:32.637 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:32.637 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:32.637 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:32.637 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:32.637 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:32.637 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:32.637 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:32.637 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:32.637 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:32.637 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:32.637 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:32.637 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:32.637 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:32.637 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:32.637 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:32.637 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:32.637 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:32.637 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:32.637 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:32.637 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:22:32.637 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:32.637 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:32.637 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:32.637 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.637 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.637 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.637 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:32.637 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.637 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:22:32.637 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:32.637 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:32.637 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:32.637 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:32.637 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:32.637 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:22:32.637 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:32.637 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:32.637 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:32.637 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:32.637 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:32.637 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:32.637 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:32.637 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:32.637 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:32.637 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:32.637 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:32.637 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:32.637 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:32.637 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:32.637 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:32.637 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:32.637 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:32.637 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:22:32.637 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:35.166 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:35.166 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:22:35.166 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:35.166 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:35.166 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:35.166 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:35.166 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:35.166 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:22:35.166 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:35.166 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:22:35.166 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:22:35.166 
11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:22:35.166 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:22:35.166 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:22:35.166 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:22:35.166 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:35.166 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:35.166 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:35.166 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:35.166 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:35.166 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:35.166 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:35.166 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:35.166 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:35.166 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:35.166 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:35.166 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:35.166 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:35.166 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:35.166 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:35.166 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:35.166 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:35.166 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:35.166 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:35.166 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:35.166 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:35.166 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:35.166 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:35.166 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:35.166 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:22:35.166 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:35.167 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:35.167 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:35.167 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:35.167 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:35.167 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:35.167 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:35.167 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:35.167 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:35.167 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:35.167 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:35.167 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:35.167 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:35.167 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:35.167 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:35.167 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:35.167 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:35.167 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:35.167 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:35.167 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:35.167 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:35.167 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:35.167 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:35.167 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:35.167 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:35.167 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:35.167 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:35.167 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:35.167 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:35.167 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:35.167 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:35.167 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:35.167 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:35.167 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:22:35.167 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:35.167 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:35.167 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:35.167 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:35.167 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:35.167 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:35.167 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:35.167 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:35.167 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:35.167 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:35.167 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:35.167 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:35.167 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:35.167 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:35.167 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:35.167 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:35.167 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:35.167 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:35.167 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:35.167 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:35.167 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:35.167 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:35.167 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:35.167 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:35.167 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:35.167 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:35.167 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:35.167 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:22:35.167 00:22:35.167 --- 10.0.0.2 ping statistics --- 00:22:35.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:35.167 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:22:35.167 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:35.167 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:35.167 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:22:35.167 00:22:35.167 --- 10.0.0.1 ping statistics --- 00:22:35.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:35.167 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:22:35.167 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:35.167 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:22:35.167 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:35.167 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:35.167 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:35.167 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:35.167 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:35.167 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:35.167 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:35.167 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:22:35.167 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:35.167 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:35.167 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:35.167 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=3850364 00:22:35.167 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:35.167 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 3850364 00:22:35.167 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # '[' -z 3850364 ']' 00:22:35.167 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:35.167 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- common/autotest_common.sh@838 -- # local max_retries=100 00:22:35.167 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:35.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:35.167 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:35.167 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:35.167 [2024-11-02 11:34:35.273802] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:22:35.167 [2024-11-02 11:34:35.273896] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:35.167 [2024-11-02 11:34:35.355803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:35.167 [2024-11-02 11:34:35.408051] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:35.167 [2024-11-02 11:34:35.408121] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:35.167 [2024-11-02 11:34:35.408147] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:35.167 [2024-11-02 11:34:35.408161] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:35.167 [2024-11-02 11:34:35.408172] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:35.167 [2024-11-02 11:34:35.409342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:35.167 [2024-11-02 11:34:35.409397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:22:35.167 [2024-11-02 11:34:35.409450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:22:35.167 [2024-11-02 11:34:35.409454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:35.168 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:35.168 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@866 -- # return 0 00:22:35.168 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:35.168 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:35.168 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:35.168 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:35.168 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:35.168 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.168 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:35.425 [2024-11-02 11:34:35.571570] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:35.425 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.425 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:35.425 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.425 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:35.425 Malloc0 00:22:35.425 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.425 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:35.425 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.425 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:35.425 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.425 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:35.425 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.425 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:35.425 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.425 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:22:35.425 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.425 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:35.425 [2024-11-02 11:34:35.609874] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:35.425 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.425 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:35.425 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:35.425 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:22:35.425 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:22:35.425 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:35.425 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:35.426 { 00:22:35.426 "params": { 00:22:35.426 "name": "Nvme$subsystem", 00:22:35.426 "trtype": "$TEST_TRANSPORT", 00:22:35.426 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.426 "adrfam": "ipv4", 00:22:35.426 "trsvcid": "$NVMF_PORT", 00:22:35.426 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.426 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.426 "hdgst": ${hdgst:-false}, 00:22:35.426 "ddgst": ${ddgst:-false} 00:22:35.426 }, 00:22:35.426 "method": "bdev_nvme_attach_controller" 00:22:35.426 } 00:22:35.426 EOF 00:22:35.426 )") 00:22:35.426 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:22:35.426 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:22:35.426 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:22:35.426 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:35.426 "params": { 00:22:35.426 "name": "Nvme1", 00:22:35.426 "trtype": "tcp", 00:22:35.426 "traddr": "10.0.0.2", 00:22:35.426 "adrfam": "ipv4", 00:22:35.426 "trsvcid": "4420", 00:22:35.426 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:35.426 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:35.426 "hdgst": false, 00:22:35.426 "ddgst": false 00:22:35.426 }, 00:22:35.426 "method": "bdev_nvme_attach_controller" 00:22:35.426 }' 00:22:35.426 [2024-11-02 11:34:35.656910] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 
00:22:35.426 [2024-11-02 11:34:35.657000] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3850394 ] 00:22:35.426 [2024-11-02 11:34:35.727465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:35.426 [2024-11-02 11:34:35.776097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:35.426 [2024-11-02 11:34:35.776147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:35.426 [2024-11-02 11:34:35.776150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:35.683 I/O targets: 00:22:35.683 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:35.683 00:22:35.683 00:22:35.683 CUnit - A unit testing framework for C - Version 2.1-3 00:22:35.683 http://cunit.sourceforge.net/ 00:22:35.683 00:22:35.683 00:22:35.683 Suite: bdevio tests on: Nvme1n1 00:22:35.683 Test: blockdev write read block ...passed 00:22:35.683 Test: blockdev write zeroes read block ...passed 00:22:35.683 Test: blockdev write zeroes read no split ...passed 00:22:35.940 Test: blockdev write zeroes read split ...passed 00:22:35.940 Test: blockdev write zeroes read split partial ...passed 00:22:35.940 Test: blockdev reset ...[2024-11-02 11:34:36.176783] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:35.940 [2024-11-02 11:34:36.176907] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215d4b0 (9): Bad file descriptor 00:22:35.940 [2024-11-02 11:34:36.274220] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:22:35.940 passed 00:22:35.940 Test: blockdev write read 8 blocks ...passed 00:22:35.940 Test: blockdev write read size > 128k ...passed 00:22:35.940 Test: blockdev write read invalid size ...passed 00:22:36.197 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:36.197 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:36.197 Test: blockdev write read max offset ...passed 00:22:36.197 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:36.197 Test: blockdev writev readv 8 blocks ...passed 00:22:36.197 Test: blockdev writev readv 30 x 1block ...passed 00:22:36.197 Test: blockdev writev readv block ...passed 00:22:36.197 Test: blockdev writev readv size > 128k ...passed 00:22:36.197 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:36.197 Test: blockdev comparev and writev ...[2024-11-02 11:34:36.491122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:36.197 [2024-11-02 11:34:36.491158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.198 [2024-11-02 11:34:36.491183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:36.198 [2024-11-02 11:34:36.491201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:36.198 [2024-11-02 11:34:36.491558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:36.198 [2024-11-02 11:34:36.491583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:36.198 [2024-11-02 11:34:36.491605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:36.198 [2024-11-02 11:34:36.491621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:36.198 [2024-11-02 11:34:36.491986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:36.198 [2024-11-02 11:34:36.492016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:36.198 [2024-11-02 11:34:36.492039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:36.198 [2024-11-02 11:34:36.492055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:36.198 [2024-11-02 11:34:36.492425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:36.198 [2024-11-02 11:34:36.492450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:36.198 [2024-11-02 11:34:36.492472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:36.198 [2024-11-02 11:34:36.492488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:36.198 passed 00:22:36.198 Test: blockdev nvme passthru rw ...passed 00:22:36.198 Test: blockdev nvme passthru vendor specific ...[2024-11-02 11:34:36.576581] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:36.198 [2024-11-02 11:34:36.576608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:36.198 [2024-11-02 11:34:36.576780] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:36.198 [2024-11-02 11:34:36.576804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:36.198 [2024-11-02 11:34:36.576975] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:36.198 [2024-11-02 11:34:36.576999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:36.198 [2024-11-02 11:34:36.577165] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:36.198 [2024-11-02 11:34:36.577189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:36.198 passed 00:22:36.198 Test: blockdev nvme admin passthru ...passed 00:22:36.455 Test: blockdev copy ...passed 00:22:36.455 00:22:36.455 Run Summary: Type Total Ran Passed Failed Inactive 00:22:36.455 suites 1 1 n/a 0 0 00:22:36.455 tests 23 23 23 0 0 00:22:36.455 asserts 152 152 152 0 n/a 00:22:36.455 00:22:36.455 Elapsed time = 1.326 seconds 00:22:36.713 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:36.713 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.713 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:36.713 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.713 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:36.713 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:22:36.713 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:36.713 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:22:36.713 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:36.713 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:22:36.713 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:36.713 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:36.713 rmmod nvme_tcp 00:22:36.713 rmmod nvme_fabrics 00:22:36.713 rmmod nvme_keyring 00:22:36.713 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:36.713 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:22:36.713 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:22:36.713 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 3850364 ']' 00:22:36.713 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 3850364 00:22:36.713 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # '[' -z 3850364 ']' 00:22:36.713 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # kill -0 3850364 00:22:36.713 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # uname 00:22:36.714 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:36.714 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3850364 00:22:36.714 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:22:36.714 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:22:36.714 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3850364' 00:22:36.714 killing process with pid 3850364 00:22:36.714 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@971 -- # kill 3850364 00:22:36.714 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@976 -- # wait 3850364 00:22:37.279 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:37.280 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:37.280 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:37.280 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:22:37.280 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:22:37.280 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:37.280 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:22:37.280 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:37.280 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:37.280 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:37.280 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:37.280 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:39.181 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:39.181 00:22:39.181 real 0m6.764s 00:22:39.181 user 0m10.836s 00:22:39.181 sys 0m2.741s 00:22:39.181 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:39.181 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:22:39.181 ************************************ 00:22:39.181 END TEST nvmf_bdevio_no_huge 00:22:39.181 ************************************ 00:22:39.181 11:34:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:39.181 11:34:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:39.181 11:34:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:39.181 11:34:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:39.181 ************************************ 00:22:39.181 START TEST nvmf_tls 00:22:39.181 ************************************ 00:22:39.181 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:39.440 * Looking for test storage... 00:22:39.440 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:39.440 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:39.440 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lcov --version 00:22:39.440 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:39.440 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:39.440 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:39.440 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:39.440 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:39.440 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:22:39.440 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:22:39.440 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:22:39.440 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:22:39.440 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:22:39.440 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:22:39.440 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:22:39.440 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:39.440 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:22:39.440 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:22:39.440 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:39.440 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:39.440 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:22:39.440 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:22:39.440 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:39.440 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:22:39.440 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:22:39.440 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:22:39.440 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:22:39.440 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:39.440 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:22:39.440 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:22:39.440 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:39.440 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:39.440 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:22:39.440 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:39.440 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:39.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:39.440 --rc genhtml_branch_coverage=1 00:22:39.440 --rc genhtml_function_coverage=1 00:22:39.440 --rc genhtml_legend=1 00:22:39.440 --rc geninfo_all_blocks=1 00:22:39.440 --rc geninfo_unexecuted_blocks=1 00:22:39.440 00:22:39.440 ' 00:22:39.440 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:39.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:39.440 --rc genhtml_branch_coverage=1 00:22:39.440 --rc genhtml_function_coverage=1 00:22:39.440 --rc genhtml_legend=1 00:22:39.440 --rc geninfo_all_blocks=1 00:22:39.440 --rc geninfo_unexecuted_blocks=1 00:22:39.440 00:22:39.440 ' 00:22:39.440 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:39.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:39.440 --rc genhtml_branch_coverage=1 00:22:39.440 --rc genhtml_function_coverage=1 00:22:39.440 --rc genhtml_legend=1 00:22:39.440 --rc geninfo_all_blocks=1 00:22:39.440 --rc geninfo_unexecuted_blocks=1 00:22:39.440 00:22:39.441 ' 00:22:39.441 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:39.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:39.441 --rc genhtml_branch_coverage=1 00:22:39.441 --rc genhtml_function_coverage=1 00:22:39.441 --rc genhtml_legend=1 00:22:39.441 --rc geninfo_all_blocks=1 00:22:39.441 --rc geninfo_unexecuted_blocks=1 00:22:39.441 00:22:39.441 ' 00:22:39.441 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:39.441 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:39.441 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
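Before any TLS work starts, the test file runs the stock coverage preamble: scripts/common.sh splits the installed lcov version and the threshold 2 on dots and compares them field by field (the `lt 1.15 2` trace above), then exports matching LCOV_OPTS. A minimal sketch of that element-wise comparison, assuming a plain numeric compare per field is all that matters here (the helper name version_lt is illustrative, not the upstream function):

    # Compare two dotted versions numerically, field by field.
    version_lt() {                      # succeeds when $1 < $2
        local -a a b
        IFS=. read -ra a <<< "$1"
        IFS=. read -ra b <<< "$2"
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                        # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "lcov predates 2.x, use the legacy --rc options"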
00:22:39.441 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:39.441 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:39.441 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:39.441 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:39.441 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:39.441 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:39.441 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:39.441 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:39.441 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:39.441 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:39.441 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:39.441 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:39.441 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:39.441 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:39.441 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:39.441 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:39.441 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:22:39.441 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:39.441 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:39.441 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:39.441 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.441 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.441 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.441 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:39.441 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.441 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:22:39.441 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:39.441 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:39.441 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:39.441 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:39.441 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:39.441 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:39.441 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:39.441 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:39.441 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:39.441 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:39.441 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:39.441 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:22:39.441 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:39.441 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:39.441 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:39.441 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:39.441 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:39.441 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:39.441 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:39.441 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:39.441 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:39.441 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:39.441 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:22:39.441 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:41.342 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:41.342 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:22:41.342 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:41.342 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:41.342 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:41.342 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:41.342 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:41.342 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:22:41.342 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:41.342 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:22:41.342 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:22:41.342 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:22:41.342 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:22:41.342 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:22:41.342 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:22:41.342 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:41.342 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:41.342 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:41.342 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:41.342 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:41.342 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:22:41.342 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:41.342 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:41.342 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:41.342 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:41.342 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:41.342 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:41.342 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:41.342 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:41.342 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:41.342 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:41.342 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:41.342 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:41.342 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:41.342 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:41.342 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:41.342 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:41.342 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:41.342 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:41.342 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:41.342 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:41.342 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:41.342 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:41.342 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:41.342 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:41.342 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:41.342 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:41.342 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:41.342 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:41.342 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:41.342 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:41.342 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:41.342 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:41.342 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:41.342 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:41.342 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:41.342 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:41.342 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:41.342 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:41.342 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:41.342 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:41.342 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:41.342 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:41.342 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:41.342 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:41.342 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:41.342 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:41.342 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:41.342 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:41.342 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:41.342 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:41.342 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:41.342 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:41.342 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:22:41.342 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:41.342 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:41.342 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:41.342 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:41.342 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:41.342 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:41.342 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:41.342 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:41.342 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:41.342 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:41.342 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:41.342 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:22:41.342 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:41.343 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:41.343 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:41.600 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:41.600 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:41.600 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:41.600 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:41.600 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:41.600 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:41.600 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:41.600 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:41.600 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:41.600 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:41.600 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:41.600 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:41.600 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:22:41.600 00:22:41.600 --- 10.0.0.2 ping statistics --- 00:22:41.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:41.600 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:22:41.600 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:41.600 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:41.600 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:22:41.600 00:22:41.600 --- 10.0.0.1 ping statistics --- 00:22:41.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:41.600 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:22:41.600 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:41.600 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:22:41.601 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:41.601 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:41.601 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:41.601 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:41.601 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:41.601 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:41.601 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:41.601 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:41.601 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:41.601 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:41.601 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:41.601 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3852590 00:22:41.601 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:41.601 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3852590 00:22:41.601 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3852590 ']' 00:22:41.601 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:41.601 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:41.601 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:41.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:41.601 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:41.601 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:41.601 [2024-11-02 11:34:41.955727] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 
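Everything from here on runs against the split-NIC topology that nvmftestinit just built: one port of the ice NIC (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and carries the target at 10.0.0.2, the other port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and the pings above confirm both directions before nvmf_tgt is launched inside the namespace. Condensed from the trace, with the interface names and addresses this rig happened to use:

    # Target port goes into its own network namespace; initiator port stays on the host.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Open the NVMe/TCP port toward the initiator interface and sanity-check reachability.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

    # The target application itself runs inside the namespace, paused at --wait-for-rpc.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc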
00:22:41.601 [2024-11-02 11:34:41.955813] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:41.858 [2024-11-02 11:34:42.037699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:41.858 [2024-11-02 11:34:42.084975] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:41.858 [2024-11-02 11:34:42.085037] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:41.858 [2024-11-02 11:34:42.085054] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:41.858 [2024-11-02 11:34:42.085070] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:41.858 [2024-11-02 11:34:42.085098] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:41.858 [2024-11-02 11:34:42.085760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:41.858 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:41.858 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:22:41.858 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:41.858 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:41.858 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:41.858 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:41.858 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:22:41.858 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:42.116 true 00:22:42.116 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:42.116 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:22:42.374 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:22:42.374 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:22:42.374 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:42.632 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:42.632 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:22:42.889 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:22:42.889 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:22:42.889 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:43.457 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:43.457 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:22:43.457 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:22:43.457 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:22:43.458 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:43.458 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:22:43.716 11:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:22:43.716 11:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:22:43.716 11:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:22:44.282 11:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:44.282 11:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:22:44.282 11:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:22:44.282 11:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:22:44.282 11:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:44.848 11:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:44.848 11:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:22:45.106 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:22:45.106 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:22:45.106 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:45.106 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:45.106 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:22:45.106 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:45.106 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:22:45.106 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:22:45.106 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:45.106 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:45.106 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:45.106 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:22:45.106 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:22:45.106 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:45.106 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:22:45.106 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:22:45.106 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:45.106 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:45.106 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:45.106 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.HcdtHDQnal 00:22:45.107 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:22:45.107 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.RAsPyUWqxN 00:22:45.107 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:45.107 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:45.107 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.HcdtHDQnal 00:22:45.107 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.RAsPyUWqxN 00:22:45.107 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:45.365 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:45.623 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.HcdtHDQnal 00:22:45.623 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.HcdtHDQnal 00:22:45.623 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:45.881 [2024-11-02 11:34:46.258990] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:45.881 11:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:46.446 11:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:46.446 [2024-11-02 11:34:46.800507] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:46.446 [2024-11-02 11:34:46.800817] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:46.446 11:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:46.703 malloc0 00:22:46.703 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:46.990 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.HcdtHDQnal 00:22:47.554 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:47.812 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.HcdtHDQnal 00:22:57.773 Initializing NVMe Controllers 00:22:57.773 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:57.773 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:57.773 Initialization complete. Launching workers. 00:22:57.773 ======================================================== 00:22:57.773 Latency(us) 00:22:57.773 Device Information : IOPS MiB/s Average min max 00:22:57.773 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7745.49 30.26 8265.64 1209.61 9214.56 00:22:57.773 ======================================================== 00:22:57.773 Total : 7745.49 30.26 8265.64 1209.61 9214.56 00:22:57.773 00:22:57.773 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.HcdtHDQnal 00:22:57.773 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:57.773 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:57.773 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:57.773 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.HcdtHDQnal 00:22:57.773 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:57.773 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3854488 00:22:57.774 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:57.774 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:57.774 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3854488 /var/tmp/bdevperf.sock 00:22:57.774 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3854488 ']' 00:22:57.774 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:57.774 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:57.774 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:22:57.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:57.774 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:57.774 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:57.774 [2024-11-02 11:34:58.146153] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:22:57.774 [2024-11-02 11:34:58.146241] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3854488 ] 00:22:58.032 [2024-11-02 11:34:58.211994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:58.032 [2024-11-02 11:34:58.257235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:58.032 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:58.032 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:22:58.032 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.HcdtHDQnal 00:22:58.597 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:58.597 [2024-11-02 11:34:58.982709] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:58.854 TLSTESTn1 00:22:58.854 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:58.854 Running I/O for 10 seconds... 
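What got the TLSTESTn1 workload running breaks down into three pieces visible in the trace: two interchange PSKs are formatted and written to temp files, the target is brought up with the ssl sock implementation pinned to TLS 1.3 and a secure listener (-k) on the subsystem, and the matching key is registered both with the target (per host NQN) and with the bdevperf instance that attaches to it; the same key file also drives the earlier spdk_nvme_perf run through -S ssl --psk-path. A condensed sketch of that sequence, reusing the RPCs from the trace (the /tmp file names and NQNs are the ones this run generated; the comment on the key layout is my reading of the NVMe TLS PSK interchange format, base64 of the key bytes plus a CRC-32, not something the log spells out):

    rpc=scripts/rpc.py

    # Interchange PSK: "NVMeTLSkey-1:01:" + base64(key bytes || CRC-32) + ":"
    key='NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'
    echo -n "$key" > /tmp/tmp.HcdtHDQnal
    chmod 0600 /tmp/tmp.HcdtHDQnal

    # Target side: ssl sock impl at TLS 1.3, transport, subsystem, secure listener, namespace.
    $rpc sock_set_default_impl -i ssl
    $rpc sock_impl_set_options -i ssl --tls-version 13
    $rpc framework_start_init
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc keyring_file_add_key key0 /tmp/tmp.HcdtHDQnal
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

    # Initiator side: bdevperf was started with "-z -r /var/tmp/bdevperf.sock", so the same
    # key is registered on its private RPC socket before the TLS-backed attach.
    $rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.HcdtHDQnal
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk key0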
00:23:01.156 3176.00 IOPS, 12.41 MiB/s [2024-11-02T10:35:02.490Z] 3375.50 IOPS, 13.19 MiB/s [2024-11-02T10:35:03.422Z] 3406.67 IOPS, 13.31 MiB/s [2024-11-02T10:35:04.355Z] 3419.50 IOPS, 13.36 MiB/s [2024-11-02T10:35:05.287Z] 3169.00 IOPS, 12.38 MiB/s [2024-11-02T10:35:06.219Z] 3006.50 IOPS, 11.74 MiB/s [2024-11-02T10:35:07.591Z] 2906.14 IOPS, 11.35 MiB/s [2024-11-02T10:35:08.524Z] 2811.62 IOPS, 10.98 MiB/s [2024-11-02T10:35:09.456Z] 2737.44 IOPS, 10.69 MiB/s [2024-11-02T10:35:09.456Z] 2696.60 IOPS, 10.53 MiB/s 00:23:09.054 Latency(us) 00:23:09.054 [2024-11-02T10:35:09.456Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:09.054 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:09.054 Verification LBA range: start 0x0 length 0x2000 00:23:09.054 TLSTESTn1 : 10.04 2699.31 10.54 0.00 0.00 47336.30 5898.24 72235.24 00:23:09.054 [2024-11-02T10:35:09.456Z] =================================================================================================================== 00:23:09.054 [2024-11-02T10:35:09.456Z] Total : 2699.31 10.54 0.00 0.00 47336.30 5898.24 72235.24 00:23:09.054 { 00:23:09.054 "results": [ 00:23:09.054 { 00:23:09.054 "job": "TLSTESTn1", 00:23:09.054 "core_mask": "0x4", 00:23:09.054 "workload": "verify", 00:23:09.054 "status": "finished", 00:23:09.054 "verify_range": { 00:23:09.054 "start": 0, 00:23:09.054 "length": 8192 00:23:09.054 }, 00:23:09.054 "queue_depth": 128, 00:23:09.054 "io_size": 4096, 00:23:09.054 "runtime": 10.037374, 00:23:09.054 "iops": 2699.311592852872, 00:23:09.054 "mibps": 10.54418590958153, 00:23:09.054 "io_failed": 0, 00:23:09.054 "io_timeout": 0, 00:23:09.054 "avg_latency_us": 47336.30140553191, 00:23:09.054 "min_latency_us": 5898.24, 00:23:09.054 "max_latency_us": 72235.23555555556 00:23:09.054 } 00:23:09.054 ], 00:23:09.054 "core_count": 1 00:23:09.054 } 00:23:09.054 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:09.054 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3854488 00:23:09.054 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3854488 ']' 00:23:09.054 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3854488 00:23:09.054 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:09.055 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:09.055 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3854488 00:23:09.055 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:23:09.055 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:23:09.055 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3854488' 00:23:09.055 killing process with pid 3854488 00:23:09.055 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3854488 00:23:09.055 Received shutdown signal, test time was about 10.000000 seconds 00:23:09.055 00:23:09.055 Latency(us) 00:23:09.055 [2024-11-02T10:35:09.457Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:09.055 [2024-11-02T10:35:09.457Z] 
=================================================================================================================== 00:23:09.055 [2024-11-02T10:35:09.457Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:09.055 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3854488 00:23:09.329 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.RAsPyUWqxN 00:23:09.329 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:09.329 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.RAsPyUWqxN 00:23:09.329 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:09.329 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:09.329 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:09.329 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:09.329 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.RAsPyUWqxN 00:23:09.329 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:09.329 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:09.329 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:09.329 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.RAsPyUWqxN 00:23:09.329 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:09.329 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3855808 00:23:09.329 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:09.329 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:09.329 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3855808 /var/tmp/bdevperf.sock 00:23:09.329 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3855808 ']' 00:23:09.329 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:09.329 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:09.329 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:09.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:09.330 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:09.330 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:09.330 [2024-11-02 11:35:09.530078] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:23:09.330 [2024-11-02 11:35:09.530159] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3855808 ] 00:23:09.330 [2024-11-02 11:35:09.596967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:09.330 [2024-11-02 11:35:09.642902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:09.591 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:09.591 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:09.591 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.RAsPyUWqxN 00:23:09.849 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:10.107 [2024-11-02 11:35:10.323790] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:10.107 [2024-11-02 11:35:10.334283] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:10.107 [2024-11-02 11:35:10.335105] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x160f6e0 (107): Transport endpoint is not connected 00:23:10.107 [2024-11-02 11:35:10.336092] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x160f6e0 (9): Bad file descriptor 00:23:10.107 [2024-11-02 11:35:10.337091] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:23:10.107 [2024-11-02 11:35:10.337111] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:10.107 [2024-11-02 11:35:10.337141] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:10.107 [2024-11-02 11:35:10.337179] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:23:10.107 request: 00:23:10.107 { 00:23:10.107 "name": "TLSTEST", 00:23:10.107 "trtype": "tcp", 00:23:10.107 "traddr": "10.0.0.2", 00:23:10.107 "adrfam": "ipv4", 00:23:10.107 "trsvcid": "4420", 00:23:10.107 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:10.107 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:10.107 "prchk_reftag": false, 00:23:10.107 "prchk_guard": false, 00:23:10.107 "hdgst": false, 00:23:10.107 "ddgst": false, 00:23:10.107 "psk": "key0", 00:23:10.107 "allow_unrecognized_csi": false, 00:23:10.107 "method": "bdev_nvme_attach_controller", 00:23:10.107 "req_id": 1 00:23:10.107 } 00:23:10.107 Got JSON-RPC error response 00:23:10.107 response: 00:23:10.107 { 00:23:10.107 "code": -5, 00:23:10.107 "message": "Input/output error" 00:23:10.107 } 00:23:10.107 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3855808 00:23:10.107 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3855808 ']' 00:23:10.107 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3855808 00:23:10.107 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:10.107 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:10.107 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3855808 00:23:10.107 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:23:10.107 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:23:10.107 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3855808' 00:23:10.107 killing process with pid 3855808 00:23:10.107 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3855808 00:23:10.107 Received shutdown signal, test time was about 10.000000 seconds 00:23:10.107 00:23:10.107 Latency(us) 00:23:10.107 [2024-11-02T10:35:10.509Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:10.107 [2024-11-02T10:35:10.509Z] =================================================================================================================== 00:23:10.107 [2024-11-02T10:35:10.509Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:10.107 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3855808 00:23:10.366 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:10.366 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:10.366 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:10.366 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:10.366 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:10.366 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.HcdtHDQnal 00:23:10.366 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:10.366 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.HcdtHDQnal 00:23:10.366 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:10.366 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:10.366 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:10.366 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:10.366 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.HcdtHDQnal 00:23:10.366 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:10.366 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:10.366 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:23:10.366 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.HcdtHDQnal 00:23:10.366 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:10.366 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3855952 00:23:10.366 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:10.366 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:10.366 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3855952 /var/tmp/bdevperf.sock 00:23:10.366 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3855952 ']' 00:23:10.366 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:10.366 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:10.366 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:10.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:10.366 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:10.366 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:10.366 [2024-11-02 11:35:10.632712] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 
00:23:10.366 [2024-11-02 11:35:10.632792] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3855952 ] 00:23:10.366 [2024-11-02 11:35:10.700612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:10.366 [2024-11-02 11:35:10.745211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:10.625 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:10.625 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:10.625 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.HcdtHDQnal 00:23:10.882 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:23:11.140 [2024-11-02 11:35:11.400178] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:11.140 [2024-11-02 11:35:11.406967] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:11.140 [2024-11-02 11:35:11.407013] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:11.140 [2024-11-02 11:35:11.407065] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:11.140 [2024-11-02 11:35:11.407542] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24bd6e0 (107): Transport endpoint is not connected 00:23:11.140 [2024-11-02 11:35:11.408549] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24bd6e0 (9): Bad file descriptor 00:23:11.140 [2024-11-02 11:35:11.409548] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:23:11.140 [2024-11-02 11:35:11.409586] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:11.140 [2024-11-02 11:35:11.409615] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:11.140 [2024-11-02 11:35:11.409636] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
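The "Could not find PSK for identity" errors above show the TLS PSK identity the target listener tried to resolve: a fixed NVMe0R01 tag followed by the host NQN and subsystem NQN of the connection attempt, so connecting as host2 cannot match a key registered for a different host NQN (presumably host1 in this setup). The identity string being searched for can be reproduced trivially:

    hostnqn=nqn.2016-06.io.spdk:host2
    subnqn=nqn.2016-06.io.spdk:cnode1
    # identity string the listener looked up, as printed in the errors above
    printf 'NVMe0R01 %s %s\n' "$hostnqn" "$subnqn"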
00:23:11.140 request: 00:23:11.140 { 00:23:11.140 "name": "TLSTEST", 00:23:11.140 "trtype": "tcp", 00:23:11.140 "traddr": "10.0.0.2", 00:23:11.140 "adrfam": "ipv4", 00:23:11.140 "trsvcid": "4420", 00:23:11.140 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:11.140 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:11.140 "prchk_reftag": false, 00:23:11.140 "prchk_guard": false, 00:23:11.140 "hdgst": false, 00:23:11.140 "ddgst": false, 00:23:11.140 "psk": "key0", 00:23:11.140 "allow_unrecognized_csi": false, 00:23:11.140 "method": "bdev_nvme_attach_controller", 00:23:11.140 "req_id": 1 00:23:11.141 } 00:23:11.141 Got JSON-RPC error response 00:23:11.141 response: 00:23:11.141 { 00:23:11.141 "code": -5, 00:23:11.141 "message": "Input/output error" 00:23:11.141 } 00:23:11.141 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3855952 00:23:11.141 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3855952 ']' 00:23:11.141 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3855952 00:23:11.141 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:11.141 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:11.141 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3855952 00:23:11.141 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:23:11.141 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:23:11.141 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3855952' 00:23:11.141 killing process with pid 3855952 00:23:11.141 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3855952 00:23:11.141 Received shutdown signal, test time was about 10.000000 seconds 00:23:11.141 00:23:11.141 Latency(us) 00:23:11.141 [2024-11-02T10:35:11.543Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:11.141 [2024-11-02T10:35:11.543Z] =================================================================================================================== 00:23:11.141 [2024-11-02T10:35:11.543Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:11.141 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3855952 00:23:11.399 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:11.399 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:11.399 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:11.399 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:11.399 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:11.399 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.HcdtHDQnal 00:23:11.399 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:11.399 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.HcdtHDQnal 00:23:11.399 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:11.399 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:11.399 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:11.399 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:11.399 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.HcdtHDQnal 00:23:11.399 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:11.399 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:23:11.399 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:11.399 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.HcdtHDQnal 00:23:11.399 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:11.399 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3856093 00:23:11.399 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:11.399 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:11.399 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3856093 /var/tmp/bdevperf.sock 00:23:11.399 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3856093 ']' 00:23:11.399 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:11.399 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:11.399 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:11.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:11.399 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:11.399 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:11.399 [2024-11-02 11:35:11.686251] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 
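Each bdevperf instance here is launched with -z (start suspended, wait for RPC) and the script blocks in waitforlisten until the UNIX-domain RPC socket answers. A minimal stand-in for that helper, assuming only that rpc_get_methods is served once the application is up (the real waitforlisten in autotest_common.sh is more elaborate):

    wait_for_rpc_sock() {
        local sock=$1 i
        for ((i = 0; i < 100; i++)); do
            # any cheap call works as a liveness probe once the server answers on the socket
            scripts/rpc.py -s "$sock" rpc_get_methods &> /dev/null && return 0
            sleep 0.1
        done
        return 1
    }
    wait_for_rpc_sock /var/tmp/bdevperf.sock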
00:23:11.399 [2024-11-02 11:35:11.686358] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3856093 ] 00:23:11.399 [2024-11-02 11:35:11.753553] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:11.399 [2024-11-02 11:35:11.798869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:11.657 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:11.657 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:11.657 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.HcdtHDQnal 00:23:11.914 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:12.171 [2024-11-02 11:35:12.405659] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:12.171 [2024-11-02 11:35:12.411054] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:12.171 [2024-11-02 11:35:12.411086] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:12.171 [2024-11-02 11:35:12.411138] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:12.171 [2024-11-02 11:35:12.411698] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe4a6e0 (107): Transport endpoint is not connected 00:23:12.171 [2024-11-02 11:35:12.412687] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe4a6e0 (9): Bad file descriptor 00:23:12.172 [2024-11-02 11:35:12.413686] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:23:12.172 [2024-11-02 11:35:12.413706] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:12.172 [2024-11-02 11:35:12.413734] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:23:12.172 [2024-11-02 11:35:12.413756] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:23:12.172 request: 00:23:12.172 { 00:23:12.172 "name": "TLSTEST", 00:23:12.172 "trtype": "tcp", 00:23:12.172 "traddr": "10.0.0.2", 00:23:12.172 "adrfam": "ipv4", 00:23:12.172 "trsvcid": "4420", 00:23:12.172 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:12.172 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:12.172 "prchk_reftag": false, 00:23:12.172 "prchk_guard": false, 00:23:12.172 "hdgst": false, 00:23:12.172 "ddgst": false, 00:23:12.172 "psk": "key0", 00:23:12.172 "allow_unrecognized_csi": false, 00:23:12.172 "method": "bdev_nvme_attach_controller", 00:23:12.172 "req_id": 1 00:23:12.172 } 00:23:12.172 Got JSON-RPC error response 00:23:12.172 response: 00:23:12.172 { 00:23:12.172 "code": -5, 00:23:12.172 "message": "Input/output error" 00:23:12.172 } 00:23:12.172 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3856093 00:23:12.172 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3856093 ']' 00:23:12.172 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3856093 00:23:12.172 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:12.172 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:12.172 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3856093 00:23:12.172 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:23:12.172 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:23:12.172 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3856093' 00:23:12.172 killing process with pid 3856093 00:23:12.172 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3856093 00:23:12.172 Received shutdown signal, test time was about 10.000000 seconds 00:23:12.172 00:23:12.172 Latency(us) 00:23:12.172 [2024-11-02T10:35:12.574Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:12.172 [2024-11-02T10:35:12.574Z] =================================================================================================================== 00:23:12.172 [2024-11-02T10:35:12.574Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:12.172 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3856093 00:23:12.429 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:12.429 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:12.429 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:12.429 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:12.429 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:12.429 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:12.429 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:12.429 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:12.429 
11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:12.429 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:12.429 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:12.429 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:12.429 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:12.429 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:12.429 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:12.429 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:12.429 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:23:12.429 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:12.429 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3856226 00:23:12.430 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:12.430 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:12.430 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3856226 /var/tmp/bdevperf.sock 00:23:12.430 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3856226 ']' 00:23:12.430 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:12.430 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:12.430 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:12.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:12.430 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:12.430 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:12.430 [2024-11-02 11:35:12.705076] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 
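The next case (target/tls.sh@156) passes an empty string as the key path. keyring_file_add_key accepts absolute paths only, so the registration is rejected up front and the later attach reports "Required key not available" because no key0 was ever added; both errors appear in the trace below. The failing registration, in isolation:

    # rejected with "Non-absolute paths are not allowed" (JSON-RPC code -1)
    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 ''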
00:23:12.430 [2024-11-02 11:35:12.705161] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3856226 ] 00:23:12.430 [2024-11-02 11:35:12.774996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:12.430 [2024-11-02 11:35:12.821783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:12.687 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:12.687 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:12.687 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:23:12.946 [2024-11-02 11:35:13.184420] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:23:12.946 [2024-11-02 11:35:13.184471] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:12.946 request: 00:23:12.946 { 00:23:12.946 "name": "key0", 00:23:12.946 "path": "", 00:23:12.946 "method": "keyring_file_add_key", 00:23:12.946 "req_id": 1 00:23:12.946 } 00:23:12.946 Got JSON-RPC error response 00:23:12.946 response: 00:23:12.946 { 00:23:12.946 "code": -1, 00:23:12.946 "message": "Operation not permitted" 00:23:12.946 } 00:23:12.946 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:13.203 [2024-11-02 11:35:13.449232] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:13.203 [2024-11-02 11:35:13.449312] bdev_nvme.c:6529:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:13.203 request: 00:23:13.203 { 00:23:13.203 "name": "TLSTEST", 00:23:13.203 "trtype": "tcp", 00:23:13.203 "traddr": "10.0.0.2", 00:23:13.203 "adrfam": "ipv4", 00:23:13.203 "trsvcid": "4420", 00:23:13.203 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:13.203 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:13.203 "prchk_reftag": false, 00:23:13.203 "prchk_guard": false, 00:23:13.203 "hdgst": false, 00:23:13.203 "ddgst": false, 00:23:13.203 "psk": "key0", 00:23:13.203 "allow_unrecognized_csi": false, 00:23:13.203 "method": "bdev_nvme_attach_controller", 00:23:13.203 "req_id": 1 00:23:13.203 } 00:23:13.203 Got JSON-RPC error response 00:23:13.203 response: 00:23:13.203 { 00:23:13.203 "code": -126, 00:23:13.203 "message": "Required key not available" 00:23:13.203 } 00:23:13.203 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3856226 00:23:13.203 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3856226 ']' 00:23:13.203 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3856226 00:23:13.203 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:13.203 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:13.203 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 
3856226 00:23:13.203 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:23:13.203 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:23:13.203 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3856226' 00:23:13.203 killing process with pid 3856226 00:23:13.204 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3856226 00:23:13.204 Received shutdown signal, test time was about 10.000000 seconds 00:23:13.204 00:23:13.204 Latency(us) 00:23:13.204 [2024-11-02T10:35:13.606Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:13.204 [2024-11-02T10:35:13.606Z] =================================================================================================================== 00:23:13.204 [2024-11-02T10:35:13.606Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:13.204 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3856226 00:23:13.461 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:13.461 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:13.461 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:13.461 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:13.461 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:13.461 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 3852590 00:23:13.461 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3852590 ']' 00:23:13.461 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3852590 00:23:13.461 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:13.461 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:13.461 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3852590 00:23:13.461 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:13.461 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:13.461 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3852590' 00:23:13.461 killing process with pid 3852590 00:23:13.461 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3852590 00:23:13.461 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3852590 00:23:13.719 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:23:13.719 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:23:13.719 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:23:13.719 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:13.719 11:35:13 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:23:13.719 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:23:13.719 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:23:13.719 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:13.719 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:23:13.719 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.dDr9YiJYYW 00:23:13.719 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:13.719 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.dDr9YiJYYW 00:23:13.719 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:23:13.719 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:13.719 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:13.719 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:13.719 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3856382 00:23:13.719 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:13.719 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3856382 00:23:13.719 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3856382 ']' 00:23:13.719 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:13.719 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:13.719 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:13.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:13.719 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:13.719 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:13.719 [2024-11-02 11:35:14.067391] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:23:13.719 [2024-11-02 11:35:14.067477] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:13.977 [2024-11-02 11:35:14.146479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:13.977 [2024-11-02 11:35:14.192565] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:13.977 [2024-11-02 11:35:14.192634] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
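The key_long value built above is a TLS PSK in interchange form: the NVMeTLSkey-1 prefix, a two-digit hash identifier (the 2 passed to format_interchange_psk), and a base64 blob, which is then written to a mktemp file and chmod'd to 0600. A sketch that reproduces a string of this shape, assuming the base64 payload is the configured key bytes with a little-endian CRC-32 appended (the authoritative version is format_key in the traced nvmf/common.sh):

    key=00112233445566778899aabbccddeeff0011223344556677
    # assumed layout: key bytes followed by a 4-byte little-endian CRC-32, then base64-encoded
    python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); crc=zlib.crc32(k).to_bytes(4,"little"); print("NVMeTLSkey-1:02:%s:" % base64.b64encode(k+crc).decode())' "$key"
    # the trace then writes the result to a mktemp file with echo -n and restricts it to mode 0600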
00:23:13.977 [2024-11-02 11:35:14.192650] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:13.977 [2024-11-02 11:35:14.192663] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:13.977 [2024-11-02 11:35:14.192675] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:13.977 [2024-11-02 11:35:14.193324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:13.977 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:13.977 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:13.977 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:13.977 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:13.977 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:13.977 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:13.977 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.dDr9YiJYYW 00:23:13.977 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.dDr9YiJYYW 00:23:13.977 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:14.235 [2024-11-02 11:35:14.590530] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:14.235 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:14.493 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:14.750 [2024-11-02 11:35:15.144035] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:14.750 [2024-11-02 11:35:15.144319] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:15.008 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:15.265 malloc0 00:23:15.265 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:15.523 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.dDr9YiJYYW 00:23:15.782 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:16.040 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.dDr9YiJYYW 00:23:16.040 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:23:16.040 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:16.040 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:16.040 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.dDr9YiJYYW 00:23:16.040 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:16.040 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3856671 00:23:16.040 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:16.040 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:16.040 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3856671 /var/tmp/bdevperf.sock 00:23:16.040 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3856671 ']' 00:23:16.040 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:16.040 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:16.040 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:16.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:16.040 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:16.040 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:16.040 [2024-11-02 11:35:16.284155] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 
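For the successful run that follows, the target side was prepared by setup_nvmf_tgt (target/tls.sh@50 through @59, traced above). Collapsed into plain RPC calls against nvmf_tgt (which answers on the default /var/tmp/spdk.sock RPC socket), the setup is:

    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    # -k enables TLS on the listener
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.dDr9YiJYYW
    # only host1, and only with this PSK, may connect to cnode1
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0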
00:23:16.040 [2024-11-02 11:35:16.284231] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3856671 ] 00:23:16.040 [2024-11-02 11:35:16.350698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:16.040 [2024-11-02 11:35:16.397899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:16.298 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:16.298 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:16.298 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.dDr9YiJYYW 00:23:16.556 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:16.814 [2024-11-02 11:35:17.042228] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:16.814 TLSTESTn1 00:23:16.814 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:17.073 Running I/O for 10 seconds... 00:23:18.939 1339.00 IOPS, 5.23 MiB/s [2024-11-02T10:35:20.331Z] 1398.00 IOPS, 5.46 MiB/s [2024-11-02T10:35:21.264Z] 1417.67 IOPS, 5.54 MiB/s [2024-11-02T10:35:22.635Z] 1425.00 IOPS, 5.57 MiB/s [2024-11-02T10:35:23.567Z] 1431.20 IOPS, 5.59 MiB/s [2024-11-02T10:35:24.499Z] 1435.50 IOPS, 5.61 MiB/s [2024-11-02T10:35:25.432Z] 1429.86 IOPS, 5.59 MiB/s [2024-11-02T10:35:26.364Z] 1432.00 IOPS, 5.59 MiB/s [2024-11-02T10:35:27.298Z] 1429.67 IOPS, 5.58 MiB/s [2024-11-02T10:35:27.298Z] 1431.90 IOPS, 5.59 MiB/s 00:23:26.896 Latency(us) 00:23:26.896 [2024-11-02T10:35:27.298Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:26.896 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:26.896 Verification LBA range: start 0x0 length 0x2000 00:23:26.896 TLSTESTn1 : 10.02 1441.90 5.63 0.00 0.00 88614.27 5024.43 81944.27 00:23:26.896 [2024-11-02T10:35:27.298Z] =================================================================================================================== 00:23:26.896 [2024-11-02T10:35:27.298Z] Total : 1441.90 5.63 0.00 0.00 88614.27 5024.43 81944.27 00:23:26.896 { 00:23:26.896 "results": [ 00:23:26.896 { 00:23:26.896 "job": "TLSTESTn1", 00:23:26.896 "core_mask": "0x4", 00:23:26.896 "workload": "verify", 00:23:26.896 "status": "finished", 00:23:26.896 "verify_range": { 00:23:26.896 "start": 0, 00:23:26.896 "length": 8192 00:23:26.896 }, 00:23:26.896 "queue_depth": 128, 00:23:26.896 "io_size": 4096, 00:23:26.896 "runtime": 10.019395, 00:23:26.896 "iops": 1441.9034283008107, 00:23:26.896 "mibps": 5.632435266800042, 00:23:26.896 "io_failed": 0, 00:23:26.896 "io_timeout": 0, 00:23:26.896 "avg_latency_us": 88614.27082162387, 00:23:26.896 "min_latency_us": 5024.426666666666, 00:23:26.896 "max_latency_us": 81944.27259259259 00:23:26.896 } 00:23:26.896 ], 00:23:26.896 "core_count": 1 
00:23:26.896 } 00:23:26.896 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:26.896 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3856671 00:23:26.896 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3856671 ']' 00:23:26.896 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3856671 00:23:26.896 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:27.154 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:27.154 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3856671 00:23:27.154 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:23:27.154 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:23:27.154 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3856671' 00:23:27.154 killing process with pid 3856671 00:23:27.154 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3856671 00:23:27.154 Received shutdown signal, test time was about 10.000000 seconds 00:23:27.154 00:23:27.154 Latency(us) 00:23:27.154 [2024-11-02T10:35:27.556Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:27.154 [2024-11-02T10:35:27.556Z] =================================================================================================================== 00:23:27.154 [2024-11-02T10:35:27.556Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:27.154 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3856671 00:23:27.154 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.dDr9YiJYYW 00:23:27.154 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.dDr9YiJYYW 00:23:27.154 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:27.154 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.dDr9YiJYYW 00:23:27.154 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:27.154 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:27.154 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:27.154 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:27.154 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.dDr9YiJYYW 00:23:27.154 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:27.154 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:27.154 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:27.154 11:35:27 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.dDr9YiJYYW 00:23:27.154 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:27.154 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3858001 00:23:27.154 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:27.154 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:27.154 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3858001 /var/tmp/bdevperf.sock 00:23:27.154 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3858001 ']' 00:23:27.154 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:27.154 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:27.154 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:27.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:27.154 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:27.154 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:27.412 [2024-11-02 11:35:27.578806] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 
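After the successful TLSTESTn1 run, target/tls.sh@171 deliberately loosens the key file to mode 0666. keyring_file_add_key refuses group- or world-accessible key files, so the registration traced below fails with "Invalid permissions for key file" and the attach then reports "Required key not available"; mode 0600 is restored at @182. The permission round trip, in isolation:

    chmod 0666 /tmp/tmp.dDr9YiJYYW
    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.dDr9YiJYYW   # rejected: 0100666
    chmod 0600 /tmp/tmp.dDr9YiJYYW
    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.dDr9YiJYYW   # accepted again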
00:23:27.413 [2024-11-02 11:35:27.578899] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3858001 ] 00:23:27.413 [2024-11-02 11:35:27.647700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:27.413 [2024-11-02 11:35:27.693695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:27.413 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:27.413 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:27.413 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.dDr9YiJYYW 00:23:27.670 [2024-11-02 11:35:28.058421] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.dDr9YiJYYW': 0100666 00:23:27.670 [2024-11-02 11:35:28.058472] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:27.670 request: 00:23:27.670 { 00:23:27.670 "name": "key0", 00:23:27.670 "path": "/tmp/tmp.dDr9YiJYYW", 00:23:27.670 "method": "keyring_file_add_key", 00:23:27.670 "req_id": 1 00:23:27.670 } 00:23:27.670 Got JSON-RPC error response 00:23:27.670 response: 00:23:27.670 { 00:23:27.670 "code": -1, 00:23:27.670 "message": "Operation not permitted" 00:23:27.670 } 00:23:27.928 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:27.928 [2024-11-02 11:35:28.323213] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:27.928 [2024-11-02 11:35:28.323285] bdev_nvme.c:6529:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:27.928 request: 00:23:27.928 { 00:23:27.928 "name": "TLSTEST", 00:23:27.928 "trtype": "tcp", 00:23:27.928 "traddr": "10.0.0.2", 00:23:27.928 "adrfam": "ipv4", 00:23:27.928 "trsvcid": "4420", 00:23:27.928 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:27.928 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:27.928 "prchk_reftag": false, 00:23:27.928 "prchk_guard": false, 00:23:27.928 "hdgst": false, 00:23:27.928 "ddgst": false, 00:23:27.928 "psk": "key0", 00:23:27.928 "allow_unrecognized_csi": false, 00:23:27.928 "method": "bdev_nvme_attach_controller", 00:23:27.928 "req_id": 1 00:23:27.928 } 00:23:27.928 Got JSON-RPC error response 00:23:27.928 response: 00:23:27.928 { 00:23:27.928 "code": -126, 00:23:27.928 "message": "Required key not available" 00:23:27.928 } 00:23:28.187 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3858001 00:23:28.187 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3858001 ']' 00:23:28.187 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3858001 00:23:28.187 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:28.187 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:28.187 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3858001 00:23:28.187 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:23:28.187 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:23:28.187 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3858001' 00:23:28.187 killing process with pid 3858001 00:23:28.187 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3858001 00:23:28.187 Received shutdown signal, test time was about 10.000000 seconds 00:23:28.187 00:23:28.187 Latency(us) 00:23:28.187 [2024-11-02T10:35:28.589Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:28.187 [2024-11-02T10:35:28.589Z] =================================================================================================================== 00:23:28.187 [2024-11-02T10:35:28.589Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:28.187 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3858001 00:23:28.187 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:28.187 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:28.187 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:28.187 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:28.187 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:28.187 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 3856382 00:23:28.187 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3856382 ']' 00:23:28.187 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3856382 00:23:28.187 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:28.187 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:28.187 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3856382 00:23:28.445 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:28.445 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:28.445 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3856382' 00:23:28.445 killing process with pid 3856382 00:23:28.445 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3856382 00:23:28.445 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3856382 00:23:28.445 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:23:28.445 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:28.445 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:28.445 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:28.445 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # 
nvmfpid=3858153 00:23:28.445 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:28.445 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3858153 00:23:28.445 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3858153 ']' 00:23:28.445 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:28.445 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:28.445 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:28.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:28.445 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:28.445 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:28.704 [2024-11-02 11:35:28.876686] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:23:28.704 [2024-11-02 11:35:28.876777] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:28.704 [2024-11-02 11:35:28.955177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.704 [2024-11-02 11:35:29.008226] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:28.704 [2024-11-02 11:35:29.008297] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:28.704 [2024-11-02 11:35:29.008324] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:28.704 [2024-11-02 11:35:29.008345] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:28.704 [2024-11-02 11:35:29.008357] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
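target/tls.sh@178 then repeats the target-side setup against this fresh nvmf_tgt while the key file is still mode 0666. The keyring registration fails for the same permission reason, so nvmf_subsystem_add_host cannot resolve key0 and returns -32603 (Key 'key0' does not exist, reported as Internal error), as traced below. The two calls that matter, against the default /var/tmp/spdk.sock socket:

    scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.dDr9YiJYYW          # fails while the file is mode 0666
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk key0                              # fails: Key 'key0' does not exist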
00:23:28.704 [2024-11-02 11:35:29.009021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:28.962 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:28.962 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:28.962 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:28.962 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:28.962 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:28.962 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:28.962 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.dDr9YiJYYW 00:23:28.962 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:28.962 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.dDr9YiJYYW 00:23:28.962 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:23:28.962 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:28.962 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:23:28.962 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:28.962 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.dDr9YiJYYW 00:23:28.962 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.dDr9YiJYYW 00:23:28.962 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:29.219 [2024-11-02 11:35:29.472011] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:29.219 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:29.478 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:29.736 [2024-11-02 11:35:30.081672] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:29.736 [2024-11-02 11:35:30.081969] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:29.736 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:29.993 malloc0 00:23:29.993 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:30.558 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.dDr9YiJYYW 00:23:30.816 [2024-11-02 
11:35:30.984243] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.dDr9YiJYYW': 0100666 00:23:30.816 [2024-11-02 11:35:30.984306] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:30.816 request: 00:23:30.816 { 00:23:30.816 "name": "key0", 00:23:30.816 "path": "/tmp/tmp.dDr9YiJYYW", 00:23:30.816 "method": "keyring_file_add_key", 00:23:30.816 "req_id": 1 00:23:30.816 } 00:23:30.816 Got JSON-RPC error response 00:23:30.816 response: 00:23:30.816 { 00:23:30.816 "code": -1, 00:23:30.816 "message": "Operation not permitted" 00:23:30.816 } 00:23:30.816 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:31.074 [2024-11-02 11:35:31.265034] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:23:31.074 [2024-11-02 11:35:31.265108] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:31.074 request: 00:23:31.074 { 00:23:31.074 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:31.074 "host": "nqn.2016-06.io.spdk:host1", 00:23:31.074 "psk": "key0", 00:23:31.074 "method": "nvmf_subsystem_add_host", 00:23:31.074 "req_id": 1 00:23:31.074 } 00:23:31.074 Got JSON-RPC error response 00:23:31.074 response: 00:23:31.074 { 00:23:31.074 "code": -32603, 00:23:31.074 "message": "Internal error" 00:23:31.074 } 00:23:31.074 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:31.074 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:31.074 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:31.074 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:31.074 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 3858153 00:23:31.074 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3858153 ']' 00:23:31.074 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3858153 00:23:31.074 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:31.074 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:31.074 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3858153 00:23:31.074 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:31.074 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:31.074 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3858153' 00:23:31.074 killing process with pid 3858153 00:23:31.074 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3858153 00:23:31.074 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3858153 00:23:31.332 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.dDr9YiJYYW 00:23:31.332 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:23:31.332 11:35:31 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:31.332 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:31.332 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:31.332 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3858452 00:23:31.332 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:31.332 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3858452 00:23:31.332 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3858452 ']' 00:23:31.332 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:31.332 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:31.332 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:31.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:31.332 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:31.332 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:31.332 [2024-11-02 11:35:31.599538] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:23:31.332 [2024-11-02 11:35:31.599652] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:31.332 [2024-11-02 11:35:31.679857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:31.332 [2024-11-02 11:35:31.725938] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:31.332 [2024-11-02 11:35:31.726010] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:31.332 [2024-11-02 11:35:31.726036] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:31.332 [2024-11-02 11:35:31.726049] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:31.332 [2024-11-02 11:35:31.726061] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
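The keyring error traced above is the negative-path check in target/tls.sh: SPDK's keyring_file_add_key refuses a TLS PSK interchange file that is group/world readable (mode 0100666 in the trace), so key0 is never registered and the following nvmf_subsystem_add_host call fails with "Internal error" because the key does not exist. The script then tightens the file mode to 0600 and repeats the whole setup. A minimal sketch of that recovery, assuming an nvmf_tgt already serving the default /var/tmp/spdk.sock, with rpc.py paths shortened and a placeholder key path instead of the temp file from this run:

  # keyring_file_add_key rejects PSK files readable by group/other ("Operation not permitted")
  chmod 0600 /tmp/psk.txt    # placeholder path for the PSK interchange file
  scripts/rpc.py keyring_file_add_key key0 /tmp/psk.txt
  # with key0 registered, binding the host to the subsystem with that PSK succeeds
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0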
00:23:31.332 [2024-11-02 11:35:31.726736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:31.590 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:31.590 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:31.590 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:31.590 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:31.590 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:31.590 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:31.590 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.dDr9YiJYYW 00:23:31.590 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.dDr9YiJYYW 00:23:31.590 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:31.847 [2024-11-02 11:35:32.125533] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:31.847 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:32.105 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:32.363 [2024-11-02 11:35:32.671001] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:32.363 [2024-11-02 11:35:32.671287] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:32.363 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:32.621 malloc0 00:23:32.621 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:32.878 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.dDr9YiJYYW 00:23:33.136 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:33.394 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=3858739 00:23:33.394 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:33.394 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:33.394 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 3858739 /var/tmp/bdevperf.sock 00:23:33.394 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@833 -- # '[' -z 3858739 ']' 00:23:33.394 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:33.394 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:33.394 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:33.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:33.394 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:33.394 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:33.653 [2024-11-02 11:35:33.822802] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:23:33.653 [2024-11-02 11:35:33.822893] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3858739 ] 00:23:33.653 [2024-11-02 11:35:33.891053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.653 [2024-11-02 11:35:33.935542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:33.910 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:33.910 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:33.910 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.dDr9YiJYYW 00:23:34.168 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:34.425 [2024-11-02 11:35:34.594044] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:34.425 TLSTESTn1 00:23:34.425 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:23:34.683 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:23:34.683 "subsystems": [ 00:23:34.683 { 00:23:34.683 "subsystem": "keyring", 00:23:34.683 "config": [ 00:23:34.683 { 00:23:34.683 "method": "keyring_file_add_key", 00:23:34.683 "params": { 00:23:34.683 "name": "key0", 00:23:34.683 "path": "/tmp/tmp.dDr9YiJYYW" 00:23:34.683 } 00:23:34.683 } 00:23:34.683 ] 00:23:34.683 }, 00:23:34.683 { 00:23:34.683 "subsystem": "iobuf", 00:23:34.683 "config": [ 00:23:34.683 { 00:23:34.683 "method": "iobuf_set_options", 00:23:34.683 "params": { 00:23:34.683 "small_pool_count": 8192, 00:23:34.683 "large_pool_count": 1024, 00:23:34.683 "small_bufsize": 8192, 00:23:34.683 "large_bufsize": 135168, 00:23:34.683 "enable_numa": false 00:23:34.683 } 00:23:34.683 } 00:23:34.683 ] 00:23:34.683 }, 00:23:34.683 { 00:23:34.683 "subsystem": "sock", 00:23:34.683 "config": [ 00:23:34.683 { 00:23:34.683 "method": "sock_set_default_impl", 00:23:34.683 "params": { 00:23:34.683 "impl_name": "posix" 
00:23:34.683 } 00:23:34.683 }, 00:23:34.683 { 00:23:34.683 "method": "sock_impl_set_options", 00:23:34.683 "params": { 00:23:34.683 "impl_name": "ssl", 00:23:34.683 "recv_buf_size": 4096, 00:23:34.683 "send_buf_size": 4096, 00:23:34.683 "enable_recv_pipe": true, 00:23:34.683 "enable_quickack": false, 00:23:34.683 "enable_placement_id": 0, 00:23:34.683 "enable_zerocopy_send_server": true, 00:23:34.683 "enable_zerocopy_send_client": false, 00:23:34.683 "zerocopy_threshold": 0, 00:23:34.683 "tls_version": 0, 00:23:34.683 "enable_ktls": false 00:23:34.683 } 00:23:34.683 }, 00:23:34.683 { 00:23:34.683 "method": "sock_impl_set_options", 00:23:34.683 "params": { 00:23:34.683 "impl_name": "posix", 00:23:34.683 "recv_buf_size": 2097152, 00:23:34.683 "send_buf_size": 2097152, 00:23:34.683 "enable_recv_pipe": true, 00:23:34.683 "enable_quickack": false, 00:23:34.683 "enable_placement_id": 0, 00:23:34.683 "enable_zerocopy_send_server": true, 00:23:34.683 "enable_zerocopy_send_client": false, 00:23:34.683 "zerocopy_threshold": 0, 00:23:34.683 "tls_version": 0, 00:23:34.683 "enable_ktls": false 00:23:34.683 } 00:23:34.683 } 00:23:34.683 ] 00:23:34.683 }, 00:23:34.683 { 00:23:34.683 "subsystem": "vmd", 00:23:34.683 "config": [] 00:23:34.683 }, 00:23:34.683 { 00:23:34.683 "subsystem": "accel", 00:23:34.683 "config": [ 00:23:34.683 { 00:23:34.683 "method": "accel_set_options", 00:23:34.683 "params": { 00:23:34.683 "small_cache_size": 128, 00:23:34.683 "large_cache_size": 16, 00:23:34.683 "task_count": 2048, 00:23:34.683 "sequence_count": 2048, 00:23:34.683 "buf_count": 2048 00:23:34.683 } 00:23:34.683 } 00:23:34.683 ] 00:23:34.683 }, 00:23:34.683 { 00:23:34.683 "subsystem": "bdev", 00:23:34.683 "config": [ 00:23:34.683 { 00:23:34.683 "method": "bdev_set_options", 00:23:34.683 "params": { 00:23:34.683 "bdev_io_pool_size": 65535, 00:23:34.683 "bdev_io_cache_size": 256, 00:23:34.683 "bdev_auto_examine": true, 00:23:34.683 "iobuf_small_cache_size": 128, 00:23:34.683 "iobuf_large_cache_size": 16 00:23:34.683 } 00:23:34.683 }, 00:23:34.683 { 00:23:34.683 "method": "bdev_raid_set_options", 00:23:34.683 "params": { 00:23:34.683 "process_window_size_kb": 1024, 00:23:34.683 "process_max_bandwidth_mb_sec": 0 00:23:34.683 } 00:23:34.683 }, 00:23:34.683 { 00:23:34.683 "method": "bdev_iscsi_set_options", 00:23:34.683 "params": { 00:23:34.683 "timeout_sec": 30 00:23:34.684 } 00:23:34.684 }, 00:23:34.684 { 00:23:34.684 "method": "bdev_nvme_set_options", 00:23:34.684 "params": { 00:23:34.684 "action_on_timeout": "none", 00:23:34.684 "timeout_us": 0, 00:23:34.684 "timeout_admin_us": 0, 00:23:34.684 "keep_alive_timeout_ms": 10000, 00:23:34.684 "arbitration_burst": 0, 00:23:34.684 "low_priority_weight": 0, 00:23:34.684 "medium_priority_weight": 0, 00:23:34.684 "high_priority_weight": 0, 00:23:34.684 "nvme_adminq_poll_period_us": 10000, 00:23:34.684 "nvme_ioq_poll_period_us": 0, 00:23:34.684 "io_queue_requests": 0, 00:23:34.684 "delay_cmd_submit": true, 00:23:34.684 "transport_retry_count": 4, 00:23:34.684 "bdev_retry_count": 3, 00:23:34.684 "transport_ack_timeout": 0, 00:23:34.684 "ctrlr_loss_timeout_sec": 0, 00:23:34.684 "reconnect_delay_sec": 0, 00:23:34.684 "fast_io_fail_timeout_sec": 0, 00:23:34.684 "disable_auto_failback": false, 00:23:34.684 "generate_uuids": false, 00:23:34.684 "transport_tos": 0, 00:23:34.684 "nvme_error_stat": false, 00:23:34.684 "rdma_srq_size": 0, 00:23:34.684 "io_path_stat": false, 00:23:34.684 "allow_accel_sequence": false, 00:23:34.684 "rdma_max_cq_size": 0, 00:23:34.684 
"rdma_cm_event_timeout_ms": 0, 00:23:34.684 "dhchap_digests": [ 00:23:34.684 "sha256", 00:23:34.684 "sha384", 00:23:34.684 "sha512" 00:23:34.684 ], 00:23:34.684 "dhchap_dhgroups": [ 00:23:34.684 "null", 00:23:34.684 "ffdhe2048", 00:23:34.684 "ffdhe3072", 00:23:34.684 "ffdhe4096", 00:23:34.684 "ffdhe6144", 00:23:34.684 "ffdhe8192" 00:23:34.684 ] 00:23:34.684 } 00:23:34.684 }, 00:23:34.684 { 00:23:34.684 "method": "bdev_nvme_set_hotplug", 00:23:34.684 "params": { 00:23:34.684 "period_us": 100000, 00:23:34.684 "enable": false 00:23:34.684 } 00:23:34.684 }, 00:23:34.684 { 00:23:34.684 "method": "bdev_malloc_create", 00:23:34.684 "params": { 00:23:34.684 "name": "malloc0", 00:23:34.684 "num_blocks": 8192, 00:23:34.684 "block_size": 4096, 00:23:34.684 "physical_block_size": 4096, 00:23:34.684 "uuid": "809bb103-1b6a-4a06-a4b6-4b2640226602", 00:23:34.684 "optimal_io_boundary": 0, 00:23:34.684 "md_size": 0, 00:23:34.684 "dif_type": 0, 00:23:34.684 "dif_is_head_of_md": false, 00:23:34.684 "dif_pi_format": 0 00:23:34.684 } 00:23:34.684 }, 00:23:34.684 { 00:23:34.684 "method": "bdev_wait_for_examine" 00:23:34.684 } 00:23:34.684 ] 00:23:34.684 }, 00:23:34.684 { 00:23:34.684 "subsystem": "nbd", 00:23:34.684 "config": [] 00:23:34.684 }, 00:23:34.684 { 00:23:34.684 "subsystem": "scheduler", 00:23:34.684 "config": [ 00:23:34.684 { 00:23:34.684 "method": "framework_set_scheduler", 00:23:34.684 "params": { 00:23:34.684 "name": "static" 00:23:34.684 } 00:23:34.684 } 00:23:34.684 ] 00:23:34.684 }, 00:23:34.684 { 00:23:34.684 "subsystem": "nvmf", 00:23:34.684 "config": [ 00:23:34.684 { 00:23:34.684 "method": "nvmf_set_config", 00:23:34.684 "params": { 00:23:34.684 "discovery_filter": "match_any", 00:23:34.684 "admin_cmd_passthru": { 00:23:34.684 "identify_ctrlr": false 00:23:34.684 }, 00:23:34.684 "dhchap_digests": [ 00:23:34.684 "sha256", 00:23:34.684 "sha384", 00:23:34.684 "sha512" 00:23:34.684 ], 00:23:34.684 "dhchap_dhgroups": [ 00:23:34.684 "null", 00:23:34.684 "ffdhe2048", 00:23:34.684 "ffdhe3072", 00:23:34.684 "ffdhe4096", 00:23:34.684 "ffdhe6144", 00:23:34.684 "ffdhe8192" 00:23:34.684 ] 00:23:34.684 } 00:23:34.684 }, 00:23:34.684 { 00:23:34.684 "method": "nvmf_set_max_subsystems", 00:23:34.684 "params": { 00:23:34.684 "max_subsystems": 1024 00:23:34.684 } 00:23:34.684 }, 00:23:34.684 { 00:23:34.684 "method": "nvmf_set_crdt", 00:23:34.684 "params": { 00:23:34.684 "crdt1": 0, 00:23:34.684 "crdt2": 0, 00:23:34.684 "crdt3": 0 00:23:34.684 } 00:23:34.684 }, 00:23:34.684 { 00:23:34.684 "method": "nvmf_create_transport", 00:23:34.684 "params": { 00:23:34.684 "trtype": "TCP", 00:23:34.684 "max_queue_depth": 128, 00:23:34.684 "max_io_qpairs_per_ctrlr": 127, 00:23:34.684 "in_capsule_data_size": 4096, 00:23:34.684 "max_io_size": 131072, 00:23:34.684 "io_unit_size": 131072, 00:23:34.684 "max_aq_depth": 128, 00:23:34.684 "num_shared_buffers": 511, 00:23:34.684 "buf_cache_size": 4294967295, 00:23:34.684 "dif_insert_or_strip": false, 00:23:34.684 "zcopy": false, 00:23:34.684 "c2h_success": false, 00:23:34.684 "sock_priority": 0, 00:23:34.684 "abort_timeout_sec": 1, 00:23:34.684 "ack_timeout": 0, 00:23:34.684 "data_wr_pool_size": 0 00:23:34.684 } 00:23:34.684 }, 00:23:34.684 { 00:23:34.684 "method": "nvmf_create_subsystem", 00:23:34.684 "params": { 00:23:34.684 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:34.684 "allow_any_host": false, 00:23:34.684 "serial_number": "SPDK00000000000001", 00:23:34.684 "model_number": "SPDK bdev Controller", 00:23:34.684 "max_namespaces": 10, 00:23:34.684 "min_cntlid": 1, 00:23:34.684 
"max_cntlid": 65519, 00:23:34.684 "ana_reporting": false 00:23:34.684 } 00:23:34.684 }, 00:23:34.684 { 00:23:34.684 "method": "nvmf_subsystem_add_host", 00:23:34.684 "params": { 00:23:34.684 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:34.684 "host": "nqn.2016-06.io.spdk:host1", 00:23:34.684 "psk": "key0" 00:23:34.684 } 00:23:34.684 }, 00:23:34.684 { 00:23:34.684 "method": "nvmf_subsystem_add_ns", 00:23:34.684 "params": { 00:23:34.684 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:34.684 "namespace": { 00:23:34.684 "nsid": 1, 00:23:34.684 "bdev_name": "malloc0", 00:23:34.684 "nguid": "809BB1031B6A4A06A4B64B2640226602", 00:23:34.684 "uuid": "809bb103-1b6a-4a06-a4b6-4b2640226602", 00:23:34.684 "no_auto_visible": false 00:23:34.684 } 00:23:34.684 } 00:23:34.684 }, 00:23:34.684 { 00:23:34.684 "method": "nvmf_subsystem_add_listener", 00:23:34.684 "params": { 00:23:34.684 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:34.684 "listen_address": { 00:23:34.684 "trtype": "TCP", 00:23:34.684 "adrfam": "IPv4", 00:23:34.684 "traddr": "10.0.0.2", 00:23:34.684 "trsvcid": "4420" 00:23:34.684 }, 00:23:34.684 "secure_channel": true 00:23:34.684 } 00:23:34.684 } 00:23:34.684 ] 00:23:34.684 } 00:23:34.684 ] 00:23:34.684 }' 00:23:34.684 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:35.251 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:23:35.251 "subsystems": [ 00:23:35.251 { 00:23:35.251 "subsystem": "keyring", 00:23:35.251 "config": [ 00:23:35.251 { 00:23:35.251 "method": "keyring_file_add_key", 00:23:35.251 "params": { 00:23:35.251 "name": "key0", 00:23:35.251 "path": "/tmp/tmp.dDr9YiJYYW" 00:23:35.251 } 00:23:35.251 } 00:23:35.251 ] 00:23:35.251 }, 00:23:35.251 { 00:23:35.251 "subsystem": "iobuf", 00:23:35.251 "config": [ 00:23:35.251 { 00:23:35.251 "method": "iobuf_set_options", 00:23:35.251 "params": { 00:23:35.251 "small_pool_count": 8192, 00:23:35.251 "large_pool_count": 1024, 00:23:35.251 "small_bufsize": 8192, 00:23:35.251 "large_bufsize": 135168, 00:23:35.251 "enable_numa": false 00:23:35.251 } 00:23:35.251 } 00:23:35.251 ] 00:23:35.251 }, 00:23:35.251 { 00:23:35.251 "subsystem": "sock", 00:23:35.251 "config": [ 00:23:35.251 { 00:23:35.251 "method": "sock_set_default_impl", 00:23:35.251 "params": { 00:23:35.251 "impl_name": "posix" 00:23:35.251 } 00:23:35.251 }, 00:23:35.251 { 00:23:35.251 "method": "sock_impl_set_options", 00:23:35.251 "params": { 00:23:35.251 "impl_name": "ssl", 00:23:35.251 "recv_buf_size": 4096, 00:23:35.251 "send_buf_size": 4096, 00:23:35.251 "enable_recv_pipe": true, 00:23:35.251 "enable_quickack": false, 00:23:35.251 "enable_placement_id": 0, 00:23:35.251 "enable_zerocopy_send_server": true, 00:23:35.251 "enable_zerocopy_send_client": false, 00:23:35.251 "zerocopy_threshold": 0, 00:23:35.251 "tls_version": 0, 00:23:35.251 "enable_ktls": false 00:23:35.251 } 00:23:35.251 }, 00:23:35.251 { 00:23:35.251 "method": "sock_impl_set_options", 00:23:35.251 "params": { 00:23:35.251 "impl_name": "posix", 00:23:35.251 "recv_buf_size": 2097152, 00:23:35.251 "send_buf_size": 2097152, 00:23:35.251 "enable_recv_pipe": true, 00:23:35.251 "enable_quickack": false, 00:23:35.251 "enable_placement_id": 0, 00:23:35.251 "enable_zerocopy_send_server": true, 00:23:35.251 "enable_zerocopy_send_client": false, 00:23:35.251 "zerocopy_threshold": 0, 00:23:35.251 "tls_version": 0, 00:23:35.251 "enable_ktls": false 00:23:35.251 } 00:23:35.251 
} 00:23:35.251 ] 00:23:35.251 }, 00:23:35.251 { 00:23:35.251 "subsystem": "vmd", 00:23:35.251 "config": [] 00:23:35.251 }, 00:23:35.251 { 00:23:35.251 "subsystem": "accel", 00:23:35.251 "config": [ 00:23:35.251 { 00:23:35.251 "method": "accel_set_options", 00:23:35.251 "params": { 00:23:35.251 "small_cache_size": 128, 00:23:35.251 "large_cache_size": 16, 00:23:35.251 "task_count": 2048, 00:23:35.251 "sequence_count": 2048, 00:23:35.251 "buf_count": 2048 00:23:35.251 } 00:23:35.251 } 00:23:35.251 ] 00:23:35.251 }, 00:23:35.251 { 00:23:35.251 "subsystem": "bdev", 00:23:35.251 "config": [ 00:23:35.251 { 00:23:35.251 "method": "bdev_set_options", 00:23:35.251 "params": { 00:23:35.251 "bdev_io_pool_size": 65535, 00:23:35.251 "bdev_io_cache_size": 256, 00:23:35.251 "bdev_auto_examine": true, 00:23:35.251 "iobuf_small_cache_size": 128, 00:23:35.251 "iobuf_large_cache_size": 16 00:23:35.251 } 00:23:35.251 }, 00:23:35.251 { 00:23:35.251 "method": "bdev_raid_set_options", 00:23:35.251 "params": { 00:23:35.251 "process_window_size_kb": 1024, 00:23:35.251 "process_max_bandwidth_mb_sec": 0 00:23:35.251 } 00:23:35.251 }, 00:23:35.251 { 00:23:35.251 "method": "bdev_iscsi_set_options", 00:23:35.251 "params": { 00:23:35.251 "timeout_sec": 30 00:23:35.251 } 00:23:35.251 }, 00:23:35.251 { 00:23:35.251 "method": "bdev_nvme_set_options", 00:23:35.251 "params": { 00:23:35.251 "action_on_timeout": "none", 00:23:35.251 "timeout_us": 0, 00:23:35.251 "timeout_admin_us": 0, 00:23:35.251 "keep_alive_timeout_ms": 10000, 00:23:35.251 "arbitration_burst": 0, 00:23:35.251 "low_priority_weight": 0, 00:23:35.251 "medium_priority_weight": 0, 00:23:35.251 "high_priority_weight": 0, 00:23:35.251 "nvme_adminq_poll_period_us": 10000, 00:23:35.251 "nvme_ioq_poll_period_us": 0, 00:23:35.251 "io_queue_requests": 512, 00:23:35.251 "delay_cmd_submit": true, 00:23:35.251 "transport_retry_count": 4, 00:23:35.251 "bdev_retry_count": 3, 00:23:35.251 "transport_ack_timeout": 0, 00:23:35.251 "ctrlr_loss_timeout_sec": 0, 00:23:35.251 "reconnect_delay_sec": 0, 00:23:35.251 "fast_io_fail_timeout_sec": 0, 00:23:35.251 "disable_auto_failback": false, 00:23:35.251 "generate_uuids": false, 00:23:35.251 "transport_tos": 0, 00:23:35.251 "nvme_error_stat": false, 00:23:35.251 "rdma_srq_size": 0, 00:23:35.251 "io_path_stat": false, 00:23:35.251 "allow_accel_sequence": false, 00:23:35.251 "rdma_max_cq_size": 0, 00:23:35.251 "rdma_cm_event_timeout_ms": 0, 00:23:35.251 "dhchap_digests": [ 00:23:35.251 "sha256", 00:23:35.251 "sha384", 00:23:35.251 "sha512" 00:23:35.251 ], 00:23:35.251 "dhchap_dhgroups": [ 00:23:35.251 "null", 00:23:35.251 "ffdhe2048", 00:23:35.251 "ffdhe3072", 00:23:35.251 "ffdhe4096", 00:23:35.251 "ffdhe6144", 00:23:35.251 "ffdhe8192" 00:23:35.251 ] 00:23:35.251 } 00:23:35.251 }, 00:23:35.251 { 00:23:35.251 "method": "bdev_nvme_attach_controller", 00:23:35.251 "params": { 00:23:35.251 "name": "TLSTEST", 00:23:35.251 "trtype": "TCP", 00:23:35.251 "adrfam": "IPv4", 00:23:35.251 "traddr": "10.0.0.2", 00:23:35.251 "trsvcid": "4420", 00:23:35.251 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:35.251 "prchk_reftag": false, 00:23:35.251 "prchk_guard": false, 00:23:35.251 "ctrlr_loss_timeout_sec": 0, 00:23:35.251 "reconnect_delay_sec": 0, 00:23:35.251 "fast_io_fail_timeout_sec": 0, 00:23:35.251 "psk": "key0", 00:23:35.251 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:35.251 "hdgst": false, 00:23:35.251 "ddgst": false, 00:23:35.251 "multipath": "multipath" 00:23:35.251 } 00:23:35.251 }, 00:23:35.251 { 00:23:35.251 "method": 
"bdev_nvme_set_hotplug", 00:23:35.251 "params": { 00:23:35.251 "period_us": 100000, 00:23:35.251 "enable": false 00:23:35.251 } 00:23:35.251 }, 00:23:35.251 { 00:23:35.251 "method": "bdev_wait_for_examine" 00:23:35.251 } 00:23:35.251 ] 00:23:35.251 }, 00:23:35.251 { 00:23:35.251 "subsystem": "nbd", 00:23:35.251 "config": [] 00:23:35.251 } 00:23:35.251 ] 00:23:35.251 }' 00:23:35.251 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 3858739 00:23:35.251 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3858739 ']' 00:23:35.251 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3858739 00:23:35.251 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:35.251 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:35.252 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3858739 00:23:35.252 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:23:35.252 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:23:35.252 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3858739' 00:23:35.252 killing process with pid 3858739 00:23:35.252 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3858739 00:23:35.252 Received shutdown signal, test time was about 10.000000 seconds 00:23:35.252 00:23:35.252 Latency(us) 00:23:35.252 [2024-11-02T10:35:35.654Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:35.252 [2024-11-02T10:35:35.654Z] =================================================================================================================== 00:23:35.252 [2024-11-02T10:35:35.654Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:35.252 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3858739 00:23:35.252 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 3858452 00:23:35.252 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3858452 ']' 00:23:35.252 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3858452 00:23:35.252 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:35.252 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:35.252 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3858452 00:23:35.252 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:35.252 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:35.252 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3858452' 00:23:35.252 killing process with pid 3858452 00:23:35.252 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3858452 00:23:35.252 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3858452 00:23:35.510 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:23:35.510 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:35.510 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:35.510 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:23:35.510 "subsystems": [ 00:23:35.510 { 00:23:35.510 "subsystem": "keyring", 00:23:35.510 "config": [ 00:23:35.510 { 00:23:35.510 "method": "keyring_file_add_key", 00:23:35.510 "params": { 00:23:35.510 "name": "key0", 00:23:35.510 "path": "/tmp/tmp.dDr9YiJYYW" 00:23:35.510 } 00:23:35.510 } 00:23:35.510 ] 00:23:35.510 }, 00:23:35.510 { 00:23:35.510 "subsystem": "iobuf", 00:23:35.510 "config": [ 00:23:35.510 { 00:23:35.510 "method": "iobuf_set_options", 00:23:35.510 "params": { 00:23:35.510 "small_pool_count": 8192, 00:23:35.510 "large_pool_count": 1024, 00:23:35.510 "small_bufsize": 8192, 00:23:35.510 "large_bufsize": 135168, 00:23:35.510 "enable_numa": false 00:23:35.510 } 00:23:35.510 } 00:23:35.510 ] 00:23:35.510 }, 00:23:35.510 { 00:23:35.510 "subsystem": "sock", 00:23:35.510 "config": [ 00:23:35.510 { 00:23:35.510 "method": "sock_set_default_impl", 00:23:35.510 "params": { 00:23:35.510 "impl_name": "posix" 00:23:35.510 } 00:23:35.510 }, 00:23:35.510 { 00:23:35.510 "method": "sock_impl_set_options", 00:23:35.510 "params": { 00:23:35.510 "impl_name": "ssl", 00:23:35.510 "recv_buf_size": 4096, 00:23:35.510 "send_buf_size": 4096, 00:23:35.510 "enable_recv_pipe": true, 00:23:35.510 "enable_quickack": false, 00:23:35.510 "enable_placement_id": 0, 00:23:35.510 "enable_zerocopy_send_server": true, 00:23:35.510 "enable_zerocopy_send_client": false, 00:23:35.510 "zerocopy_threshold": 0, 00:23:35.510 "tls_version": 0, 00:23:35.510 "enable_ktls": false 00:23:35.510 } 00:23:35.510 }, 00:23:35.510 { 00:23:35.510 "method": "sock_impl_set_options", 00:23:35.510 "params": { 00:23:35.511 "impl_name": "posix", 00:23:35.511 "recv_buf_size": 2097152, 00:23:35.511 "send_buf_size": 2097152, 00:23:35.511 "enable_recv_pipe": true, 00:23:35.511 "enable_quickack": false, 00:23:35.511 "enable_placement_id": 0, 00:23:35.511 "enable_zerocopy_send_server": true, 00:23:35.511 "enable_zerocopy_send_client": false, 00:23:35.511 "zerocopy_threshold": 0, 00:23:35.511 "tls_version": 0, 00:23:35.511 "enable_ktls": false 00:23:35.511 } 00:23:35.511 } 00:23:35.511 ] 00:23:35.511 }, 00:23:35.511 { 00:23:35.511 "subsystem": "vmd", 00:23:35.511 "config": [] 00:23:35.511 }, 00:23:35.511 { 00:23:35.511 "subsystem": "accel", 00:23:35.511 "config": [ 00:23:35.511 { 00:23:35.511 "method": "accel_set_options", 00:23:35.511 "params": { 00:23:35.511 "small_cache_size": 128, 00:23:35.511 "large_cache_size": 16, 00:23:35.511 "task_count": 2048, 00:23:35.511 "sequence_count": 2048, 00:23:35.511 "buf_count": 2048 00:23:35.511 } 00:23:35.511 } 00:23:35.511 ] 00:23:35.511 }, 00:23:35.511 { 00:23:35.511 "subsystem": "bdev", 00:23:35.511 "config": [ 00:23:35.511 { 00:23:35.511 "method": "bdev_set_options", 00:23:35.511 "params": { 00:23:35.511 "bdev_io_pool_size": 65535, 00:23:35.511 "bdev_io_cache_size": 256, 00:23:35.511 "bdev_auto_examine": true, 00:23:35.511 "iobuf_small_cache_size": 128, 00:23:35.511 "iobuf_large_cache_size": 16 00:23:35.511 } 00:23:35.511 }, 00:23:35.511 { 00:23:35.511 "method": "bdev_raid_set_options", 00:23:35.511 "params": { 00:23:35.511 "process_window_size_kb": 1024, 00:23:35.511 "process_max_bandwidth_mb_sec": 0 00:23:35.511 } 00:23:35.511 }, 
00:23:35.511 { 00:23:35.511 "method": "bdev_iscsi_set_options", 00:23:35.511 "params": { 00:23:35.511 "timeout_sec": 30 00:23:35.511 } 00:23:35.511 }, 00:23:35.511 { 00:23:35.511 "method": "bdev_nvme_set_options", 00:23:35.511 "params": { 00:23:35.511 "action_on_timeout": "none", 00:23:35.511 "timeout_us": 0, 00:23:35.511 "timeout_admin_us": 0, 00:23:35.511 "keep_alive_timeout_ms": 10000, 00:23:35.511 "arbitration_burst": 0, 00:23:35.511 "low_priority_weight": 0, 00:23:35.511 "medium_priority_weight": 0, 00:23:35.511 "high_priority_weight": 0, 00:23:35.511 "nvme_adminq_poll_period_us": 10000, 00:23:35.511 "nvme_ioq_poll_period_us": 0, 00:23:35.511 "io_queue_requests": 0, 00:23:35.511 "delay_cmd_submit": true, 00:23:35.511 "transport_retry_count": 4, 00:23:35.511 "bdev_retry_count": 3, 00:23:35.511 "transport_ack_timeout": 0, 00:23:35.511 "ctrlr_loss_timeout_sec": 0, 00:23:35.511 "reconnect_delay_sec": 0, 00:23:35.511 "fast_io_fail_timeout_sec": 0, 00:23:35.511 "disable_auto_failback": false, 00:23:35.511 "generate_uuids": false, 00:23:35.511 "transport_tos": 0, 00:23:35.511 "nvme_error_stat": false, 00:23:35.511 "rdma_srq_size": 0, 00:23:35.511 "io_path_stat": false, 00:23:35.511 "allow_accel_sequence": false, 00:23:35.511 "rdma_max_cq_size": 0, 00:23:35.511 "rdma_cm_event_timeout_ms": 0, 00:23:35.511 "dhchap_digests": [ 00:23:35.511 "sha256", 00:23:35.511 "sha384", 00:23:35.511 "sha512" 00:23:35.511 ], 00:23:35.511 "dhchap_dhgroups": [ 00:23:35.511 "null", 00:23:35.511 "ffdhe2048", 00:23:35.511 "ffdhe3072", 00:23:35.511 "ffdhe4096", 00:23:35.511 "ffdhe6144", 00:23:35.511 "ffdhe8192" 00:23:35.511 ] 00:23:35.511 } 00:23:35.511 }, 00:23:35.511 { 00:23:35.511 "method": "bdev_nvme_set_hotplug", 00:23:35.511 "params": { 00:23:35.511 "period_us": 100000, 00:23:35.511 "enable": false 00:23:35.511 } 00:23:35.511 }, 00:23:35.511 { 00:23:35.511 "method": "bdev_malloc_create", 00:23:35.511 "params": { 00:23:35.511 "name": "malloc0", 00:23:35.511 "num_blocks": 8192, 00:23:35.511 "block_size": 4096, 00:23:35.511 "physical_block_size": 4096, 00:23:35.511 "uuid": "809bb103-1b6a-4a06-a4b6-4b2640226602", 00:23:35.511 "optimal_io_boundary": 0, 00:23:35.511 "md_size": 0, 00:23:35.511 "dif_type": 0, 00:23:35.511 "dif_is_head_of_md": false, 00:23:35.511 "dif_pi_format": 0 00:23:35.511 } 00:23:35.511 }, 00:23:35.511 { 00:23:35.511 "method": "bdev_wait_for_examine" 00:23:35.511 } 00:23:35.511 ] 00:23:35.511 }, 00:23:35.511 { 00:23:35.511 "subsystem": "nbd", 00:23:35.511 "config": [] 00:23:35.511 }, 00:23:35.511 { 00:23:35.511 "subsystem": "scheduler", 00:23:35.511 "config": [ 00:23:35.511 { 00:23:35.511 "method": "framework_set_scheduler", 00:23:35.511 "params": { 00:23:35.511 "name": "static" 00:23:35.511 } 00:23:35.511 } 00:23:35.511 ] 00:23:35.511 }, 00:23:35.511 { 00:23:35.511 "subsystem": "nvmf", 00:23:35.511 "config": [ 00:23:35.511 { 00:23:35.511 "method": "nvmf_set_config", 00:23:35.511 "params": { 00:23:35.511 "discovery_filter": "match_any", 00:23:35.511 "admin_cmd_passthru": { 00:23:35.511 "identify_ctrlr": false 00:23:35.511 }, 00:23:35.511 "dhchap_digests": [ 00:23:35.511 "sha256", 00:23:35.511 "sha384", 00:23:35.511 "sha512" 00:23:35.511 ], 00:23:35.511 "dhchap_dhgroups": [ 00:23:35.511 "null", 00:23:35.511 "ffdhe2048", 00:23:35.511 "ffdhe3072", 00:23:35.511 "ffdhe4096", 00:23:35.511 "ffdhe6144", 00:23:35.511 "ffdhe8192" 00:23:35.511 ] 00:23:35.511 } 00:23:35.511 }, 00:23:35.511 { 00:23:35.511 "method": "nvmf_set_max_subsystems", 00:23:35.511 "params": { 00:23:35.511 "max_subsystems": 1024 
00:23:35.511 } 00:23:35.511 }, 00:23:35.511 { 00:23:35.511 "method": "nvmf_set_crdt", 00:23:35.511 "params": { 00:23:35.511 "crdt1": 0, 00:23:35.511 "crdt2": 0, 00:23:35.511 "crdt3": 0 00:23:35.511 } 00:23:35.511 }, 00:23:35.511 { 00:23:35.511 "method": "nvmf_create_transport", 00:23:35.511 "params": { 00:23:35.511 "trtype": "TCP", 00:23:35.511 "max_queue_depth": 128, 00:23:35.511 "max_io_qpairs_per_ctrlr": 127, 00:23:35.511 "in_capsule_data_size": 4096, 00:23:35.511 "max_io_size": 131072, 00:23:35.511 "io_unit_size": 131072, 00:23:35.511 "max_aq_depth": 128, 00:23:35.511 "num_shared_buffers": 511, 00:23:35.511 "buf_cache_size": 4294967295, 00:23:35.511 "dif_insert_or_strip": false, 00:23:35.511 "zcopy": false, 00:23:35.511 "c2h_success": false, 00:23:35.511 "sock_priority": 0, 00:23:35.511 "abort_timeout_sec": 1, 00:23:35.511 "ack_timeout": 0, 00:23:35.511 "data_wr_pool_size": 0 00:23:35.511 } 00:23:35.511 }, 00:23:35.511 { 00:23:35.511 "method": "nvmf_create_subsystem", 00:23:35.511 "params": { 00:23:35.511 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:35.511 "allow_any_host": false, 00:23:35.511 "serial_number": "SPDK00000000000001", 00:23:35.511 "model_number": "SPDK bdev Controller", 00:23:35.511 "max_namespaces": 10, 00:23:35.511 "min_cntlid": 1, 00:23:35.511 "max_cntlid": 65519, 00:23:35.511 "ana_reporting": false 00:23:35.511 } 00:23:35.511 }, 00:23:35.511 { 00:23:35.511 "method": "nvmf_subsystem_add_host", 00:23:35.511 "params": { 00:23:35.511 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:35.511 "host": "nqn.2016-06.io.spdk:host1", 00:23:35.511 "psk": "key0" 00:23:35.511 } 00:23:35.511 }, 00:23:35.511 { 00:23:35.511 "method": "nvmf_subsystem_add_ns", 00:23:35.511 "params": { 00:23:35.511 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:35.511 "namespace": { 00:23:35.511 "nsid": 1, 00:23:35.511 "bdev_name": "malloc0", 00:23:35.511 "nguid": "809BB1031B6A4A06A4B64B2640226602", 00:23:35.511 "uuid": "809bb103-1b6a-4a06-a4b6-4b2640226602", 00:23:35.511 "no_auto_visible": false 00:23:35.511 } 00:23:35.511 } 00:23:35.511 }, 00:23:35.511 { 00:23:35.511 "method": "nvmf_subsystem_add_listener", 00:23:35.511 "params": { 00:23:35.511 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:35.511 "listen_address": { 00:23:35.511 "trtype": "TCP", 00:23:35.511 "adrfam": "IPv4", 00:23:35.511 "traddr": "10.0.0.2", 00:23:35.512 "trsvcid": "4420" 00:23:35.512 }, 00:23:35.512 "secure_channel": true 00:23:35.512 } 00:23:35.512 } 00:23:35.512 ] 00:23:35.512 } 00:23:35.512 ] 00:23:35.512 }' 00:23:35.512 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:35.512 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3859018 00:23:35.512 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:23:35.512 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3859018 00:23:35.512 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3859018 ']' 00:23:35.512 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:35.512 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:35.512 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:23:35.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:35.512 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:35.512 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:35.769 [2024-11-02 11:35:35.912304] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:23:35.769 [2024-11-02 11:35:35.912397] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:35.769 [2024-11-02 11:35:35.989266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:35.769 [2024-11-02 11:35:36.041163] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:35.769 [2024-11-02 11:35:36.041225] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:35.769 [2024-11-02 11:35:36.041242] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:35.769 [2024-11-02 11:35:36.041264] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:35.769 [2024-11-02 11:35:36.041278] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:35.769 [2024-11-02 11:35:36.041966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:36.028 [2024-11-02 11:35:36.279898] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:36.028 [2024-11-02 11:35:36.311920] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:36.028 [2024-11-02 11:35:36.312187] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:36.028 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:36.028 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:36.028 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:36.028 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:36.028 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:36.028 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:36.028 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=3859059 00:23:36.028 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 3859059 /var/tmp/bdevperf.sock 00:23:36.028 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3859059 ']' 00:23:36.028 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:36.028 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:23:36.028 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:36.028 11:35:36 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:36.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:36.028 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:23:36.028 "subsystems": [ 00:23:36.028 { 00:23:36.028 "subsystem": "keyring", 00:23:36.028 "config": [ 00:23:36.028 { 00:23:36.028 "method": "keyring_file_add_key", 00:23:36.028 "params": { 00:23:36.028 "name": "key0", 00:23:36.028 "path": "/tmp/tmp.dDr9YiJYYW" 00:23:36.028 } 00:23:36.028 } 00:23:36.028 ] 00:23:36.028 }, 00:23:36.028 { 00:23:36.028 "subsystem": "iobuf", 00:23:36.028 "config": [ 00:23:36.028 { 00:23:36.028 "method": "iobuf_set_options", 00:23:36.028 "params": { 00:23:36.028 "small_pool_count": 8192, 00:23:36.028 "large_pool_count": 1024, 00:23:36.028 "small_bufsize": 8192, 00:23:36.028 "large_bufsize": 135168, 00:23:36.028 "enable_numa": false 00:23:36.028 } 00:23:36.028 } 00:23:36.028 ] 00:23:36.028 }, 00:23:36.028 { 00:23:36.028 "subsystem": "sock", 00:23:36.028 "config": [ 00:23:36.028 { 00:23:36.028 "method": "sock_set_default_impl", 00:23:36.028 "params": { 00:23:36.028 "impl_name": "posix" 00:23:36.028 } 00:23:36.028 }, 00:23:36.028 { 00:23:36.028 "method": "sock_impl_set_options", 00:23:36.028 "params": { 00:23:36.028 "impl_name": "ssl", 00:23:36.028 "recv_buf_size": 4096, 00:23:36.028 "send_buf_size": 4096, 00:23:36.028 "enable_recv_pipe": true, 00:23:36.028 "enable_quickack": false, 00:23:36.028 "enable_placement_id": 0, 00:23:36.028 "enable_zerocopy_send_server": true, 00:23:36.028 "enable_zerocopy_send_client": false, 00:23:36.028 "zerocopy_threshold": 0, 00:23:36.028 "tls_version": 0, 00:23:36.028 "enable_ktls": false 00:23:36.028 } 00:23:36.028 }, 00:23:36.028 { 00:23:36.028 "method": "sock_impl_set_options", 00:23:36.028 "params": { 00:23:36.028 "impl_name": "posix", 00:23:36.028 "recv_buf_size": 2097152, 00:23:36.028 "send_buf_size": 2097152, 00:23:36.028 "enable_recv_pipe": true, 00:23:36.028 "enable_quickack": false, 00:23:36.028 "enable_placement_id": 0, 00:23:36.028 "enable_zerocopy_send_server": true, 00:23:36.028 "enable_zerocopy_send_client": false, 00:23:36.028 "zerocopy_threshold": 0, 00:23:36.028 "tls_version": 0, 00:23:36.028 "enable_ktls": false 00:23:36.028 } 00:23:36.028 } 00:23:36.028 ] 00:23:36.028 }, 00:23:36.028 { 00:23:36.028 "subsystem": "vmd", 00:23:36.028 "config": [] 00:23:36.028 }, 00:23:36.028 { 00:23:36.028 "subsystem": "accel", 00:23:36.028 "config": [ 00:23:36.028 { 00:23:36.028 "method": "accel_set_options", 00:23:36.028 "params": { 00:23:36.028 "small_cache_size": 128, 00:23:36.028 "large_cache_size": 16, 00:23:36.028 "task_count": 2048, 00:23:36.028 "sequence_count": 2048, 00:23:36.028 "buf_count": 2048 00:23:36.028 } 00:23:36.028 } 00:23:36.028 ] 00:23:36.028 }, 00:23:36.028 { 00:23:36.028 "subsystem": "bdev", 00:23:36.028 "config": [ 00:23:36.028 { 00:23:36.028 "method": "bdev_set_options", 00:23:36.028 "params": { 00:23:36.028 "bdev_io_pool_size": 65535, 00:23:36.028 "bdev_io_cache_size": 256, 00:23:36.028 "bdev_auto_examine": true, 00:23:36.028 "iobuf_small_cache_size": 128, 00:23:36.028 "iobuf_large_cache_size": 16 00:23:36.028 } 00:23:36.028 }, 00:23:36.028 { 00:23:36.028 "method": "bdev_raid_set_options", 00:23:36.028 "params": { 00:23:36.028 "process_window_size_kb": 1024, 00:23:36.028 "process_max_bandwidth_mb_sec": 0 00:23:36.028 } 00:23:36.028 }, 
00:23:36.028 { 00:23:36.028 "method": "bdev_iscsi_set_options", 00:23:36.028 "params": { 00:23:36.028 "timeout_sec": 30 00:23:36.028 } 00:23:36.028 }, 00:23:36.028 { 00:23:36.028 "method": "bdev_nvme_set_options", 00:23:36.028 "params": { 00:23:36.028 "action_on_timeout": "none", 00:23:36.028 "timeout_us": 0, 00:23:36.028 "timeout_admin_us": 0, 00:23:36.028 "keep_alive_timeout_ms": 10000, 00:23:36.028 "arbitration_burst": 0, 00:23:36.028 "low_priority_weight": 0, 00:23:36.028 "medium_priority_weight": 0, 00:23:36.028 "high_priority_weight": 0, 00:23:36.028 "nvme_adminq_poll_period_us": 10000, 00:23:36.028 "nvme_ioq_poll_period_us": 0, 00:23:36.028 "io_queue_requests": 512, 00:23:36.028 "delay_cmd_submit": true, 00:23:36.028 "transport_retry_count": 4, 00:23:36.028 "bdev_retry_count": 3, 00:23:36.028 "transport_ack_timeout": 0, 00:23:36.028 "ctrlr_loss_timeout_sec": 0, 00:23:36.028 "reconnect_delay_sec": 0, 00:23:36.028 "fast_io_fail_timeout_sec": 0, 00:23:36.028 "disable_auto_failback": false, 00:23:36.028 "generate_uuids": false, 00:23:36.028 "transport_tos": 0, 00:23:36.028 "nvme_error_stat": false, 00:23:36.028 "rdma_srq_size": 0, 00:23:36.028 "io_path_stat": false, 00:23:36.028 "allow_accel_sequence": false, 00:23:36.028 "rdma_max_cq_size": 0, 00:23:36.028 "rdma_cm_event_timeout_ms": 0, 00:23:36.028 "dhchap_digests": [ 00:23:36.028 "sha256", 00:23:36.028 "sha384", 00:23:36.028 "sha512" 00:23:36.028 ], 00:23:36.028 "dhchap_dhgroups": [ 00:23:36.028 "null", 00:23:36.028 "ffdhe2048", 00:23:36.028 "ffdhe3072", 00:23:36.028 "ffdhe4096", 00:23:36.028 "ffdhe6144", 00:23:36.028 "ffdhe8192" 00:23:36.028 ] 00:23:36.028 } 00:23:36.028 }, 00:23:36.028 { 00:23:36.028 "method": "bdev_nvme_attach_controller", 00:23:36.028 "params": { 00:23:36.028 "name": "TLSTEST", 00:23:36.028 "trtype": "TCP", 00:23:36.028 "adrfam": "IPv4", 00:23:36.028 "traddr": "10.0.0.2", 00:23:36.028 "trsvcid": "4420", 00:23:36.028 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:36.028 "prchk_reftag": false, 00:23:36.028 "prchk_guard": false, 00:23:36.028 "ctrlr_loss_timeout_sec": 0, 00:23:36.028 "reconnect_delay_sec": 0, 00:23:36.028 "fast_io_fail_timeout_sec": 0, 00:23:36.028 "psk": "key0", 00:23:36.028 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:36.028 "hdgst": false, 00:23:36.028 "ddgst": false, 00:23:36.028 "multipath": "multipath" 00:23:36.028 } 00:23:36.028 }, 00:23:36.028 { 00:23:36.028 "method": "bdev_nvme_set_hotplug", 00:23:36.028 "params": { 00:23:36.028 "period_us": 100000, 00:23:36.028 "enable": false 00:23:36.028 } 00:23:36.028 }, 00:23:36.028 { 00:23:36.029 "method": "bdev_wait_for_examine" 00:23:36.029 } 00:23:36.029 ] 00:23:36.029 }, 00:23:36.029 { 00:23:36.029 "subsystem": "nbd", 00:23:36.029 "config": [] 00:23:36.029 } 00:23:36.029 ] 00:23:36.029 }' 00:23:36.029 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:36.029 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:36.029 [2024-11-02 11:35:36.412982] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 
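The bdevperf instance starting here is handed its whole configuration (the key0 keyring entry plus a bdev_nvme_attach_controller call with --psk key0) as a JSON config on /dev/fd/63, whereas the earlier instance, pid 3858739, was configured step by step over its private RPC socket; either way, bdevperf.py perform_tests then drives the timed verify run whose IOPS and latency figures follow. Reduced to the RPC form, with paths shortened relative to the jenkins workspace, a placeholder key path, and the target from the sections above listening on 10.0.0.2:4420, the host-side sequence is roughly:

  # start bdevperf idle (-z) on a private RPC socket; 128 QD, 4 KiB verify workload, 10 s
  build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &

  # register the PSK and attach the TLS-protected controller through that socket
  scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/psk.txt   # placeholder key path
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

  # run the configured workload and collect the JSON results shown in the trace
  examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests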
00:23:36.029 [2024-11-02 11:35:36.413072] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3859059 ] 00:23:36.286 [2024-11-02 11:35:36.488665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:36.286 [2024-11-02 11:35:36.534382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:36.555 [2024-11-02 11:35:36.702310] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:36.555 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:36.555 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:36.555 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:36.555 Running I/O for 10 seconds... 00:23:38.860 3283.00 IOPS, 12.82 MiB/s [2024-11-02T10:35:40.197Z] 3459.00 IOPS, 13.51 MiB/s [2024-11-02T10:35:41.128Z] 3459.33 IOPS, 13.51 MiB/s [2024-11-02T10:35:42.060Z] 3443.25 IOPS, 13.45 MiB/s [2024-11-02T10:35:42.992Z] 3444.00 IOPS, 13.45 MiB/s [2024-11-02T10:35:44.363Z] 3449.67 IOPS, 13.48 MiB/s [2024-11-02T10:35:45.296Z] 3434.57 IOPS, 13.42 MiB/s [2024-11-02T10:35:46.306Z] 3451.88 IOPS, 13.48 MiB/s [2024-11-02T10:35:47.239Z] 3447.89 IOPS, 13.47 MiB/s [2024-11-02T10:35:47.239Z] 3443.30 IOPS, 13.45 MiB/s 00:23:46.837 Latency(us) 00:23:46.837 [2024-11-02T10:35:47.239Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:46.837 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:46.837 Verification LBA range: start 0x0 length 0x2000 00:23:46.837 TLSTESTn1 : 10.03 3444.21 13.45 0.00 0.00 37079.32 10194.49 53205.52 00:23:46.837 [2024-11-02T10:35:47.239Z] =================================================================================================================== 00:23:46.837 [2024-11-02T10:35:47.239Z] Total : 3444.21 13.45 0.00 0.00 37079.32 10194.49 53205.52 00:23:46.837 { 00:23:46.837 "results": [ 00:23:46.837 { 00:23:46.837 "job": "TLSTESTn1", 00:23:46.837 "core_mask": "0x4", 00:23:46.837 "workload": "verify", 00:23:46.837 "status": "finished", 00:23:46.837 "verify_range": { 00:23:46.837 "start": 0, 00:23:46.837 "length": 8192 00:23:46.837 }, 00:23:46.837 "queue_depth": 128, 00:23:46.837 "io_size": 4096, 00:23:46.837 "runtime": 10.034235, 00:23:46.837 "iops": 3444.2087513397883, 00:23:46.837 "mibps": 13.453940434921048, 00:23:46.837 "io_failed": 0, 00:23:46.837 "io_timeout": 0, 00:23:46.837 "avg_latency_us": 37079.319528120715, 00:23:46.837 "min_latency_us": 10194.488888888889, 00:23:46.837 "max_latency_us": 53205.52296296296 00:23:46.837 } 00:23:46.837 ], 00:23:46.837 "core_count": 1 00:23:46.837 } 00:23:46.837 11:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:46.837 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 3859059 00:23:46.837 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3859059 ']' 00:23:46.837 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3859059 00:23:46.837 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@957 -- # uname 00:23:46.837 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:46.837 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3859059 00:23:46.837 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:23:46.837 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:23:46.837 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3859059' 00:23:46.837 killing process with pid 3859059 00:23:46.837 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3859059 00:23:46.837 Received shutdown signal, test time was about 10.000000 seconds 00:23:46.837 00:23:46.837 Latency(us) 00:23:46.837 [2024-11-02T10:35:47.239Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:46.837 [2024-11-02T10:35:47.239Z] =================================================================================================================== 00:23:46.837 [2024-11-02T10:35:47.239Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:46.837 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3859059 00:23:46.837 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 3859018 00:23:46.837 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3859018 ']' 00:23:46.837 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3859018 00:23:46.837 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:47.095 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:47.095 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3859018 00:23:47.095 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:47.095 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:47.095 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3859018' 00:23:47.095 killing process with pid 3859018 00:23:47.095 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3859018 00:23:47.095 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3859018 00:23:47.095 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:23:47.095 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:47.095 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:47.095 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:47.095 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3860367 00:23:47.095 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:47.354 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3860367 
00:23:47.354 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3860367 ']' 00:23:47.354 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:47.354 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:47.354 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:47.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:47.354 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:47.354 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:47.354 [2024-11-02 11:35:47.547170] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:23:47.354 [2024-11-02 11:35:47.547280] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:47.354 [2024-11-02 11:35:47.620149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:47.354 [2024-11-02 11:35:47.663369] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:47.354 [2024-11-02 11:35:47.663428] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:47.354 [2024-11-02 11:35:47.663451] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:47.354 [2024-11-02 11:35:47.663463] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:47.354 [2024-11-02 11:35:47.663473] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
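The app_setup_trace notices printed at startup point to two ways of looking at the events enabled by -e 0xFFFF; a minimal sketch of both, with the copy destination being an arbitrary, hypothetical path:

spdk_trace -s nvmf -i 0                      # live snapshot of the running nvmf target's tracepoints, as the notice suggests
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0   # or keep the shared-memory trace file for offline analysis (destination path hypothetical)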
00:23:47.354 [2024-11-02 11:35:47.664049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:47.612 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:47.612 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:47.612 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:47.612 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:47.612 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:47.612 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:47.612 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.dDr9YiJYYW 00:23:47.612 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.dDr9YiJYYW 00:23:47.612 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:47.869 [2024-11-02 11:35:48.046615] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:47.869 11:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:48.126 11:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:48.384 [2024-11-02 11:35:48.612140] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:48.384 [2024-11-02 11:35:48.612418] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:48.384 11:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:48.641 malloc0 00:23:48.641 11:35:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:48.899 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.dDr9YiJYYW 00:23:49.156 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:49.421 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=3860656 00:23:49.421 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:49.421 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:49.421 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 3860656 /var/tmp/bdevperf.sock 00:23:49.421 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@833 -- # '[' -z 3860656 ']' 00:23:49.421 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:49.421 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:49.421 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:49.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:49.421 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:49.421 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:49.421 [2024-11-02 11:35:49.770392] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:23:49.422 [2024-11-02 11:35:49.770472] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3860656 ] 00:23:49.687 [2024-11-02 11:35:49.842876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:49.687 [2024-11-02 11:35:49.891683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:49.687 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:49.687 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:49.688 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.dDr9YiJYYW 00:23:49.945 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:50.202 [2024-11-02 11:35:50.602732] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:50.459 nvme0n1 00:23:50.459 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:50.459 Running I/O for 1 seconds... 
00:23:51.832 3285.00 IOPS, 12.83 MiB/s 00:23:51.832 Latency(us) 00:23:51.832 [2024-11-02T10:35:52.234Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:51.832 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:51.832 Verification LBA range: start 0x0 length 0x2000 00:23:51.832 nvme0n1 : 1.05 3244.93 12.68 0.00 0.00 38812.51 8835.22 55535.69 00:23:51.832 [2024-11-02T10:35:52.234Z] =================================================================================================================== 00:23:51.832 [2024-11-02T10:35:52.234Z] Total : 3244.93 12.68 0.00 0.00 38812.51 8835.22 55535.69 00:23:51.832 { 00:23:51.832 "results": [ 00:23:51.832 { 00:23:51.832 "job": "nvme0n1", 00:23:51.832 "core_mask": "0x2", 00:23:51.832 "workload": "verify", 00:23:51.832 "status": "finished", 00:23:51.832 "verify_range": { 00:23:51.832 "start": 0, 00:23:51.832 "length": 8192 00:23:51.832 }, 00:23:51.832 "queue_depth": 128, 00:23:51.832 "io_size": 4096, 00:23:51.832 "runtime": 1.051794, 00:23:51.832 "iops": 3244.931992386342, 00:23:51.832 "mibps": 12.675515595259148, 00:23:51.832 "io_failed": 0, 00:23:51.832 "io_timeout": 0, 00:23:51.832 "avg_latency_us": 38812.50850625604, 00:23:51.832 "min_latency_us": 8835.223703703703, 00:23:51.832 "max_latency_us": 55535.69185185185 00:23:51.832 } 00:23:51.832 ], 00:23:51.832 "core_count": 1 00:23:51.832 } 00:23:51.832 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 3860656 00:23:51.832 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3860656 ']' 00:23:51.832 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3860656 00:23:51.832 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:51.832 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:51.832 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3860656 00:23:51.832 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:51.832 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:51.832 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3860656' 00:23:51.832 killing process with pid 3860656 00:23:51.832 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3860656 00:23:51.832 Received shutdown signal, test time was about 1.000000 seconds 00:23:51.832 00:23:51.832 Latency(us) 00:23:51.832 [2024-11-02T10:35:52.234Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:51.832 [2024-11-02T10:35:52.234Z] =================================================================================================================== 00:23:51.832 [2024-11-02T10:35:52.234Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:51.832 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3860656 00:23:51.832 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 3860367 00:23:51.832 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3860367 ']' 00:23:51.832 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3860367 00:23:51.832 11:35:52 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:51.832 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:51.832 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3860367 00:23:51.832 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:51.832 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:51.832 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3860367' 00:23:51.832 killing process with pid 3860367 00:23:51.832 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3860367 00:23:51.832 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3860367 00:23:52.090 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:23:52.090 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:52.090 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:52.090 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:52.090 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3861024 00:23:52.090 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:52.090 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3861024 00:23:52.090 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3861024 ']' 00:23:52.090 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:52.090 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:52.090 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:52.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:52.090 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:52.090 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:52.090 [2024-11-02 11:35:52.422408] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:23:52.090 [2024-11-02 11:35:52.422499] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:52.348 [2024-11-02 11:35:52.494807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:52.348 [2024-11-02 11:35:52.539082] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:52.348 [2024-11-02 11:35:52.539137] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:52.348 [2024-11-02 11:35:52.539160] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:52.348 [2024-11-02 11:35:52.539171] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:52.348 [2024-11-02 11:35:52.539180] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:52.348 [2024-11-02 11:35:52.539736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:52.348 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:52.348 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:52.348 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:52.348 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:52.348 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:52.348 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:52.348 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:23:52.348 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.348 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:52.348 [2024-11-02 11:35:52.680813] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:52.348 malloc0 00:23:52.348 [2024-11-02 11:35:52.712401] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:52.348 [2024-11-02 11:35:52.712673] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:52.348 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.348 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=3861076 00:23:52.348 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:52.348 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 3861076 /var/tmp/bdevperf.sock 00:23:52.348 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3861076 ']' 00:23:52.348 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:52.348 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:52.348 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:52.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:52.348 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:52.348 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:52.606 [2024-11-02 11:35:52.785926] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 
00:23:52.606 [2024-11-02 11:35:52.785992] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3861076 ] 00:23:52.606 [2024-11-02 11:35:52.857573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:52.606 [2024-11-02 11:35:52.907554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:52.864 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:52.864 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:52.864 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.dDr9YiJYYW 00:23:53.121 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:53.378 [2024-11-02 11:35:53.542452] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:53.378 nvme0n1 00:23:53.378 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:53.378 Running I/O for 1 seconds... 00:23:54.751 3154.00 IOPS, 12.32 MiB/s 00:23:54.751 Latency(us) 00:23:54.751 [2024-11-02T10:35:55.153Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:54.751 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:54.751 Verification LBA range: start 0x0 length 0x2000 00:23:54.751 nvme0n1 : 1.04 3165.67 12.37 0.00 0.00 39772.03 6310.87 52817.16 00:23:54.751 [2024-11-02T10:35:55.153Z] =================================================================================================================== 00:23:54.751 [2024-11-02T10:35:55.153Z] Total : 3165.67 12.37 0.00 0.00 39772.03 6310.87 52817.16 00:23:54.751 { 00:23:54.751 "results": [ 00:23:54.751 { 00:23:54.751 "job": "nvme0n1", 00:23:54.751 "core_mask": "0x2", 00:23:54.751 "workload": "verify", 00:23:54.751 "status": "finished", 00:23:54.751 "verify_range": { 00:23:54.751 "start": 0, 00:23:54.751 "length": 8192 00:23:54.751 }, 00:23:54.751 "queue_depth": 128, 00:23:54.751 "io_size": 4096, 00:23:54.751 "runtime": 1.036748, 00:23:54.751 "iops": 3165.668031189836, 00:23:54.751 "mibps": 12.365890746835296, 00:23:54.751 "io_failed": 0, 00:23:54.751 "io_timeout": 0, 00:23:54.751 "avg_latency_us": 39772.03334913219, 00:23:54.751 "min_latency_us": 6310.874074074074, 00:23:54.751 "max_latency_us": 52817.16148148148 00:23:54.751 } 00:23:54.751 ], 00:23:54.751 "core_count": 1 00:23:54.751 } 00:23:54.751 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:23:54.751 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.751 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:54.751 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.751 11:35:54 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:23:54.751 "subsystems": [ 00:23:54.751 { 00:23:54.751 "subsystem": "keyring", 00:23:54.751 "config": [ 00:23:54.751 { 00:23:54.751 "method": "keyring_file_add_key", 00:23:54.751 "params": { 00:23:54.751 "name": "key0", 00:23:54.751 "path": "/tmp/tmp.dDr9YiJYYW" 00:23:54.751 } 00:23:54.751 } 00:23:54.751 ] 00:23:54.751 }, 00:23:54.751 { 00:23:54.751 "subsystem": "iobuf", 00:23:54.751 "config": [ 00:23:54.751 { 00:23:54.751 "method": "iobuf_set_options", 00:23:54.751 "params": { 00:23:54.751 "small_pool_count": 8192, 00:23:54.751 "large_pool_count": 1024, 00:23:54.751 "small_bufsize": 8192, 00:23:54.751 "large_bufsize": 135168, 00:23:54.751 "enable_numa": false 00:23:54.751 } 00:23:54.751 } 00:23:54.751 ] 00:23:54.751 }, 00:23:54.751 { 00:23:54.751 "subsystem": "sock", 00:23:54.751 "config": [ 00:23:54.751 { 00:23:54.751 "method": "sock_set_default_impl", 00:23:54.751 "params": { 00:23:54.751 "impl_name": "posix" 00:23:54.751 } 00:23:54.751 }, 00:23:54.751 { 00:23:54.751 "method": "sock_impl_set_options", 00:23:54.751 "params": { 00:23:54.751 "impl_name": "ssl", 00:23:54.751 "recv_buf_size": 4096, 00:23:54.751 "send_buf_size": 4096, 00:23:54.751 "enable_recv_pipe": true, 00:23:54.751 "enable_quickack": false, 00:23:54.751 "enable_placement_id": 0, 00:23:54.751 "enable_zerocopy_send_server": true, 00:23:54.751 "enable_zerocopy_send_client": false, 00:23:54.751 "zerocopy_threshold": 0, 00:23:54.751 "tls_version": 0, 00:23:54.751 "enable_ktls": false 00:23:54.751 } 00:23:54.751 }, 00:23:54.751 { 00:23:54.751 "method": "sock_impl_set_options", 00:23:54.751 "params": { 00:23:54.751 "impl_name": "posix", 00:23:54.751 "recv_buf_size": 2097152, 00:23:54.751 "send_buf_size": 2097152, 00:23:54.751 "enable_recv_pipe": true, 00:23:54.751 "enable_quickack": false, 00:23:54.751 "enable_placement_id": 0, 00:23:54.751 "enable_zerocopy_send_server": true, 00:23:54.751 "enable_zerocopy_send_client": false, 00:23:54.751 "zerocopy_threshold": 0, 00:23:54.751 "tls_version": 0, 00:23:54.751 "enable_ktls": false 00:23:54.751 } 00:23:54.751 } 00:23:54.751 ] 00:23:54.751 }, 00:23:54.751 { 00:23:54.751 "subsystem": "vmd", 00:23:54.751 "config": [] 00:23:54.751 }, 00:23:54.751 { 00:23:54.751 "subsystem": "accel", 00:23:54.751 "config": [ 00:23:54.751 { 00:23:54.751 "method": "accel_set_options", 00:23:54.751 "params": { 00:23:54.751 "small_cache_size": 128, 00:23:54.751 "large_cache_size": 16, 00:23:54.751 "task_count": 2048, 00:23:54.751 "sequence_count": 2048, 00:23:54.751 "buf_count": 2048 00:23:54.751 } 00:23:54.751 } 00:23:54.751 ] 00:23:54.751 }, 00:23:54.751 { 00:23:54.751 "subsystem": "bdev", 00:23:54.751 "config": [ 00:23:54.751 { 00:23:54.751 "method": "bdev_set_options", 00:23:54.751 "params": { 00:23:54.751 "bdev_io_pool_size": 65535, 00:23:54.751 "bdev_io_cache_size": 256, 00:23:54.751 "bdev_auto_examine": true, 00:23:54.751 "iobuf_small_cache_size": 128, 00:23:54.751 "iobuf_large_cache_size": 16 00:23:54.751 } 00:23:54.751 }, 00:23:54.751 { 00:23:54.751 "method": "bdev_raid_set_options", 00:23:54.751 "params": { 00:23:54.751 "process_window_size_kb": 1024, 00:23:54.751 "process_max_bandwidth_mb_sec": 0 00:23:54.751 } 00:23:54.751 }, 00:23:54.751 { 00:23:54.751 "method": "bdev_iscsi_set_options", 00:23:54.751 "params": { 00:23:54.751 "timeout_sec": 30 00:23:54.751 } 00:23:54.751 }, 00:23:54.751 { 00:23:54.751 "method": "bdev_nvme_set_options", 00:23:54.752 "params": { 00:23:54.752 "action_on_timeout": "none", 00:23:54.752 
"timeout_us": 0, 00:23:54.752 "timeout_admin_us": 0, 00:23:54.752 "keep_alive_timeout_ms": 10000, 00:23:54.752 "arbitration_burst": 0, 00:23:54.752 "low_priority_weight": 0, 00:23:54.752 "medium_priority_weight": 0, 00:23:54.752 "high_priority_weight": 0, 00:23:54.752 "nvme_adminq_poll_period_us": 10000, 00:23:54.752 "nvme_ioq_poll_period_us": 0, 00:23:54.752 "io_queue_requests": 0, 00:23:54.752 "delay_cmd_submit": true, 00:23:54.752 "transport_retry_count": 4, 00:23:54.752 "bdev_retry_count": 3, 00:23:54.752 "transport_ack_timeout": 0, 00:23:54.752 "ctrlr_loss_timeout_sec": 0, 00:23:54.752 "reconnect_delay_sec": 0, 00:23:54.752 "fast_io_fail_timeout_sec": 0, 00:23:54.752 "disable_auto_failback": false, 00:23:54.752 "generate_uuids": false, 00:23:54.752 "transport_tos": 0, 00:23:54.752 "nvme_error_stat": false, 00:23:54.752 "rdma_srq_size": 0, 00:23:54.752 "io_path_stat": false, 00:23:54.752 "allow_accel_sequence": false, 00:23:54.752 "rdma_max_cq_size": 0, 00:23:54.752 "rdma_cm_event_timeout_ms": 0, 00:23:54.752 "dhchap_digests": [ 00:23:54.752 "sha256", 00:23:54.752 "sha384", 00:23:54.752 "sha512" 00:23:54.752 ], 00:23:54.752 "dhchap_dhgroups": [ 00:23:54.752 "null", 00:23:54.752 "ffdhe2048", 00:23:54.752 "ffdhe3072", 00:23:54.752 "ffdhe4096", 00:23:54.752 "ffdhe6144", 00:23:54.752 "ffdhe8192" 00:23:54.752 ] 00:23:54.752 } 00:23:54.752 }, 00:23:54.752 { 00:23:54.752 "method": "bdev_nvme_set_hotplug", 00:23:54.752 "params": { 00:23:54.752 "period_us": 100000, 00:23:54.752 "enable": false 00:23:54.752 } 00:23:54.752 }, 00:23:54.752 { 00:23:54.752 "method": "bdev_malloc_create", 00:23:54.752 "params": { 00:23:54.752 "name": "malloc0", 00:23:54.752 "num_blocks": 8192, 00:23:54.752 "block_size": 4096, 00:23:54.752 "physical_block_size": 4096, 00:23:54.752 "uuid": "6187d85a-cc3b-4750-8379-ecd8f37b2e49", 00:23:54.752 "optimal_io_boundary": 0, 00:23:54.752 "md_size": 0, 00:23:54.752 "dif_type": 0, 00:23:54.752 "dif_is_head_of_md": false, 00:23:54.752 "dif_pi_format": 0 00:23:54.752 } 00:23:54.752 }, 00:23:54.752 { 00:23:54.752 "method": "bdev_wait_for_examine" 00:23:54.752 } 00:23:54.752 ] 00:23:54.752 }, 00:23:54.752 { 00:23:54.752 "subsystem": "nbd", 00:23:54.752 "config": [] 00:23:54.752 }, 00:23:54.752 { 00:23:54.752 "subsystem": "scheduler", 00:23:54.752 "config": [ 00:23:54.752 { 00:23:54.752 "method": "framework_set_scheduler", 00:23:54.752 "params": { 00:23:54.752 "name": "static" 00:23:54.752 } 00:23:54.752 } 00:23:54.752 ] 00:23:54.752 }, 00:23:54.752 { 00:23:54.752 "subsystem": "nvmf", 00:23:54.752 "config": [ 00:23:54.752 { 00:23:54.752 "method": "nvmf_set_config", 00:23:54.752 "params": { 00:23:54.752 "discovery_filter": "match_any", 00:23:54.752 "admin_cmd_passthru": { 00:23:54.752 "identify_ctrlr": false 00:23:54.752 }, 00:23:54.752 "dhchap_digests": [ 00:23:54.752 "sha256", 00:23:54.752 "sha384", 00:23:54.752 "sha512" 00:23:54.752 ], 00:23:54.752 "dhchap_dhgroups": [ 00:23:54.752 "null", 00:23:54.752 "ffdhe2048", 00:23:54.752 "ffdhe3072", 00:23:54.752 "ffdhe4096", 00:23:54.752 "ffdhe6144", 00:23:54.752 "ffdhe8192" 00:23:54.752 ] 00:23:54.752 } 00:23:54.752 }, 00:23:54.752 { 00:23:54.752 "method": "nvmf_set_max_subsystems", 00:23:54.752 "params": { 00:23:54.752 "max_subsystems": 1024 00:23:54.752 } 00:23:54.752 }, 00:23:54.752 { 00:23:54.752 "method": "nvmf_set_crdt", 00:23:54.752 "params": { 00:23:54.752 "crdt1": 0, 00:23:54.752 "crdt2": 0, 00:23:54.752 "crdt3": 0 00:23:54.752 } 00:23:54.752 }, 00:23:54.752 { 00:23:54.752 "method": "nvmf_create_transport", 00:23:54.752 "params": 
{ 00:23:54.752 "trtype": "TCP", 00:23:54.752 "max_queue_depth": 128, 00:23:54.752 "max_io_qpairs_per_ctrlr": 127, 00:23:54.752 "in_capsule_data_size": 4096, 00:23:54.752 "max_io_size": 131072, 00:23:54.752 "io_unit_size": 131072, 00:23:54.752 "max_aq_depth": 128, 00:23:54.752 "num_shared_buffers": 511, 00:23:54.752 "buf_cache_size": 4294967295, 00:23:54.752 "dif_insert_or_strip": false, 00:23:54.752 "zcopy": false, 00:23:54.752 "c2h_success": false, 00:23:54.752 "sock_priority": 0, 00:23:54.752 "abort_timeout_sec": 1, 00:23:54.752 "ack_timeout": 0, 00:23:54.752 "data_wr_pool_size": 0 00:23:54.752 } 00:23:54.752 }, 00:23:54.752 { 00:23:54.752 "method": "nvmf_create_subsystem", 00:23:54.752 "params": { 00:23:54.752 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:54.752 "allow_any_host": false, 00:23:54.752 "serial_number": "00000000000000000000", 00:23:54.752 "model_number": "SPDK bdev Controller", 00:23:54.752 "max_namespaces": 32, 00:23:54.752 "min_cntlid": 1, 00:23:54.752 "max_cntlid": 65519, 00:23:54.752 "ana_reporting": false 00:23:54.752 } 00:23:54.752 }, 00:23:54.752 { 00:23:54.752 "method": "nvmf_subsystem_add_host", 00:23:54.752 "params": { 00:23:54.752 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:54.752 "host": "nqn.2016-06.io.spdk:host1", 00:23:54.752 "psk": "key0" 00:23:54.752 } 00:23:54.752 }, 00:23:54.752 { 00:23:54.752 "method": "nvmf_subsystem_add_ns", 00:23:54.752 "params": { 00:23:54.752 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:54.752 "namespace": { 00:23:54.752 "nsid": 1, 00:23:54.752 "bdev_name": "malloc0", 00:23:54.752 "nguid": "6187D85ACC3B47508379ECD8F37B2E49", 00:23:54.752 "uuid": "6187d85a-cc3b-4750-8379-ecd8f37b2e49", 00:23:54.752 "no_auto_visible": false 00:23:54.752 } 00:23:54.752 } 00:23:54.752 }, 00:23:54.752 { 00:23:54.752 "method": "nvmf_subsystem_add_listener", 00:23:54.752 "params": { 00:23:54.752 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:54.752 "listen_address": { 00:23:54.752 "trtype": "TCP", 00:23:54.752 "adrfam": "IPv4", 00:23:54.752 "traddr": "10.0.0.2", 00:23:54.752 "trsvcid": "4420" 00:23:54.752 }, 00:23:54.752 "secure_channel": false, 00:23:54.752 "sock_impl": "ssl" 00:23:54.752 } 00:23:54.752 } 00:23:54.752 ] 00:23:54.752 } 00:23:54.752 ] 00:23:54.752 }' 00:23:54.752 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:55.011 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:23:55.011 "subsystems": [ 00:23:55.011 { 00:23:55.011 "subsystem": "keyring", 00:23:55.011 "config": [ 00:23:55.011 { 00:23:55.011 "method": "keyring_file_add_key", 00:23:55.011 "params": { 00:23:55.011 "name": "key0", 00:23:55.011 "path": "/tmp/tmp.dDr9YiJYYW" 00:23:55.011 } 00:23:55.011 } 00:23:55.011 ] 00:23:55.011 }, 00:23:55.011 { 00:23:55.011 "subsystem": "iobuf", 00:23:55.011 "config": [ 00:23:55.011 { 00:23:55.011 "method": "iobuf_set_options", 00:23:55.011 "params": { 00:23:55.011 "small_pool_count": 8192, 00:23:55.011 "large_pool_count": 1024, 00:23:55.011 "small_bufsize": 8192, 00:23:55.011 "large_bufsize": 135168, 00:23:55.011 "enable_numa": false 00:23:55.011 } 00:23:55.011 } 00:23:55.011 ] 00:23:55.011 }, 00:23:55.011 { 00:23:55.011 "subsystem": "sock", 00:23:55.011 "config": [ 00:23:55.011 { 00:23:55.011 "method": "sock_set_default_impl", 00:23:55.011 "params": { 00:23:55.011 "impl_name": "posix" 00:23:55.011 } 00:23:55.011 }, 00:23:55.011 { 00:23:55.011 "method": "sock_impl_set_options", 00:23:55.011 
"params": { 00:23:55.011 "impl_name": "ssl", 00:23:55.011 "recv_buf_size": 4096, 00:23:55.011 "send_buf_size": 4096, 00:23:55.011 "enable_recv_pipe": true, 00:23:55.011 "enable_quickack": false, 00:23:55.011 "enable_placement_id": 0, 00:23:55.011 "enable_zerocopy_send_server": true, 00:23:55.011 "enable_zerocopy_send_client": false, 00:23:55.011 "zerocopy_threshold": 0, 00:23:55.011 "tls_version": 0, 00:23:55.011 "enable_ktls": false 00:23:55.011 } 00:23:55.011 }, 00:23:55.011 { 00:23:55.011 "method": "sock_impl_set_options", 00:23:55.011 "params": { 00:23:55.011 "impl_name": "posix", 00:23:55.011 "recv_buf_size": 2097152, 00:23:55.011 "send_buf_size": 2097152, 00:23:55.011 "enable_recv_pipe": true, 00:23:55.011 "enable_quickack": false, 00:23:55.011 "enable_placement_id": 0, 00:23:55.011 "enable_zerocopy_send_server": true, 00:23:55.011 "enable_zerocopy_send_client": false, 00:23:55.011 "zerocopy_threshold": 0, 00:23:55.011 "tls_version": 0, 00:23:55.011 "enable_ktls": false 00:23:55.011 } 00:23:55.011 } 00:23:55.011 ] 00:23:55.011 }, 00:23:55.011 { 00:23:55.011 "subsystem": "vmd", 00:23:55.011 "config": [] 00:23:55.011 }, 00:23:55.011 { 00:23:55.011 "subsystem": "accel", 00:23:55.011 "config": [ 00:23:55.011 { 00:23:55.011 "method": "accel_set_options", 00:23:55.011 "params": { 00:23:55.011 "small_cache_size": 128, 00:23:55.011 "large_cache_size": 16, 00:23:55.011 "task_count": 2048, 00:23:55.011 "sequence_count": 2048, 00:23:55.011 "buf_count": 2048 00:23:55.011 } 00:23:55.011 } 00:23:55.011 ] 00:23:55.011 }, 00:23:55.011 { 00:23:55.011 "subsystem": "bdev", 00:23:55.011 "config": [ 00:23:55.011 { 00:23:55.011 "method": "bdev_set_options", 00:23:55.011 "params": { 00:23:55.011 "bdev_io_pool_size": 65535, 00:23:55.011 "bdev_io_cache_size": 256, 00:23:55.011 "bdev_auto_examine": true, 00:23:55.011 "iobuf_small_cache_size": 128, 00:23:55.011 "iobuf_large_cache_size": 16 00:23:55.011 } 00:23:55.011 }, 00:23:55.011 { 00:23:55.011 "method": "bdev_raid_set_options", 00:23:55.011 "params": { 00:23:55.011 "process_window_size_kb": 1024, 00:23:55.011 "process_max_bandwidth_mb_sec": 0 00:23:55.011 } 00:23:55.011 }, 00:23:55.011 { 00:23:55.011 "method": "bdev_iscsi_set_options", 00:23:55.011 "params": { 00:23:55.011 "timeout_sec": 30 00:23:55.011 } 00:23:55.011 }, 00:23:55.011 { 00:23:55.011 "method": "bdev_nvme_set_options", 00:23:55.011 "params": { 00:23:55.011 "action_on_timeout": "none", 00:23:55.011 "timeout_us": 0, 00:23:55.011 "timeout_admin_us": 0, 00:23:55.011 "keep_alive_timeout_ms": 10000, 00:23:55.011 "arbitration_burst": 0, 00:23:55.011 "low_priority_weight": 0, 00:23:55.011 "medium_priority_weight": 0, 00:23:55.011 "high_priority_weight": 0, 00:23:55.011 "nvme_adminq_poll_period_us": 10000, 00:23:55.011 "nvme_ioq_poll_period_us": 0, 00:23:55.011 "io_queue_requests": 512, 00:23:55.011 "delay_cmd_submit": true, 00:23:55.011 "transport_retry_count": 4, 00:23:55.011 "bdev_retry_count": 3, 00:23:55.011 "transport_ack_timeout": 0, 00:23:55.011 "ctrlr_loss_timeout_sec": 0, 00:23:55.011 "reconnect_delay_sec": 0, 00:23:55.011 "fast_io_fail_timeout_sec": 0, 00:23:55.011 "disable_auto_failback": false, 00:23:55.011 "generate_uuids": false, 00:23:55.011 "transport_tos": 0, 00:23:55.011 "nvme_error_stat": false, 00:23:55.011 "rdma_srq_size": 0, 00:23:55.011 "io_path_stat": false, 00:23:55.011 "allow_accel_sequence": false, 00:23:55.011 "rdma_max_cq_size": 0, 00:23:55.011 "rdma_cm_event_timeout_ms": 0, 00:23:55.011 "dhchap_digests": [ 00:23:55.011 "sha256", 00:23:55.011 "sha384", 00:23:55.011 
"sha512" 00:23:55.011 ], 00:23:55.011 "dhchap_dhgroups": [ 00:23:55.011 "null", 00:23:55.011 "ffdhe2048", 00:23:55.011 "ffdhe3072", 00:23:55.011 "ffdhe4096", 00:23:55.011 "ffdhe6144", 00:23:55.011 "ffdhe8192" 00:23:55.011 ] 00:23:55.011 } 00:23:55.011 }, 00:23:55.011 { 00:23:55.011 "method": "bdev_nvme_attach_controller", 00:23:55.011 "params": { 00:23:55.011 "name": "nvme0", 00:23:55.011 "trtype": "TCP", 00:23:55.011 "adrfam": "IPv4", 00:23:55.011 "traddr": "10.0.0.2", 00:23:55.011 "trsvcid": "4420", 00:23:55.011 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:55.011 "prchk_reftag": false, 00:23:55.011 "prchk_guard": false, 00:23:55.011 "ctrlr_loss_timeout_sec": 0, 00:23:55.011 "reconnect_delay_sec": 0, 00:23:55.011 "fast_io_fail_timeout_sec": 0, 00:23:55.011 "psk": "key0", 00:23:55.011 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:55.011 "hdgst": false, 00:23:55.011 "ddgst": false, 00:23:55.011 "multipath": "multipath" 00:23:55.011 } 00:23:55.011 }, 00:23:55.011 { 00:23:55.011 "method": "bdev_nvme_set_hotplug", 00:23:55.011 "params": { 00:23:55.011 "period_us": 100000, 00:23:55.011 "enable": false 00:23:55.011 } 00:23:55.011 }, 00:23:55.011 { 00:23:55.011 "method": "bdev_enable_histogram", 00:23:55.011 "params": { 00:23:55.011 "name": "nvme0n1", 00:23:55.011 "enable": true 00:23:55.011 } 00:23:55.011 }, 00:23:55.011 { 00:23:55.011 "method": "bdev_wait_for_examine" 00:23:55.011 } 00:23:55.011 ] 00:23:55.011 }, 00:23:55.011 { 00:23:55.011 "subsystem": "nbd", 00:23:55.011 "config": [] 00:23:55.011 } 00:23:55.011 ] 00:23:55.011 }' 00:23:55.012 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 3861076 00:23:55.012 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3861076 ']' 00:23:55.012 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3861076 00:23:55.012 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:55.012 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:55.012 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3861076 00:23:55.012 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:55.012 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:55.012 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3861076' 00:23:55.012 killing process with pid 3861076 00:23:55.012 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3861076 00:23:55.012 Received shutdown signal, test time was about 1.000000 seconds 00:23:55.012 00:23:55.012 Latency(us) 00:23:55.012 [2024-11-02T10:35:55.414Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:55.012 [2024-11-02T10:35:55.414Z] =================================================================================================================== 00:23:55.012 [2024-11-02T10:35:55.414Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:55.012 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3861076 00:23:55.270 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 3861024 00:23:55.270 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3861024 
']' 00:23:55.270 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3861024 00:23:55.270 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:55.270 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:55.270 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3861024 00:23:55.270 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:55.270 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:55.270 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3861024' 00:23:55.270 killing process with pid 3861024 00:23:55.270 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3861024 00:23:55.270 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3861024 00:23:55.528 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:23:55.528 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:23:55.528 "subsystems": [ 00:23:55.528 { 00:23:55.528 "subsystem": "keyring", 00:23:55.528 "config": [ 00:23:55.528 { 00:23:55.528 "method": "keyring_file_add_key", 00:23:55.528 "params": { 00:23:55.528 "name": "key0", 00:23:55.528 "path": "/tmp/tmp.dDr9YiJYYW" 00:23:55.528 } 00:23:55.528 } 00:23:55.528 ] 00:23:55.528 }, 00:23:55.528 { 00:23:55.528 "subsystem": "iobuf", 00:23:55.528 "config": [ 00:23:55.528 { 00:23:55.528 "method": "iobuf_set_options", 00:23:55.528 "params": { 00:23:55.528 "small_pool_count": 8192, 00:23:55.528 "large_pool_count": 1024, 00:23:55.528 "small_bufsize": 8192, 00:23:55.528 "large_bufsize": 135168, 00:23:55.528 "enable_numa": false 00:23:55.528 } 00:23:55.528 } 00:23:55.528 ] 00:23:55.528 }, 00:23:55.528 { 00:23:55.528 "subsystem": "sock", 00:23:55.528 "config": [ 00:23:55.528 { 00:23:55.528 "method": "sock_set_default_impl", 00:23:55.528 "params": { 00:23:55.528 "impl_name": "posix" 00:23:55.528 } 00:23:55.528 }, 00:23:55.528 { 00:23:55.528 "method": "sock_impl_set_options", 00:23:55.528 "params": { 00:23:55.528 "impl_name": "ssl", 00:23:55.528 "recv_buf_size": 4096, 00:23:55.528 "send_buf_size": 4096, 00:23:55.528 "enable_recv_pipe": true, 00:23:55.528 "enable_quickack": false, 00:23:55.528 "enable_placement_id": 0, 00:23:55.528 "enable_zerocopy_send_server": true, 00:23:55.528 "enable_zerocopy_send_client": false, 00:23:55.528 "zerocopy_threshold": 0, 00:23:55.528 "tls_version": 0, 00:23:55.528 "enable_ktls": false 00:23:55.528 } 00:23:55.528 }, 00:23:55.528 { 00:23:55.528 "method": "sock_impl_set_options", 00:23:55.528 "params": { 00:23:55.528 "impl_name": "posix", 00:23:55.528 "recv_buf_size": 2097152, 00:23:55.528 "send_buf_size": 2097152, 00:23:55.528 "enable_recv_pipe": true, 00:23:55.528 "enable_quickack": false, 00:23:55.528 "enable_placement_id": 0, 00:23:55.528 "enable_zerocopy_send_server": true, 00:23:55.528 "enable_zerocopy_send_client": false, 00:23:55.528 "zerocopy_threshold": 0, 00:23:55.528 "tls_version": 0, 00:23:55.528 "enable_ktls": false 00:23:55.528 } 00:23:55.528 } 00:23:55.528 ] 00:23:55.528 }, 00:23:55.528 { 00:23:55.528 "subsystem": "vmd", 00:23:55.528 "config": [] 00:23:55.528 }, 00:23:55.528 { 00:23:55.528 "subsystem": "accel", 00:23:55.528 
"config": [ 00:23:55.528 { 00:23:55.528 "method": "accel_set_options", 00:23:55.528 "params": { 00:23:55.528 "small_cache_size": 128, 00:23:55.528 "large_cache_size": 16, 00:23:55.528 "task_count": 2048, 00:23:55.528 "sequence_count": 2048, 00:23:55.528 "buf_count": 2048 00:23:55.528 } 00:23:55.528 } 00:23:55.528 ] 00:23:55.528 }, 00:23:55.528 { 00:23:55.528 "subsystem": "bdev", 00:23:55.528 "config": [ 00:23:55.528 { 00:23:55.528 "method": "bdev_set_options", 00:23:55.528 "params": { 00:23:55.528 "bdev_io_pool_size": 65535, 00:23:55.528 "bdev_io_cache_size": 256, 00:23:55.528 "bdev_auto_examine": true, 00:23:55.528 "iobuf_small_cache_size": 128, 00:23:55.528 "iobuf_large_cache_size": 16 00:23:55.528 } 00:23:55.528 }, 00:23:55.528 { 00:23:55.528 "method": "bdev_raid_set_options", 00:23:55.528 "params": { 00:23:55.528 "process_window_size_kb": 1024, 00:23:55.528 "process_max_bandwidth_mb_sec": 0 00:23:55.528 } 00:23:55.528 }, 00:23:55.528 { 00:23:55.528 "method": "bdev_iscsi_set_options", 00:23:55.528 "params": { 00:23:55.528 "timeout_sec": 30 00:23:55.528 } 00:23:55.528 }, 00:23:55.528 { 00:23:55.528 "method": "bdev_nvme_set_options", 00:23:55.528 "params": { 00:23:55.528 "action_on_timeout": "none", 00:23:55.528 "timeout_us": 0, 00:23:55.528 "timeout_admin_us": 0, 00:23:55.528 "keep_alive_timeout_ms": 10000, 00:23:55.528 "arbitration_burst": 0, 00:23:55.528 "low_priority_weight": 0, 00:23:55.528 "medium_priority_weight": 0, 00:23:55.528 "high_priority_weight": 0, 00:23:55.528 "nvme_adminq_poll_period_us": 10000, 00:23:55.528 "nvme_ioq_poll_period_us": 0, 00:23:55.528 "io_queue_requests": 0, 00:23:55.528 "delay_cmd_submit": true, 00:23:55.528 "transport_retry_count": 4, 00:23:55.528 "bdev_retry_count": 3, 00:23:55.528 "transport_ack_timeout": 0, 00:23:55.528 "ctrlr_loss_timeout_sec": 0, 00:23:55.528 "reconnect_delay_sec": 0, 00:23:55.528 "fast_io_fail_timeout_sec": 0, 00:23:55.528 "disable_auto_failback": false, 00:23:55.528 "generate_uuids": false, 00:23:55.528 "transport_tos": 0, 00:23:55.528 "nvme_error_stat": false, 00:23:55.528 "rdma_srq_size": 0, 00:23:55.528 "io_path_stat": false, 00:23:55.528 "allow_accel_sequence": false, 00:23:55.528 "rdma_max_cq_size": 0, 00:23:55.528 "rdma_cm_event_timeout_ms": 0, 00:23:55.528 "dhchap_digests": [ 00:23:55.528 "sha256", 00:23:55.528 "sha384", 00:23:55.528 "sha512" 00:23:55.528 ], 00:23:55.528 "dhchap_dhgroups": [ 00:23:55.528 "null", 00:23:55.528 "ffdhe2048", 00:23:55.528 "ffdhe3072", 00:23:55.528 "ffdhe4096", 00:23:55.528 "ffdhe6144", 00:23:55.528 "ffdhe8192" 00:23:55.528 ] 00:23:55.528 } 00:23:55.528 }, 00:23:55.528 { 00:23:55.528 "method": "bdev_nvme_set_hotplug", 00:23:55.528 "params": { 00:23:55.528 "period_us": 100000, 00:23:55.528 "enable": false 00:23:55.528 } 00:23:55.528 }, 00:23:55.528 { 00:23:55.528 "method": "bdev_malloc_create", 00:23:55.528 "params": { 00:23:55.528 "name": "malloc0", 00:23:55.528 "num_blocks": 8192, 00:23:55.528 "block_size": 4096, 00:23:55.528 "physical_block_size": 4096, 00:23:55.528 "uuid": "6187d85a-cc3b-4750-8379-ecd8f37b2e49", 00:23:55.528 "optimal_io_boundary": 0, 00:23:55.528 "md_size": 0, 00:23:55.528 "dif_type": 0, 00:23:55.528 "dif_is_head_of_md": false, 00:23:55.528 "dif_pi_format": 0 00:23:55.528 } 00:23:55.528 }, 00:23:55.528 { 00:23:55.528 "method": "bdev_wait_for_examine" 00:23:55.528 } 00:23:55.528 ] 00:23:55.528 }, 00:23:55.528 { 00:23:55.528 "subsystem": "nbd", 00:23:55.528 "config": [] 00:23:55.528 }, 00:23:55.528 { 00:23:55.528 "subsystem": "scheduler", 00:23:55.528 "config": [ 00:23:55.528 { 
00:23:55.528 "method": "framework_set_scheduler", 00:23:55.528 "params": { 00:23:55.528 "name": "static" 00:23:55.528 } 00:23:55.528 } 00:23:55.528 ] 00:23:55.528 }, 00:23:55.528 { 00:23:55.528 "subsystem": "nvmf", 00:23:55.528 "config": [ 00:23:55.528 { 00:23:55.528 "method": "nvmf_set_config", 00:23:55.528 "params": { 00:23:55.528 "discovery_filter": "match_any", 00:23:55.528 "admin_cmd_passthru": { 00:23:55.528 "identify_ctrlr": false 00:23:55.528 }, 00:23:55.528 "dhchap_digests": [ 00:23:55.528 "sha256", 00:23:55.528 "sha384", 00:23:55.528 "sha512" 00:23:55.528 ], 00:23:55.528 "dhchap_dhgroups": [ 00:23:55.528 "null", 00:23:55.528 "ffdhe2048", 00:23:55.528 "ffdhe3072", 00:23:55.528 "ffdhe4096", 00:23:55.528 "ffdhe6144", 00:23:55.528 "ffdhe8192" 00:23:55.528 ] 00:23:55.528 } 00:23:55.528 }, 00:23:55.528 { 00:23:55.528 "method": "nvmf_set_max_subsystems", 00:23:55.528 "params": { 00:23:55.528 "max_subsystems": 1024 00:23:55.528 } 00:23:55.528 }, 00:23:55.529 { 00:23:55.529 "method": "nvmf_set_crdt", 00:23:55.529 "params": { 00:23:55.529 "crdt1": 0, 00:23:55.529 "crdt2": 0, 00:23:55.529 "crdt3": 0 00:23:55.529 } 00:23:55.529 }, 00:23:55.529 { 00:23:55.529 "method": "nvmf_create_transport", 00:23:55.529 "params": { 00:23:55.529 "trtype": "TCP", 00:23:55.529 "max_queue_depth": 128, 00:23:55.529 "max_io_qpairs_per_ctrlr": 127, 00:23:55.529 "in_capsule_data_size": 4096, 00:23:55.529 "max_io_size": 131072, 00:23:55.529 "io_unit_size": 131072, 00:23:55.529 "max_aq_depth": 128, 00:23:55.529 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:55.529 "num_shared_buffers": 511, 00:23:55.529 "buf_cache_size": 4294967295, 00:23:55.529 "dif_insert_or_strip": false, 00:23:55.529 "zcopy": false, 00:23:55.529 "c2h_success": false, 00:23:55.529 "sock_priority": 0, 00:23:55.529 "abort_timeout_sec": 1, 00:23:55.529 "ack_timeout": 0, 00:23:55.529 "data_wr_pool_size": 0 00:23:55.529 } 00:23:55.529 }, 00:23:55.529 { 00:23:55.529 "method": "nvmf_create_subsystem", 00:23:55.529 "params": { 00:23:55.529 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:55.529 "allow_any_host": false, 00:23:55.529 "serial_number": "00000000000000000000", 00:23:55.529 "model_number": "SPDK bdev Controller", 00:23:55.529 "max_namespaces": 32, 00:23:55.529 "min_cntlid": 1, 00:23:55.529 "max_cntlid": 65519, 00:23:55.529 "ana_reporting": false 00:23:55.529 } 00:23:55.529 }, 00:23:55.529 { 00:23:55.529 "method": "nvmf_subsystem_add_host", 00:23:55.529 "params": { 00:23:55.529 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:55.529 "host": "nqn.2016-06.io.spdk:host1", 00:23:55.529 "psk": "key0" 00:23:55.529 } 00:23:55.529 }, 00:23:55.529 { 00:23:55.529 "method": "nvmf_subsystem_add_ns", 00:23:55.529 "params": { 00:23:55.529 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:55.529 "namespace": { 00:23:55.529 "nsid": 1, 00:23:55.529 "bdev_name": "malloc0", 00:23:55.529 "nguid": "6187D85ACC3B47508379ECD8F37B2E49", 00:23:55.529 "uuid": "6187d85a-cc3b-4750-8379-ecd8f37b2e49", 00:23:55.529 "no_auto_visible": false 00:23:55.529 } 00:23:55.529 } 00:23:55.529 }, 00:23:55.529 { 00:23:55.529 "method": "nvmf_subsystem_add_listener", 00:23:55.529 "params": { 00:23:55.529 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:55.529 "listen_address": { 00:23:55.529 "trtype": "TCP", 00:23:55.529 "adrfam": "IPv4", 00:23:55.529 "traddr": "10.0.0.2", 00:23:55.529 "trsvcid": "4420" 00:23:55.529 }, 00:23:55.529 "secure_channel": false, 00:23:55.529 "sock_impl": "ssl" 00:23:55.529 } 00:23:55.529 } 00:23:55.529 ] 00:23:55.529 } 
00:23:55.529 ] 00:23:55.529 }' 00:23:55.529 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:55.529 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:55.529 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3861485 00:23:55.529 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:23:55.529 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3861485 00:23:55.529 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3861485 ']' 00:23:55.529 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:55.529 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:55.529 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:55.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:55.529 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:55.529 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:55.529 [2024-11-02 11:35:55.825324] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:23:55.529 [2024-11-02 11:35:55.825422] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:55.529 [2024-11-02 11:35:55.896951] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:55.787 [2024-11-02 11:35:55.941423] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:55.787 [2024-11-02 11:35:55.941473] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:55.787 [2024-11-02 11:35:55.941496] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:55.787 [2024-11-02 11:35:55.941515] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:55.787 [2024-11-02 11:35:55.941526] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
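The JSON passed via -c /dev/fd/62 above is the save_config dump captured from the previous target, so the keyring entry, the TLS listener and the malloc0 namespace come back without re-issuing the individual RPCs. A minimal sketch of the same round-trip using a temporary file instead of a file descriptor (paths relative to the SPDK tree, file name hypothetical):

scripts/rpc.py save_config > /tmp/tgt_config.json          # dump the live keyring/sock/bdev/nvmf configuration
build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /tmp/tgt_config.json  # start a fresh target preloaded with that configuration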
00:23:55.787 [2024-11-02 11:35:55.942113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:55.787 [2024-11-02 11:35:56.185311] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:56.044 [2024-11-02 11:35:56.217311] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:56.044 [2024-11-02 11:35:56.217543] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:56.609 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:56.609 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:56.609 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:56.609 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:56.609 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:56.609 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:56.609 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=3861607 00:23:56.609 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 3861607 /var/tmp/bdevperf.sock 00:23:56.609 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3861607 ']' 00:23:56.609 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:56.609 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:23:56.609 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:56.609 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:56.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
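Once /var/tmp/bdevperf.sock is listening, the remaining steps all go over that RPC socket: register the PSK in bdevperf's keyring, attach a TLS-protected NVMe/TCP controller, and kick off the queued verify job. A condensed sketch of that sequence, reusing the arguments visible in the log but with paths shortened to be relative to the SPDK tree:

scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.dDr9YiJYYW
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests   # runs the -q 128 -o 4k -w verify job defined at launch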
00:23:56.609 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:23:56.609 "subsystems": [ 00:23:56.609 { 00:23:56.609 "subsystem": "keyring", 00:23:56.609 "config": [ 00:23:56.609 { 00:23:56.609 "method": "keyring_file_add_key", 00:23:56.609 "params": { 00:23:56.609 "name": "key0", 00:23:56.609 "path": "/tmp/tmp.dDr9YiJYYW" 00:23:56.609 } 00:23:56.609 } 00:23:56.609 ] 00:23:56.609 }, 00:23:56.609 { 00:23:56.609 "subsystem": "iobuf", 00:23:56.609 "config": [ 00:23:56.609 { 00:23:56.609 "method": "iobuf_set_options", 00:23:56.609 "params": { 00:23:56.609 "small_pool_count": 8192, 00:23:56.609 "large_pool_count": 1024, 00:23:56.609 "small_bufsize": 8192, 00:23:56.609 "large_bufsize": 135168, 00:23:56.609 "enable_numa": false 00:23:56.609 } 00:23:56.609 } 00:23:56.609 ] 00:23:56.609 }, 00:23:56.609 { 00:23:56.609 "subsystem": "sock", 00:23:56.609 "config": [ 00:23:56.609 { 00:23:56.609 "method": "sock_set_default_impl", 00:23:56.609 "params": { 00:23:56.609 "impl_name": "posix" 00:23:56.609 } 00:23:56.609 }, 00:23:56.609 { 00:23:56.609 "method": "sock_impl_set_options", 00:23:56.609 "params": { 00:23:56.609 "impl_name": "ssl", 00:23:56.609 "recv_buf_size": 4096, 00:23:56.609 "send_buf_size": 4096, 00:23:56.609 "enable_recv_pipe": true, 00:23:56.609 "enable_quickack": false, 00:23:56.609 "enable_placement_id": 0, 00:23:56.609 "enable_zerocopy_send_server": true, 00:23:56.609 "enable_zerocopy_send_client": false, 00:23:56.609 "zerocopy_threshold": 0, 00:23:56.609 "tls_version": 0, 00:23:56.609 "enable_ktls": false 00:23:56.609 } 00:23:56.609 }, 00:23:56.609 { 00:23:56.609 "method": "sock_impl_set_options", 00:23:56.609 "params": { 00:23:56.609 "impl_name": "posix", 00:23:56.609 "recv_buf_size": 2097152, 00:23:56.609 "send_buf_size": 2097152, 00:23:56.609 "enable_recv_pipe": true, 00:23:56.609 "enable_quickack": false, 00:23:56.609 "enable_placement_id": 0, 00:23:56.609 "enable_zerocopy_send_server": true, 00:23:56.609 "enable_zerocopy_send_client": false, 00:23:56.609 "zerocopy_threshold": 0, 00:23:56.609 "tls_version": 0, 00:23:56.609 "enable_ktls": false 00:23:56.609 } 00:23:56.609 } 00:23:56.609 ] 00:23:56.609 }, 00:23:56.609 { 00:23:56.609 "subsystem": "vmd", 00:23:56.609 "config": [] 00:23:56.609 }, 00:23:56.609 { 00:23:56.609 "subsystem": "accel", 00:23:56.609 "config": [ 00:23:56.609 { 00:23:56.609 "method": "accel_set_options", 00:23:56.609 "params": { 00:23:56.609 "small_cache_size": 128, 00:23:56.609 "large_cache_size": 16, 00:23:56.609 "task_count": 2048, 00:23:56.609 "sequence_count": 2048, 00:23:56.609 "buf_count": 2048 00:23:56.609 } 00:23:56.609 } 00:23:56.609 ] 00:23:56.609 }, 00:23:56.609 { 00:23:56.609 "subsystem": "bdev", 00:23:56.609 "config": [ 00:23:56.609 { 00:23:56.609 "method": "bdev_set_options", 00:23:56.609 "params": { 00:23:56.609 "bdev_io_pool_size": 65535, 00:23:56.609 "bdev_io_cache_size": 256, 00:23:56.609 "bdev_auto_examine": true, 00:23:56.609 "iobuf_small_cache_size": 128, 00:23:56.609 "iobuf_large_cache_size": 16 00:23:56.609 } 00:23:56.609 }, 00:23:56.609 { 00:23:56.609 "method": "bdev_raid_set_options", 00:23:56.609 "params": { 00:23:56.609 "process_window_size_kb": 1024, 00:23:56.609 "process_max_bandwidth_mb_sec": 0 00:23:56.609 } 00:23:56.609 }, 00:23:56.609 { 00:23:56.609 "method": "bdev_iscsi_set_options", 00:23:56.609 "params": { 00:23:56.609 "timeout_sec": 30 00:23:56.609 } 00:23:56.609 }, 00:23:56.609 { 00:23:56.609 "method": "bdev_nvme_set_options", 00:23:56.609 "params": { 00:23:56.609 "action_on_timeout": "none", 
00:23:56.609 "timeout_us": 0, 00:23:56.609 "timeout_admin_us": 0, 00:23:56.609 "keep_alive_timeout_ms": 10000, 00:23:56.609 "arbitration_burst": 0, 00:23:56.609 "low_priority_weight": 0, 00:23:56.609 "medium_priority_weight": 0, 00:23:56.609 "high_priority_weight": 0, 00:23:56.609 "nvme_adminq_poll_period_us": 10000, 00:23:56.609 "nvme_ioq_poll_period_us": 0, 00:23:56.609 "io_queue_requests": 512, 00:23:56.609 "delay_cmd_submit": true, 00:23:56.609 "transport_retry_count": 4, 00:23:56.609 "bdev_retry_count": 3, 00:23:56.609 "transport_ack_timeout": 0, 00:23:56.609 "ctrlr_loss_timeout_sec": 0, 00:23:56.609 "reconnect_delay_sec": 0, 00:23:56.609 "fast_io_fail_timeout_sec": 0, 00:23:56.609 "disable_auto_failback": false, 00:23:56.609 "generate_uuids": false, 00:23:56.609 "transport_tos": 0, 00:23:56.609 "nvme_error_stat": false, 00:23:56.609 "rdma_srq_size": 0, 00:23:56.609 "io_path_stat": false, 00:23:56.609 "allow_accel_sequence": false, 00:23:56.609 "rdma_max_cq_size": 0, 00:23:56.609 "rdma_cm_event_timeout_ms": 0, 00:23:56.609 "dhchap_digests": [ 00:23:56.609 "sha256", 00:23:56.609 "sha384", 00:23:56.609 "sha512" 00:23:56.609 ], 00:23:56.609 "dhchap_dhgroups": [ 00:23:56.609 "null", 00:23:56.609 "ffdhe2048", 00:23:56.609 "ffdhe3072", 00:23:56.609 "ffdhe4096", 00:23:56.609 "ffdhe6144", 00:23:56.609 "ffdhe8192" 00:23:56.609 ] 00:23:56.609 } 00:23:56.609 }, 00:23:56.609 { 00:23:56.609 "method": "bdev_nvme_attach_controller", 00:23:56.609 "params": { 00:23:56.609 "name": "nvme0", 00:23:56.609 "trtype": "TCP", 00:23:56.609 "adrfam": "IPv4", 00:23:56.609 "traddr": "10.0.0.2", 00:23:56.609 "trsvcid": "4420", 00:23:56.609 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:56.609 "prchk_reftag": false, 00:23:56.609 "prchk_guard": false, 00:23:56.609 "ctrlr_loss_timeout_sec": 0, 00:23:56.609 "reconnect_delay_sec": 0, 00:23:56.609 "fast_io_fail_timeout_sec": 0, 00:23:56.609 "psk": "key0", 00:23:56.609 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:56.609 "hdgst": false, 00:23:56.609 "ddgst": false, 00:23:56.609 "multipath": "multipath" 00:23:56.609 } 00:23:56.609 }, 00:23:56.609 { 00:23:56.609 "method": "bdev_nvme_set_hotplug", 00:23:56.609 "params": { 00:23:56.609 "period_us": 100000, 00:23:56.609 "enable": false 00:23:56.609 } 00:23:56.609 }, 00:23:56.609 { 00:23:56.609 "method": "bdev_enable_histogram", 00:23:56.610 "params": { 00:23:56.610 "name": "nvme0n1", 00:23:56.610 "enable": true 00:23:56.610 } 00:23:56.610 }, 00:23:56.610 { 00:23:56.610 "method": "bdev_wait_for_examine" 00:23:56.610 } 00:23:56.610 ] 00:23:56.610 }, 00:23:56.610 { 00:23:56.610 "subsystem": "nbd", 00:23:56.610 "config": [] 00:23:56.610 } 00:23:56.610 ] 00:23:56.610 }' 00:23:56.610 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:56.610 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:56.610 [2024-11-02 11:35:56.929124] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 
00:23:56.610 [2024-11-02 11:35:56.929226] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3861607 ] 00:23:56.610 [2024-11-02 11:35:57.001371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:56.867 [2024-11-02 11:35:57.051058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:56.867 [2024-11-02 11:35:57.234513] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:57.125 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:57.125 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:57.125 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:57.125 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:23:57.383 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:57.384 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:57.384 Running I/O for 1 seconds... 00:23:58.756 3131.00 IOPS, 12.23 MiB/s 00:23:58.756 Latency(us) 00:23:58.756 [2024-11-02T10:35:59.158Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:58.756 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:58.756 Verification LBA range: start 0x0 length 0x2000 00:23:58.756 nvme0n1 : 1.04 3146.21 12.29 0.00 0.00 40019.06 6553.60 78837.38 00:23:58.756 [2024-11-02T10:35:59.158Z] =================================================================================================================== 00:23:58.756 [2024-11-02T10:35:59.158Z] Total : 3146.21 12.29 0.00 0.00 40019.06 6553.60 78837.38 00:23:58.756 { 00:23:58.756 "results": [ 00:23:58.756 { 00:23:58.756 "job": "nvme0n1", 00:23:58.756 "core_mask": "0x2", 00:23:58.756 "workload": "verify", 00:23:58.756 "status": "finished", 00:23:58.756 "verify_range": { 00:23:58.756 "start": 0, 00:23:58.756 "length": 8192 00:23:58.756 }, 00:23:58.756 "queue_depth": 128, 00:23:58.756 "io_size": 4096, 00:23:58.756 "runtime": 1.03585, 00:23:58.756 "iops": 3146.208427861177, 00:23:58.756 "mibps": 12.289876671332722, 00:23:58.756 "io_failed": 0, 00:23:58.756 "io_timeout": 0, 00:23:58.756 "avg_latency_us": 40019.06278181219, 00:23:58.756 "min_latency_us": 6553.6, 00:23:58.756 "max_latency_us": 78837.38074074074 00:23:58.756 } 00:23:58.756 ], 00:23:58.756 "core_count": 1 00:23:58.756 } 00:23:58.756 11:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:23:58.756 11:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:23:58.756 11:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:23:58.756 11:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # type=--id 00:23:58.756 11:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@811 -- # id=0 00:23:58.756 11:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 
00:23:58.756 11:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:58.756 11:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:23:58.756 11:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:23:58.756 11:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@822 -- # for n in $shm_files 00:23:58.756 11:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:58.756 nvmf_trace.0 00:23:58.756 11:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # return 0 00:23:58.756 11:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 3861607 00:23:58.756 11:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3861607 ']' 00:23:58.756 11:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3861607 00:23:58.756 11:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:58.756 11:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:58.756 11:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3861607 00:23:58.756 11:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:58.756 11:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:58.756 11:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3861607' 00:23:58.756 killing process with pid 3861607 00:23:58.756 11:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3861607 00:23:58.756 Received shutdown signal, test time was about 1.000000 seconds 00:23:58.756 00:23:58.756 Latency(us) 00:23:58.756 [2024-11-02T10:35:59.158Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:58.756 [2024-11-02T10:35:59.158Z] =================================================================================================================== 00:23:58.756 [2024-11-02T10:35:59.158Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:58.756 11:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3861607 00:23:58.756 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:23:58.756 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:58.756 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:23:58.756 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:58.756 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:23:58.756 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:58.756 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:58.756 rmmod nvme_tcp 00:23:58.756 rmmod nvme_fabrics 00:23:58.756 rmmod nvme_keyring 00:23:59.014 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:59.014 11:35:59 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:23:59.014 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:23:59.014 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 3861485 ']' 00:23:59.014 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 3861485 00:23:59.014 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3861485 ']' 00:23:59.014 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3861485 00:23:59.014 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:59.014 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:59.014 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3861485 00:23:59.014 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:59.014 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:59.014 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3861485' 00:23:59.014 killing process with pid 3861485 00:23:59.014 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3861485 00:23:59.014 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3861485 00:23:59.273 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:59.273 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:59.273 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:59.273 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:23:59.273 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:23:59.273 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:59.273 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:23:59.273 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:59.273 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:59.273 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:59.273 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:59.273 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:01.174 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:01.174 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.HcdtHDQnal /tmp/tmp.RAsPyUWqxN /tmp/tmp.dDr9YiJYYW 00:24:01.174 00:24:01.174 real 1m21.923s 00:24:01.174 user 2m9.616s 00:24:01.174 sys 0m27.189s 00:24:01.174 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:01.174 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:01.174 ************************************ 00:24:01.174 END TEST nvmf_tls 
00:24:01.174 ************************************ 00:24:01.174 11:36:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:01.174 11:36:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:01.174 11:36:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:01.174 11:36:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:01.174 ************************************ 00:24:01.174 START TEST nvmf_fips 00:24:01.174 ************************************ 00:24:01.174 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:01.174 * Looking for test storage... 00:24:01.174 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:24:01.174 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:01.174 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lcov --version 00:24:01.174 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:01.433 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:01.433 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:01.433 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:01.433 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:01.433 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:01.433 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:01.433 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:01.433 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:01.433 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:24:01.433 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:24:01.433 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:24:01.433 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:01.433 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:01.433 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:24:01.433 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:01.433 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:01.433 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:01.433 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:01.433 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:01.433 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:01.433 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:01.433 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:24:01.433 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:24:01.433 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:01.433 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:24:01.433 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:24:01.433 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:01.433 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:01.433 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:24:01.433 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:01.433 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:01.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:01.433 --rc genhtml_branch_coverage=1 00:24:01.433 --rc genhtml_function_coverage=1 00:24:01.433 --rc genhtml_legend=1 00:24:01.433 --rc geninfo_all_blocks=1 00:24:01.433 --rc geninfo_unexecuted_blocks=1 00:24:01.433 00:24:01.433 ' 00:24:01.433 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:01.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:01.433 --rc genhtml_branch_coverage=1 00:24:01.433 --rc genhtml_function_coverage=1 00:24:01.433 --rc genhtml_legend=1 00:24:01.433 --rc geninfo_all_blocks=1 00:24:01.433 --rc geninfo_unexecuted_blocks=1 00:24:01.433 00:24:01.434 ' 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:01.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:01.434 --rc genhtml_branch_coverage=1 00:24:01.434 --rc genhtml_function_coverage=1 00:24:01.434 --rc genhtml_legend=1 00:24:01.434 --rc geninfo_all_blocks=1 00:24:01.434 --rc geninfo_unexecuted_blocks=1 00:24:01.434 00:24:01.434 ' 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:01.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:01.434 --rc genhtml_branch_coverage=1 00:24:01.434 --rc genhtml_function_coverage=1 00:24:01.434 --rc genhtml_legend=1 00:24:01.434 --rc geninfo_all_blocks=1 00:24:01.434 --rc geninfo_unexecuted_blocks=1 00:24:01.434 00:24:01.434 ' 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:01.434 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:24:01.434 11:36:01 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:24:01.434 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:24:01.435 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:24:01.435 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:24:01.435 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:24:01.435 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:24:01.435 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:24:01.435 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:24:01.435 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:24:01.435 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:24:01.435 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:24:01.435 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:24:01.435 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:24:01.435 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:24:01.435 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:24:01.435 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:24:01.435 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:24:01.435 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:24:01.435 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:24:01.435 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:24:01.435 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:24:01.435 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:24:01.435 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:24:01.435 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:01.435 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:24:01.435 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:01.435 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:24:01.435 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:01.435 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:24:01.435 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:24:01.435 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:24:01.435 Error setting digest 00:24:01.435 4002FABC237F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:24:01.435 4002FABC237F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:24:01.435 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:24:01.435 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:01.435 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:01.435 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:01.435 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:24:01.435 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:01.435 
11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:01.435 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:01.435 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:01.435 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:01.435 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:01.435 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:01.435 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:01.435 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:01.435 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:01.435 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:24:01.435 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:03.333 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:03.333 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:24:03.333 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:03.333 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:03.333 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:03.333 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:03.333 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:03.333 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:24:03.333 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:03.333 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:24:03.333 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:24:03.333 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:24:03.333 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:24:03.333 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:24:03.333 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:24:03.333 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:03.333 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:03.333 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:03.333 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:03.333 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:03.333 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:03.333 11:36:03 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:03.333 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:03.333 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:03.333 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:03.333 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:03.333 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:03.333 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:03.333 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:03.333 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:03.333 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:03.333 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:03.333 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:03.333 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:03.333 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:03.333 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:03.333 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:03.333 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:03.333 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:03.333 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:03.333 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:03.333 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:03.333 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:03.333 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:03.333 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:03.333 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:03.333 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:03.333 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:03.333 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:03.333 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:03.333 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:03.333 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:03.333 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:03.333 11:36:03 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:03.333 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:03.333 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:03.333 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:03.333 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:03.333 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:03.333 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:03.333 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:03.333 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:03.333 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:03.333 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:03.333 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:03.333 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:03.333 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:03.333 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:03.333 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:03.333 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:03.333 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:03.333 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:03.333 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:03.334 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:24:03.334 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:03.334 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:03.334 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:03.334 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:03.334 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:03.334 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:03.334 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:03.334 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:03.334 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:03.334 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:03.334 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:03.334 11:36:03 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:03.334 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:03.334 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:03.334 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:03.334 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:03.592 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:03.592 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:03.592 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:03.592 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:03.592 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:03.592 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:03.592 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:03.592 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:03.592 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:03.592 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:03.592 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:03.592 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.373 ms 00:24:03.592 00:24:03.592 --- 10.0.0.2 ping statistics --- 00:24:03.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:03.592 rtt min/avg/max/mdev = 0.373/0.373/0.373/0.000 ms 00:24:03.592 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:03.592 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:03.592 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:24:03.592 00:24:03.592 --- 10.0.0.1 ping statistics --- 00:24:03.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:03.592 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:24:03.592 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:03.592 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:24:03.592 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:03.592 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:03.592 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:03.592 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:03.592 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:03.592 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:03.592 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:03.592 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:24:03.592 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:03.592 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:03.592 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:03.592 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=3863983 00:24:03.592 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:03.592 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 3863983 00:24:03.592 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 3863983 ']' 00:24:03.592 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:03.592 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:03.592 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:03.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:03.592 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:03.592 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:03.592 [2024-11-02 11:36:03.967953] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 
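The namespace plumbing traced above is how common.sh builds the phy TCP test bed: one port of the E810 pair (cvl_0_0) is moved into a private network namespace and becomes the target side at 10.0.0.2, while its sibling (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, with an iptables accept rule for port 4420 and a ping in each direction as a sanity check before nvmf_tgt is started inside the namespace. Condensed into a standalone sketch below; the interface names are the ones enumerated on this rig and will differ on other hardware, and the relative nvmf_tgt path stands in for the absolute Jenkins workspace path.

# target NIC into its own netns, initiator NIC stays in the root netns (illustrative)
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# run the target inside the namespace so 10.0.0.2:4420 is reachable from the root netns
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -m 0x2 &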
00:24:03.592 [2024-11-02 11:36:03.968045] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:03.850 [2024-11-02 11:36:04.042669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:03.850 [2024-11-02 11:36:04.089327] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:03.850 [2024-11-02 11:36:04.089396] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:03.850 [2024-11-02 11:36:04.089417] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:03.850 [2024-11-02 11:36:04.089429] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:03.850 [2024-11-02 11:36:04.089440] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:03.850 [2024-11-02 11:36:04.090037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:03.850 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:03.850 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:24:03.850 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:03.850 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:03.850 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:03.850 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:03.850 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:24:03.850 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:03.850 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:24:03.850 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.OIK 00:24:03.850 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:03.850 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.OIK 00:24:03.850 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.OIK 00:24:03.850 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.OIK 00:24:03.850 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:04.414 [2024-11-02 11:36:04.509724] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:04.414 [2024-11-02 11:36:04.525729] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:04.414 [2024-11-02 11:36:04.525991] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:04.414 malloc0 00:24:04.414 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:04.414 11:36:04 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=3864018 00:24:04.414 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:04.414 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 3864018 /var/tmp/bdevperf.sock 00:24:04.414 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 3864018 ']' 00:24:04.414 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:04.414 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:04.414 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:04.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:04.414 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:04.414 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:04.414 [2024-11-02 11:36:04.662292] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:24:04.414 [2024-11-02 11:36:04.662380] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3864018 ] 00:24:04.414 [2024-11-02 11:36:04.732961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:04.414 [2024-11-02 11:36:04.783835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:04.671 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:04.671 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:24:04.671 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.OIK 00:24:04.927 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:05.184 [2024-11-02 11:36:05.426656] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:05.184 TLSTESTn1 00:24:05.184 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:05.440 Running I/O for 10 seconds... 
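Before the ten-second run starts, fips.sh wires up the initiator with two RPCs against the bdevperf application socket: the PSK file written earlier (the NVMeTLSkey-1:01:... interchange string, chmod 0600) is registered as key0, and the controller is attached with --psk key0 so the nvme-tcp connection negotiates TLS. A recap of that sequence is sketched below; the temp key path is the one mktemp produced on this run and would differ elsewhere, and the relative script paths stand in for the absolute Jenkins workspace paths.

# register the PSK with the running bdevperf app and attach the TLS-protected controller
./scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.OIK
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
# drive I/O through the resulting TLSTESTn1 bdev, exactly as the trace above does
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests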
00:24:07.422 3175.00 IOPS, 12.40 MiB/s [2024-11-02T10:36:08.757Z] 3271.50 IOPS, 12.78 MiB/s [2024-11-02T10:36:09.690Z] 3330.67 IOPS, 13.01 MiB/s [2024-11-02T10:36:11.064Z] 3378.00 IOPS, 13.20 MiB/s [2024-11-02T10:36:11.996Z] 3351.80 IOPS, 13.09 MiB/s [2024-11-02T10:36:12.928Z] 3365.33 IOPS, 13.15 MiB/s [2024-11-02T10:36:13.861Z] 3374.71 IOPS, 13.18 MiB/s [2024-11-02T10:36:14.794Z] 3390.62 IOPS, 13.24 MiB/s [2024-11-02T10:36:15.728Z] 3403.00 IOPS, 13.29 MiB/s [2024-11-02T10:36:15.728Z] 3405.70 IOPS, 13.30 MiB/s 00:24:15.326 Latency(us) 00:24:15.326 [2024-11-02T10:36:15.728Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:15.326 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:15.326 Verification LBA range: start 0x0 length 0x2000 00:24:15.326 TLSTESTn1 : 10.04 3404.91 13.30 0.00 0.00 37500.83 8738.13 53205.52 00:24:15.326 [2024-11-02T10:36:15.728Z] =================================================================================================================== 00:24:15.326 [2024-11-02T10:36:15.728Z] Total : 3404.91 13.30 0.00 0.00 37500.83 8738.13 53205.52 00:24:15.326 { 00:24:15.326 "results": [ 00:24:15.326 { 00:24:15.326 "job": "TLSTESTn1", 00:24:15.326 "core_mask": "0x4", 00:24:15.326 "workload": "verify", 00:24:15.326 "status": "finished", 00:24:15.326 "verify_range": { 00:24:15.326 "start": 0, 00:24:15.326 "length": 8192 00:24:15.326 }, 00:24:15.326 "queue_depth": 128, 00:24:15.326 "io_size": 4096, 00:24:15.326 "runtime": 10.039317, 00:24:15.326 "iops": 3404.912903935596, 00:24:15.326 "mibps": 13.300441030998423, 00:24:15.326 "io_failed": 0, 00:24:15.326 "io_timeout": 0, 00:24:15.326 "avg_latency_us": 37500.8289197251, 00:24:15.326 "min_latency_us": 8738.133333333333, 00:24:15.326 "max_latency_us": 53205.52296296296 00:24:15.326 } 00:24:15.326 ], 00:24:15.326 "core_count": 1 00:24:15.326 } 00:24:15.326 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:24:15.326 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:24:15.326 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # type=--id 00:24:15.326 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@811 -- # id=0 00:24:15.326 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:24:15.326 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:15.326 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:24:15.326 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:24:15.326 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@822 -- # for n in $shm_files 00:24:15.326 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:15.326 nvmf_trace.0 00:24:15.584 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # return 0 00:24:15.584 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3864018 00:24:15.584 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 3864018 ']' 00:24:15.584 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@956 -- # kill -0 3864018 00:24:15.584 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:24:15.584 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:15.584 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3864018 00:24:15.584 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:24:15.584 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:24:15.584 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3864018' 00:24:15.584 killing process with pid 3864018 00:24:15.584 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 3864018 00:24:15.584 Received shutdown signal, test time was about 10.000000 seconds 00:24:15.584 00:24:15.584 Latency(us) 00:24:15.584 [2024-11-02T10:36:15.986Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:15.584 [2024-11-02T10:36:15.986Z] =================================================================================================================== 00:24:15.584 [2024-11-02T10:36:15.986Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:15.584 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 3864018 00:24:15.584 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:24:15.584 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:15.584 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:24:15.584 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:15.585 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:24:15.585 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:15.585 11:36:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:15.843 rmmod nvme_tcp 00:24:15.843 rmmod nvme_fabrics 00:24:15.843 rmmod nvme_keyring 00:24:15.843 11:36:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:15.843 11:36:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:24:15.843 11:36:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:24:15.843 11:36:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 3863983 ']' 00:24:15.843 11:36:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 3863983 00:24:15.843 11:36:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 3863983 ']' 00:24:15.843 11:36:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # kill -0 3863983 00:24:15.843 11:36:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:24:15.843 11:36:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:15.843 11:36:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3863983 00:24:15.843 11:36:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:24:15.843 11:36:16 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:24:15.843 11:36:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3863983' 00:24:15.843 killing process with pid 3863983 00:24:15.843 11:36:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 3863983 00:24:15.843 11:36:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 3863983 00:24:16.103 11:36:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:16.103 11:36:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:16.103 11:36:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:16.103 11:36:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:24:16.103 11:36:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:24:16.103 11:36:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:16.103 11:36:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:24:16.103 11:36:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:16.103 11:36:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:16.103 11:36:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:16.103 11:36:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:16.103 11:36:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:18.006 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:18.006 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.OIK 00:24:18.006 00:24:18.006 real 0m16.830s 00:24:18.006 user 0m21.906s 00:24:18.006 sys 0m5.880s 00:24:18.006 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:18.006 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:18.006 ************************************ 00:24:18.006 END TEST nvmf_fips 00:24:18.006 ************************************ 00:24:18.006 11:36:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:18.006 11:36:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:18.006 11:36:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:18.006 11:36:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:18.006 ************************************ 00:24:18.006 START TEST nvmf_control_msg_list 00:24:18.006 ************************************ 00:24:18.006 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:18.266 * Looking for test storage... 
00:24:18.266 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:18.266 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:18.266 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lcov --version 00:24:18.266 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:18.266 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:18.266 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:18.266 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:18.266 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:18.266 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:24:18.266 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:24:18.266 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:24:18.266 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:24:18.266 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:24:18.266 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:24:18.266 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:24:18.266 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:18.266 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:24:18.266 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:24:18.266 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:18.266 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:18.266 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:24:18.266 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:24:18.266 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:18.266 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:24:18.266 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:24:18.266 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:24:18.266 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:24:18.266 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:18.266 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:24:18.266 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:24:18.266 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:18.266 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:18.266 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:24:18.266 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:18.266 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:18.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:18.266 --rc genhtml_branch_coverage=1 00:24:18.266 --rc genhtml_function_coverage=1 00:24:18.266 --rc genhtml_legend=1 00:24:18.266 --rc geninfo_all_blocks=1 00:24:18.266 --rc geninfo_unexecuted_blocks=1 00:24:18.266 00:24:18.266 ' 00:24:18.266 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:18.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:18.266 --rc genhtml_branch_coverage=1 00:24:18.266 --rc genhtml_function_coverage=1 00:24:18.266 --rc genhtml_legend=1 00:24:18.266 --rc geninfo_all_blocks=1 00:24:18.266 --rc geninfo_unexecuted_blocks=1 00:24:18.266 00:24:18.266 ' 00:24:18.266 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:18.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:18.266 --rc genhtml_branch_coverage=1 00:24:18.266 --rc genhtml_function_coverage=1 00:24:18.266 --rc genhtml_legend=1 00:24:18.266 --rc geninfo_all_blocks=1 00:24:18.266 --rc geninfo_unexecuted_blocks=1 00:24:18.266 00:24:18.266 ' 00:24:18.266 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:18.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:18.266 --rc genhtml_branch_coverage=1 00:24:18.266 --rc genhtml_function_coverage=1 00:24:18.266 --rc genhtml_legend=1 00:24:18.266 --rc geninfo_all_blocks=1 00:24:18.266 --rc geninfo_unexecuted_blocks=1 00:24:18.266 00:24:18.266 ' 00:24:18.266 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:18.266 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:24:18.266 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:18.266 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:18.266 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:18.266 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:18.266 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:18.266 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:18.266 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:18.266 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:18.266 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:18.266 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:18.266 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:18.266 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:18.266 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:18.266 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:18.266 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:18.266 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:18.266 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:18.266 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:24:18.266 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:18.266 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:18.266 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:18.266 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.267 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.267 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.267 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:24:18.267 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.267 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:24:18.267 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:18.267 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:18.267 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:18.267 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:18.267 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:18.267 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:18.267 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:18.267 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:18.267 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:18.267 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:18.267 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:24:18.267 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:18.267 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:18.267 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:18.267 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:18.267 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:18.267 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:18.267 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:18.267 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:18.267 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:18.267 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:18.267 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:24:18.267 11:36:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:20.801 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:20.801 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:24:20.801 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:20.801 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:20.801 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:20.801 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:20.801 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:20.801 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:24:20.801 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:20.801 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:24:20.801 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:24:20.801 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:24:20.801 11:36:20 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:24:20.801 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:24:20.801 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:24:20.801 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:20.801 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:20.801 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:20.801 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:20.801 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:20.801 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:20.801 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:20.801 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:20.801 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:20.801 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:20.801 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:20.801 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:20.801 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:20.801 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:20.801 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:20.801 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:20.801 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:20.801 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:20.801 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:20.801 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:20.802 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:20.802 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:20.802 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:20.802 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:20.802 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:20.802 11:36:20 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:20.802 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:20.802 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:20.802 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:20.802 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:20.802 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:20.802 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:20.802 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:20.802 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:20.802 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:20.802 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:20.802 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:20.802 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:20.802 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:20.802 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:20.802 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:20.802 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:20.802 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:20.802 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:20.802 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:20.802 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:20.802 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:20.802 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:20.802 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:20.802 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:20.802 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:20.802 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:20.802 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:20.802 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:20.802 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:20.802 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:20.802 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:20.802 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:20.802 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:24:20.802 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:20.802 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:20.802 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:20.802 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:20.802 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:20.802 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:20.802 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:20.802 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:20.802 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:20.802 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:20.802 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:20.802 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:20.802 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:20.802 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:20.802 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:20.802 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:20.802 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:20.802 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:20.802 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:20.802 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:20.802 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:20.802 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:20.802 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:20.802 11:36:20 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:20.802 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:20.802 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:20.802 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:20.802 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.320 ms 00:24:20.802 00:24:20.802 --- 10.0.0.2 ping statistics --- 00:24:20.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:20.802 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:24:20.802 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:20.802 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:20.802 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:24:20.802 00:24:20.802 --- 10.0.0.1 ping statistics --- 00:24:20.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:20.802 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:24:20.802 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:20.802 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:24:20.802 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:20.802 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:20.802 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:20.802 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:20.802 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:20.802 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:20.802 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:20.802 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:24:20.802 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:20.802 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:20.802 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:20.802 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=3867903 00:24:20.802 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:20.802 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 3867903 00:24:20.802 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@833 -- # '[' -z 3867903 ']' 00:24:20.802 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:20.802 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:20.802 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:20.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:20.802 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:20.802 11:36:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:20.802 [2024-11-02 11:36:20.883707] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:24:20.802 [2024-11-02 11:36:20.883790] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:20.802 [2024-11-02 11:36:20.960376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:20.802 [2024-11-02 11:36:21.009791] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:20.802 [2024-11-02 11:36:21.009844] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:20.802 [2024-11-02 11:36:21.009872] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:20.802 [2024-11-02 11:36:21.009883] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:20.802 [2024-11-02 11:36:21.009892] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
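The nvmftestinit block above splits the two ports of one NIC between a network namespace (target side) and the root namespace (initiator side) before the target application is started inside that namespace. A condensed sketch of that wiring, using the interface names and addresses from this run (they will differ on other hosts), is as follows:

  # Sketch of the target/initiator split recorded above (interface names are host-specific).
  TGT_IF=cvl_0_0; INI_IF=cvl_0_1; NS=cvl_0_0_ns_spdk

  ip -4 addr flush $TGT_IF; ip -4 addr flush $INI_IF
  ip netns add $NS
  ip link set $TGT_IF netns $NS                        # target NIC lives inside the namespace
  ip addr add 10.0.0.1/24 dev $INI_IF                  # initiator side, root namespace
  ip netns exec $NS ip addr add 10.0.0.2/24 dev $TGT_IF
  ip link set $INI_IF up
  ip netns exec $NS ip link set $TGT_IF up
  ip netns exec $NS ip link set lo up
  iptables -I INPUT 1 -i $INI_IF -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                   # root ns -> target ns
  ip netns exec $NS ping -c 1 10.0.0.1                 # target ns -> root ns

  # The target itself then runs inside the namespace, as in the nvmfappstart call above:
  ip netns exec $NS ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &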
00:24:20.802 [2024-11-02 11:36:21.010447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:20.803 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:20.803 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@866 -- # return 0 00:24:20.803 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:20.803 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:20.803 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:20.803 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:20.803 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:24:20.803 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:20.803 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:24:20.803 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:20.803 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:20.803 [2024-11-02 11:36:21.160523] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:20.803 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:20.803 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:24:20.803 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:20.803 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:20.803 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:20.803 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:24:20.803 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:20.803 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:20.803 Malloc0 00:24:20.803 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:20.803 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:24:20.803 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:20.803 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:20.803 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:20.803 11:36:21 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:20.803 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:20.803 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:20.803 [2024-11-02 11:36:21.200783] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:21.061 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:21.061 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=3867929 00:24:21.061 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:21.061 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=3867930 00:24:21.062 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:21.062 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=3867931 00:24:21.062 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 3867929 00:24:21.062 11:36:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:21.062 [2024-11-02 11:36:21.269523] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:21.062 [2024-11-02 11:36:21.269840] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:21.062 [2024-11-02 11:36:21.279285] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:21.997 Initializing NVMe Controllers 00:24:21.997 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:21.997 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:24:21.997 Initialization complete. Launching workers. 
00:24:21.997 ======================================================== 00:24:21.997 Latency(us) 00:24:21.997 Device Information : IOPS MiB/s Average min max 00:24:21.997 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 2819.00 11.01 354.34 309.07 781.44 00:24:21.997 ======================================================== 00:24:21.997 Total : 2819.00 11.01 354.34 309.07 781.44 00:24:21.997 00:24:21.997 Initializing NVMe Controllers 00:24:21.997 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:21.997 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:24:21.997 Initialization complete. Launching workers. 00:24:21.997 ======================================================== 00:24:21.997 Latency(us) 00:24:21.997 Device Information : IOPS MiB/s Average min max 00:24:21.997 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3439.00 13.43 290.42 224.04 602.50 00:24:21.997 ======================================================== 00:24:21.997 Total : 3439.00 13.43 290.42 224.04 602.50 00:24:21.997 00:24:21.997 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 3867930 00:24:21.997 Initializing NVMe Controllers 00:24:21.997 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:21.997 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:24:21.997 Initialization complete. Launching workers. 00:24:21.997 ======================================================== 00:24:21.997 Latency(us) 00:24:21.997 Device Information : IOPS MiB/s Average min max 00:24:21.997 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 24.00 0.09 41807.43 40894.42 42016.26 00:24:21.997 ======================================================== 00:24:21.997 Total : 24.00 0.09 41807.43 40894.42 42016.26 00:24:21.997 00:24:21.997 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 3867931 00:24:21.997 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:24:21.997 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:24:21.997 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:21.997 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:24:21.997 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:21.997 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:24:21.997 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:21.997 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:21.997 rmmod nvme_tcp 00:24:21.997 rmmod nvme_fabrics 00:24:21.997 rmmod nvme_keyring 00:24:21.997 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:21.997 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:24:21.997 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:24:21.997 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # 
'[' -n 3867903 ']' 00:24:21.997 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 3867903 00:24:21.997 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@952 -- # '[' -z 3867903 ']' 00:24:21.997 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # kill -0 3867903 00:24:21.997 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # uname 00:24:21.997 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:21.997 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3867903 00:24:22.256 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:22.256 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:22.256 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3867903' 00:24:22.256 killing process with pid 3867903 00:24:22.256 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@971 -- # kill 3867903 00:24:22.256 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@976 -- # wait 3867903 00:24:22.256 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:22.256 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:22.256 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:22.256 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:24:22.256 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:24:22.256 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:22.256 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:24:22.515 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:22.515 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:22.515 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:22.515 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:22.515 11:36:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:24.422 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:24.422 00:24:24.422 real 0m6.325s 00:24:24.422 user 0m5.360s 00:24:24.422 sys 0m2.595s 00:24:24.422 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:24.422 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:24.422 ************************************ 00:24:24.422 END TEST nvmf_control_msg_list 00:24:24.422 ************************************ 
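With the control-message test finished, the configuration it exercised can be summarized from the RPCs recorded above: a TCP transport created with a single control message buffer (--control-msg-num 1) and a small in-capsule data size, one subsystem backed by a malloc bdev, and three single-queue-depth perf clients driving the same listener at once. A condensed, hedged sketch, assuming the default RPC socket of the target started earlier:

  # Sketch of the target configuration exercised by control_msg_list.sh above.
  RPC=./scripts/rpc.py    # assumes the default /var/tmp/spdk.sock RPC socket

  $RPC nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
  $RPC nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a        # allow any host
  $RPC bdev_malloc_create -b Malloc0 32 512                       # 32 MB bdev, 512-byte blocks
  $RPC nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

  # Three perf clients (cores 0x2, 0x4 and 0x8 in the log) then target the same listener:
  ./build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

The very low throughput reported for one of the clients above (24 I/Os in a second, ~41 ms average latency) presumably reflects the clients contending for that single control-message buffer, which is the condition this test sets up.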
00:24:24.422 11:36:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:24:24.422 11:36:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:24.422 11:36:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:24.422 11:36:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:24.422 ************************************ 00:24:24.422 START TEST nvmf_wait_for_buf 00:24:24.422 ************************************ 00:24:24.422 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:24:24.422 * Looking for test storage... 00:24:24.422 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:24.422 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:24.422 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lcov --version 00:24:24.422 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:24.681 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:24.681 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:24.681 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:24.681 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:24.681 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:24:24.681 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:24:24.681 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:24:24.681 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:24:24.681 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:24:24.681 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:24:24.681 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:24:24.681 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:24.681 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:24:24.681 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:24:24.681 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:24.681 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:24.681 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:24:24.681 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:24:24.681 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:24.681 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:24:24.681 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:24.681 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:24:24.681 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:24:24.681 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:24.681 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:24:24.681 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:24.681 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:24.681 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:24.681 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:24:24.681 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:24.681 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:24.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:24.681 --rc genhtml_branch_coverage=1 00:24:24.681 --rc genhtml_function_coverage=1 00:24:24.681 --rc genhtml_legend=1 00:24:24.681 --rc geninfo_all_blocks=1 00:24:24.681 --rc geninfo_unexecuted_blocks=1 00:24:24.681 00:24:24.681 ' 00:24:24.681 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:24.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:24.681 --rc genhtml_branch_coverage=1 00:24:24.681 --rc genhtml_function_coverage=1 00:24:24.681 --rc genhtml_legend=1 00:24:24.681 --rc geninfo_all_blocks=1 00:24:24.681 --rc geninfo_unexecuted_blocks=1 00:24:24.681 00:24:24.681 ' 00:24:24.681 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:24.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:24.681 --rc genhtml_branch_coverage=1 00:24:24.681 --rc genhtml_function_coverage=1 00:24:24.681 --rc genhtml_legend=1 00:24:24.681 --rc geninfo_all_blocks=1 00:24:24.681 --rc geninfo_unexecuted_blocks=1 00:24:24.681 00:24:24.681 ' 00:24:24.681 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:24.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:24.681 --rc genhtml_branch_coverage=1 00:24:24.681 --rc genhtml_function_coverage=1 00:24:24.681 --rc genhtml_legend=1 00:24:24.681 --rc geninfo_all_blocks=1 00:24:24.681 --rc geninfo_unexecuted_blocks=1 00:24:24.681 00:24:24.681 ' 00:24:24.681 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:24.681 11:36:24 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:24:24.681 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:24.681 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:24.681 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:24.681 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:24.681 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:24.681 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:24.682 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:24.682 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:24.682 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:24.682 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:24.682 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:24.682 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:24.682 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:24.682 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:24.682 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:24.682 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:24.682 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:24.682 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:24:24.682 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:24.682 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:24.682 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:24.682 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.682 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.682 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.682 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:24:24.682 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.682 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:24:24.682 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:24.682 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:24.682 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:24.682 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:24.682 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:24.682 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:24.682 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:24.682 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:24.682 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:24.682 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:24.682 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:24:24.682 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:24:24.682 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:24.682 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:24.682 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:24.682 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:24.682 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:24.682 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:24.682 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:24.682 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:24.682 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:24.682 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:24:24.682 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:26.586 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:26.586 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:24:26.586 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:26.586 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:26.586 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:26.586 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:26.586 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:26.586 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:24:26.586 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:26.586 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:24:26.586 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:24:26.586 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:24:26.586 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:24:26.586 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:24:26.586 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:24:26.586 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:26.586 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:26.586 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:26.586 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:26.586 
11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:26.586 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:26.586 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:26.586 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:26.586 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:26.586 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:26.586 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:26.586 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:26.586 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:26.586 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:26.586 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:26.586 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:26.586 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:26.586 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:26.586 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:26.586 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:26.586 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:26.586 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:26.586 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:26.586 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:26.586 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:26.586 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:26.586 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:26.586 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:26.586 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:26.586 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:26.586 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:26.586 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:26.586 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:26.586 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:24:26.586 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:26.586 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:26.586 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:26.586 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:26.586 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:26.586 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:26.586 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:26.586 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:26.586 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:26.586 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:26.586 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:26.586 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:26.586 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:26.586 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:26.586 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:26.586 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:26.586 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:26.586 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:26.586 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:26.586 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:26.586 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:26.586 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:26.586 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:26.586 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:26.586 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:24:26.586 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:26.587 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:26.587 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:26.587 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:26.587 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:26.587 11:36:26 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:26.587 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:26.587 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:26.587 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:26.587 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:26.587 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:26.587 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:26.587 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:26.587 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:26.587 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:26.587 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:26.587 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:26.587 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:26.587 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:26.587 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:26.587 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:26.587 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:26.845 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:26.845 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:26.845 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:26.845 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:26.845 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:26.845 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.277 ms 00:24:26.845 00:24:26.845 --- 10.0.0.2 ping statistics --- 00:24:26.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:26.845 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:24:26.845 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:26.845 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:26.845 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:24:26.845 00:24:26.845 --- 10.0.0.1 ping statistics --- 00:24:26.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:26.845 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:24:26.845 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:26.845 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:24:26.845 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:26.845 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:26.845 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:26.845 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:26.845 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:26.845 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:26.845 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:26.845 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:24:26.845 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:26.845 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:26.845 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:26.845 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=3870003 00:24:26.845 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:26.845 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 3870003 00:24:26.845 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@833 -- # '[' -z 3870003 ']' 00:24:26.845 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:26.845 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:26.845 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:26.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:26.845 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:26.845 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:26.845 [2024-11-02 11:36:27.187414] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 
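The nvmf_tcp_init steps traced above build the topology for nvmf_wait_for_buf: the two ports of the detected e810 NIC (presumably cabled back-to-back on this rig) are split so that cvl_0_0 lives in a namespace as the target at 10.0.0.2 while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, an iptables ACCEPT is inserted for NVMe/TCP port 4420, and a ping in each direction verifies the path; nvmf_tgt is then launched inside the namespace with --wait-for-rpc (its startup banner appears just above). Condensed from the trace, with addresses, interface names, and the iptables comment tag exactly as logged:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                    # root namespace -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target namespace -> root namespace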
00:24:26.845 [2024-11-02 11:36:27.187487] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:27.104 [2024-11-02 11:36:27.259729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:27.104 [2024-11-02 11:36:27.306840] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:27.104 [2024-11-02 11:36:27.306905] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:27.104 [2024-11-02 11:36:27.306934] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:27.104 [2024-11-02 11:36:27.306945] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:27.104 [2024-11-02 11:36:27.306954] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:27.104 [2024-11-02 11:36:27.307569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:27.104 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:27.104 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@866 -- # return 0 00:24:27.104 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:27.104 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:27.104 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:27.104 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:27.104 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:24:27.104 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:27.104 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:24:27.104 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.104 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:27.104 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.104 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:24:27.104 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.104 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:27.104 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.104 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:24:27.104 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.104 11:36:27 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:27.364 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.364 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:24:27.364 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.364 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:27.364 Malloc0 00:24:27.364 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.364 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:24:27.364 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.364 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:27.364 [2024-11-02 11:36:27.560183] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:27.364 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.364 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:24:27.364 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.364 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:27.364 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.364 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:24:27.364 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.364 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:27.364 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.364 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:27.364 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.364 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:27.364 [2024-11-02 11:36:27.584401] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:27.364 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.364 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:27.364 [2024-11-02 11:36:27.683368] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:29.264 Initializing NVMe Controllers 00:24:29.264 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:29.264 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:24:29.264 Initialization complete. Launching workers. 00:24:29.264 ======================================================== 00:24:29.264 Latency(us) 00:24:29.264 Device Information : IOPS MiB/s Average min max 00:24:29.264 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 124.00 15.50 33562.33 8012.20 71821.18 00:24:29.264 ======================================================== 00:24:29.264 Total : 124.00 15.50 33562.33 8012.20 71821.18 00:24:29.264 00:24:29.264 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:24:29.264 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.264 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:29.264 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:24:29.264 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.264 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=1958 00:24:29.264 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 1958 -eq 0 ]] 00:24:29.264 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:24:29.264 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:24:29.264 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:29.264 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:24:29.264 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:29.264 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:24:29.265 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:29.265 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:29.265 rmmod nvme_tcp 00:24:29.265 rmmod nvme_fabrics 00:24:29.265 rmmod nvme_keyring 00:24:29.265 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:29.265 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:24:29.265 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:24:29.265 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 3870003 ']' 00:24:29.265 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 3870003 00:24:29.265 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@952 -- # '[' -z 3870003 ']' 00:24:29.265 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # kill -0 3870003 00:24:29.265 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@957 -- # uname 00:24:29.265 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:29.265 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3870003 00:24:29.265 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:29.265 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:29.265 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3870003' 00:24:29.265 killing process with pid 3870003 00:24:29.265 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@971 -- # kill 3870003 00:24:29.265 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@976 -- # wait 3870003 00:24:29.265 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:29.265 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:29.265 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:29.265 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:24:29.265 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:24:29.265 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:29.265 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:24:29.265 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:29.265 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:29.265 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:29.265 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:29.265 11:36:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:31.799 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:31.799 00:24:31.799 real 0m6.827s 00:24:31.799 user 0m3.214s 00:24:31.799 sys 0m1.956s 00:24:31.799 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:31.799 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:31.799 ************************************ 00:24:31.799 END TEST nvmf_wait_for_buf 00:24:31.799 ************************************ 00:24:31.799 11:36:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:24:31.799 11:36:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:31.799 11:36:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:31.799 11:36:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:31.799 11:36:31 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:31.799 ************************************ 00:24:31.799 START TEST nvmf_fuzz 00:24:31.799 ************************************ 00:24:31.799 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:31.799 * Looking for test storage... 00:24:31.799 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:31.799 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:31.799 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:24:31.799 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:31.799 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:31.799 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:31.799 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:31.799 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:31.799 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:24:31.799 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:24:31.799 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:24:31.799 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:24:31.799 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:24:31.799 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:24:31.799 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:24:31.799 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:31.799 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:24:31.799 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:24:31.799 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:31.799 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:31.799 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:24:31.799 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:24:31.799 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:31.799 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:24:31.799 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:24:31.799 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:24:31.799 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:24:31.799 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:31.799 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:24:31.799 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:24:31.799 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:31.799 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:31.799 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:24:31.799 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:31.799 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:31.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.799 --rc genhtml_branch_coverage=1 00:24:31.799 --rc genhtml_function_coverage=1 00:24:31.799 --rc genhtml_legend=1 00:24:31.799 --rc geninfo_all_blocks=1 00:24:31.799 --rc geninfo_unexecuted_blocks=1 00:24:31.799 00:24:31.799 ' 00:24:31.799 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:31.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.799 --rc genhtml_branch_coverage=1 00:24:31.799 --rc genhtml_function_coverage=1 00:24:31.799 --rc genhtml_legend=1 00:24:31.799 --rc geninfo_all_blocks=1 00:24:31.799 --rc geninfo_unexecuted_blocks=1 00:24:31.799 00:24:31.799 ' 00:24:31.799 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:31.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.799 --rc genhtml_branch_coverage=1 00:24:31.799 --rc genhtml_function_coverage=1 00:24:31.799 --rc genhtml_legend=1 00:24:31.799 --rc geninfo_all_blocks=1 00:24:31.799 --rc geninfo_unexecuted_blocks=1 00:24:31.799 00:24:31.799 ' 00:24:31.799 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:31.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.799 --rc genhtml_branch_coverage=1 00:24:31.799 --rc genhtml_function_coverage=1 00:24:31.799 --rc genhtml_legend=1 00:24:31.799 --rc geninfo_all_blocks=1 00:24:31.799 --rc geninfo_unexecuted_blocks=1 00:24:31.799 00:24:31.799 ' 00:24:31.799 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:31.799 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:24:31.799 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:24:31.799 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:31.799 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:31.799 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:31.800 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:31.800 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:31.800 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:31.800 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:31.800 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:31.800 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:31.800 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:31.800 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:31.800 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:31.800 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:31.800 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:31.800 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:31.800 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:31.800 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:24:31.800 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:31.800 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:31.800 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:31.800 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.800 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.800 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.800 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:24:31.800 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.800 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:24:31.800 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:31.800 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:31.800 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:31.800 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:31.800 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:31.800 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:31.800 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:31.800 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:31.800 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:31.800 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:31.800 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:24:31.800 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:31.800 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # trap nvmftestfini 
SIGINT SIGTERM EXIT 00:24:31.800 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:31.800 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:31.800 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:31.800 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:31.800 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:31.800 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:31.800 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:31.800 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:31.800 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@309 -- # xtrace_disable 00:24:31.800 11:36:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # pci_devs=() 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # net_devs=() 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # e810=() 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # local -ga e810 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # x722=() 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # local -ga x722 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # mlx=() 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # local -ga mlx 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:33.703 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:33.703 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:33.703 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:33.703 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # is_hw=yes 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:33.703 11:36:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:33.703 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:33.703 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:33.703 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:33.703 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:33.962 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:33.962 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:33.962 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:33.962 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:33.962 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:33.962 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.303 ms 00:24:33.962 00:24:33.962 --- 10.0.0.2 ping statistics --- 00:24:33.962 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:33.962 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:24:33.962 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:33.962 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:33.962 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:24:33.962 00:24:33.962 --- 10.0.0.1 ping statistics --- 00:24:33.962 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:33.962 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:24:33.962 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:33.962 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # return 0 00:24:33.962 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:33.962 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:33.962 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:33.962 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:33.962 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:33.962 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:33.962 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:33.962 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=3872221 00:24:33.962 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:33.962 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:33.962 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 3872221 00:24:33.962 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@833 -- # '[' -z 3872221 ']' 00:24:33.962 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:33.962 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:33.962 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:33.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
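For readability, the nvmf_tcp_init bring-up traced above amounts to roughly the following sequence. This is a condensed sketch, not the script itself; the interface names cvl_0_0/cvl_0_1, the namespace cvl_0_0_ns_spdk, and the 10.0.0.x addresses are taken from this particular run and will differ on other hosts.

# move the target-side port (cvl_0_0) into its own network namespace
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
# initiator side (cvl_0_1) stays in the default namespace
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# allow NVMe/TCP traffic to port 4420, then verify reachability both ways
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1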
00:24:33.962 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:33.962 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:34.221 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:34.221 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@866 -- # return 0 00:24:34.221 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:34.221 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.221 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:34.221 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.221 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:24:34.221 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.221 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:34.221 Malloc0 00:24:34.221 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.221 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:34.221 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.221 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:34.221 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.221 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:34.221 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.221 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:34.221 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.221 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:34.221 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.221 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:34.221 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.221 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:24:34.221 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:25:06.288 Fuzzing completed. 
Shutting down the fuzz application 00:25:06.289 00:25:06.289 Dumping successful admin opcodes: 00:25:06.289 8, 9, 10, 24, 00:25:06.289 Dumping successful io opcodes: 00:25:06.289 0, 9, 00:25:06.289 NS: 0x2000008eff00 I/O qp, Total commands completed: 470790, total successful commands: 2715, random_seed: 1827080128 00:25:06.289 NS: 0x2000008eff00 admin qp, Total commands completed: 56944, total successful commands: 453, random_seed: 3047541184 00:25:06.289 11:37:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:25:06.289 Fuzzing completed. Shutting down the fuzz application 00:25:06.289 00:25:06.289 Dumping successful admin opcodes: 00:25:06.289 24, 00:25:06.289 Dumping successful io opcodes: 00:25:06.289 00:25:06.289 NS: 0x2000008eff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 1901062528 00:25:06.289 NS: 0x2000008eff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 1901177392 00:25:06.289 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:06.289 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.289 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:06.289 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.289 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:25:06.289 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:25:06.289 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:06.289 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:25:06.289 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:06.289 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:25:06.289 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:06.289 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:06.289 rmmod nvme_tcp 00:25:06.289 rmmod nvme_fabrics 00:25:06.289 rmmod nvme_keyring 00:25:06.289 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:06.289 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:25:06.289 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:25:06.289 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@517 -- # '[' -n 3872221 ']' 00:25:06.289 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # killprocess 3872221 00:25:06.289 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@952 -- # '[' -z 3872221 ']' 00:25:06.289 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # kill -0 3872221 00:25:06.289 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@957 -- # uname 00:25:06.289 11:37:06 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:06.289 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3872221 00:25:06.289 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:06.289 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:06.289 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3872221' 00:25:06.289 killing process with pid 3872221 00:25:06.289 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@971 -- # kill 3872221 00:25:06.289 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@976 -- # wait 3872221 00:25:06.289 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:06.289 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:06.289 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:06.289 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:25:06.289 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-save 00:25:06.289 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:06.289 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-restore 00:25:06.289 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:06.289 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:06.289 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:06.289 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:06.289 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:08.852 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:08.852 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:25:08.852 00:25:08.852 real 0m37.075s 00:25:08.852 user 0m51.597s 00:25:08.852 sys 0m14.553s 00:25:08.852 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:08.852 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:08.852 ************************************ 00:25:08.852 END TEST nvmf_fuzz 00:25:08.852 ************************************ 00:25:08.852 11:37:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:08.852 11:37:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:25:08.852 11:37:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:08.852 11:37:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:08.852 
************************************ 00:25:08.852 START TEST nvmf_multiconnection 00:25:08.852 ************************************ 00:25:08.852 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:08.852 * Looking for test storage... 00:25:08.852 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:08.852 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:08.852 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1691 -- # lcov --version 00:25:08.852 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:08.852 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:08.852 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:08.852 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:08.852 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:08.852 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:25:08.852 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:25:08.852 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:25:08.852 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:25:08.852 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:25:08.852 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:25:08.852 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:25:08.852 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:08.852 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:25:08.852 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:25:08.852 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:08.852 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:08.852 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:25:08.852 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:25:08.852 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:08.852 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:25:08.852 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:25:08.852 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:25:08.852 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:25:08.852 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:08.852 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:25:08.852 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:25:08.852 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:08.852 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:08.852 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:25:08.852 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:08.852 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:08.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:08.852 --rc genhtml_branch_coverage=1 00:25:08.852 --rc genhtml_function_coverage=1 00:25:08.852 --rc genhtml_legend=1 00:25:08.852 --rc geninfo_all_blocks=1 00:25:08.852 --rc geninfo_unexecuted_blocks=1 00:25:08.852 00:25:08.852 ' 00:25:08.852 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:08.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:08.852 --rc genhtml_branch_coverage=1 00:25:08.852 --rc genhtml_function_coverage=1 00:25:08.852 --rc genhtml_legend=1 00:25:08.852 --rc geninfo_all_blocks=1 00:25:08.852 --rc geninfo_unexecuted_blocks=1 00:25:08.853 00:25:08.853 ' 00:25:08.853 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:08.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:08.853 --rc genhtml_branch_coverage=1 00:25:08.853 --rc genhtml_function_coverage=1 00:25:08.853 --rc genhtml_legend=1 00:25:08.853 --rc geninfo_all_blocks=1 00:25:08.853 --rc geninfo_unexecuted_blocks=1 00:25:08.853 00:25:08.853 ' 00:25:08.853 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:08.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:08.853 --rc genhtml_branch_coverage=1 00:25:08.853 --rc genhtml_function_coverage=1 00:25:08.853 --rc genhtml_legend=1 00:25:08.853 --rc geninfo_all_blocks=1 00:25:08.853 --rc geninfo_unexecuted_blocks=1 00:25:08.853 00:25:08.853 ' 00:25:08.853 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:08.853 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:25:08.853 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:08.853 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:08.853 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:08.853 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:08.853 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:08.853 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:08.853 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:08.853 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:08.853 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:08.853 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:08.853 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:08.853 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:08.853 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:08.853 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:08.853 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:08.853 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:08.853 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:08.853 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:25:08.853 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:08.853 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:08.853 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:08.853 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.853 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.853 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.853 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:25:08.853 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.853 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:25:08.853 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:08.853 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:08.853 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:08.853 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:08.853 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:08.853 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:08.853 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:08.853 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:08.853 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:08.853 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:08.853 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:08.853 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:08.853 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:25:08.853 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:25:08.853 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:08.853 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:08.853 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:08.853 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:08.853 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:08.853 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:08.853 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:08.853 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:08.853 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:08.853 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:08.853 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@309 -- # xtrace_disable 00:25:08.853 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:10.752 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:10.752 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # pci_devs=() 00:25:10.752 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:10.752 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:10.752 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:10.752 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:10.752 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:10.752 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # net_devs=() 00:25:10.753 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:10.753 11:37:10 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # e810=() 00:25:10.753 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # local -ga e810 00:25:10.753 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # x722=() 00:25:10.753 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # local -ga x722 00:25:10.753 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # mlx=() 00:25:10.753 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # local -ga mlx 00:25:10.753 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:10.753 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:10.753 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:10.753 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:10.753 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:10.753 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:10.753 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:10.753 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:10.753 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:10.753 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:10.753 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:10.753 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:10.753 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:10.753 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:10.753 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:10.753 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:10.753 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:10.753 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:10.753 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:10.753 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:10.753 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:10.753 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:10.753 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:25:10.753 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:10.753 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:10.753 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:10.753 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:10.753 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:10.753 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:10.753 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:10.753 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:10.753 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:10.753 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:10.753 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:10.753 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:10.753 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:10.753 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:10.753 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:10.753 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:10.753 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:10.753 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:10.753 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:10.753 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:10.753 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:10.753 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:10.753 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:10.753 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:10.753 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:10.753 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:10.753 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:10.753 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:10.753 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:10.753 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:10.753 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:10.753 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:10.753 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:10.753 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:10.753 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:10.753 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # is_hw=yes 00:25:10.753 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:10.753 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:10.753 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:10.753 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:10.753 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:10.753 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:10.753 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:10.753 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:10.753 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:10.753 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:10.753 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:10.753 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:10.753 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:10.753 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:10.753 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:10.753 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:10.753 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:10.753 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:10.753 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:10.753 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:10.753 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:10.753 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set 
cvl_0_0 up 00:25:10.753 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:10.753 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:10.753 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:10.753 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:10.753 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:10.753 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:25:10.753 00:25:10.753 --- 10.0.0.2 ping statistics --- 00:25:10.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:10.753 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:25:10.753 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:10.753 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:10.753 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:25:10.753 00:25:10.753 --- 10.0.0.1 ping statistics --- 00:25:10.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:10.753 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:25:10.753 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:10.753 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # return 0 00:25:10.753 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:10.754 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:10.754 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:10.754 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:10.754 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:10.754 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:10.754 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:11.012 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:25:11.012 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:11.012 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:11.012 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:11.012 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # nvmfpid=3877939 00:25:11.012 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:11.012 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # waitforlisten 3877939 00:25:11.012 11:37:11 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@833 -- # '[' -z 3877939 ']' 00:25:11.012 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:11.012 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:11.012 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:11.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:11.012 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:11.012 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:11.012 [2024-11-02 11:37:11.212992] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:25:11.012 [2024-11-02 11:37:11.213066] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:11.012 [2024-11-02 11:37:11.293915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:11.012 [2024-11-02 11:37:11.342456] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:11.012 [2024-11-02 11:37:11.342526] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:11.012 [2024-11-02 11:37:11.342553] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:11.012 [2024-11-02 11:37:11.342576] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:11.012 [2024-11-02 11:37:11.342586] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
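The nvmfappstart step just traced launches the target inside the namespace and blocks until its RPC socket is up. Condensed, and with the process handling simplified (the real helper in nvmf/common.sh does the backgrounding and PID capture itself), it is roughly:

ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
waitforlisten "$nvmfpid"   # autotest helper: waits for /var/tmp/spdk.sock to accept RPCs
# As the startup notice above suggests, runtime tracepoints for this shm id
# could then be sampled with: spdk_trace -s nvmf -i 0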
00:25:11.012 [2024-11-02 11:37:11.344182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:11.012 [2024-11-02 11:37:11.344283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:11.012 [2024-11-02 11:37:11.344314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:11.012 [2024-11-02 11:37:11.344317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:11.271 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:11.271 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@866 -- # return 0 00:25:11.271 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:11.271 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:11.271 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:11.271 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:11.271 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:11.271 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.271 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:11.271 [2024-11-02 11:37:11.500095] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:11.271 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.271 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:25:11.271 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:11.271 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:11.271 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.271 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:11.271 Malloc1 00:25:11.271 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.271 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:25:11.271 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.271 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:11.271 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.272 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:11.272 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.272 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
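From here the trace repeats the same four RPCs once per subsystem. Condensed, the setup loop in multiconnection.sh amounts to the following, with the values seen in this run (11 subsystems, 64 MB malloc bdevs with 512-byte blocks, listeners on 10.0.0.2 port 4420):

rpc_cmd nvmf_create_transport -t tcp -o -u 8192
for i in $(seq 1 11); do
    rpc_cmd bdev_malloc_create 64 512 -b "Malloc$i"                              # backing bdev
    rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"   # subsystem, serial SPDK$i
    rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"       # attach namespace
    rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done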
00:25:11.272 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.272 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:11.272 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.272 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:11.272 [2024-11-02 11:37:11.573110] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:11.272 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.272 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:11.272 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:25:11.272 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.272 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:11.272 Malloc2 00:25:11.272 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.272 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:25:11.272 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.272 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:11.272 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.272 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:25:11.272 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.272 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:11.272 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.272 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:11.272 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.272 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:11.272 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.272 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:11.272 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:25:11.272 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.272 11:37:11 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:11.272 Malloc3 00:25:11.272 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.272 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:25:11.272 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.272 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:11.272 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.272 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:25:11.272 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.272 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:11.272 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.272 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:25:11.272 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.272 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:11.531 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.531 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:11.531 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:25:11.531 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.531 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:11.531 Malloc4 00:25:11.531 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.531 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:25:11.531 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.531 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:11.531 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.531 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:25:11.531 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.531 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:11.531 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.531 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:25:11.531 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.531 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:11.531 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.531 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:11.531 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:25:11.531 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.531 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:11.531 Malloc5 00:25:11.531 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.531 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:25:11.531 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.531 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:11.531 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.531 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:25:11.531 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.531 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:11.531 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.531 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:25:11.531 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.531 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:11.531 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.531 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:11.531 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:25:11.531 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.531 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:11.531 Malloc6 00:25:11.531 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:25:11.531 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:25:11.531 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.531 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:11.531 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.531 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:25:11.531 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.531 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:11.531 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.532 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:25:11.532 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.532 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:11.532 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.532 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:11.532 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:25:11.532 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.532 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:11.532 Malloc7 00:25:11.532 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.532 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:25:11.532 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.532 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:11.532 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.532 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:25:11.532 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.532 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:11.532 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.532 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 
00:25:11.532 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.532 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:11.532 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.532 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:11.532 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:25:11.532 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.532 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:11.532 Malloc8 00:25:11.532 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.532 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:25:11.532 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.532 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:11.532 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.532 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:25:11.532 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.532 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:11.532 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.532 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:25:11.532 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.532 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:11.532 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.532 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:11.532 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:25:11.532 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.532 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:11.790 Malloc9 00:25:11.790 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.790 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:25:11.790 11:37:11 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.790 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:11.790 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.790 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:25:11.790 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.790 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:11.790 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.790 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:25:11.790 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.790 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:11.790 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.790 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:11.790 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:25:11.790 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.790 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:11.790 Malloc10 00:25:11.790 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.790 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:25:11.790 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.790 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:11.790 11:37:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.790 11:37:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:25:11.790 11:37:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.790 11:37:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:11.790 11:37:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.790 11:37:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:25:11.790 11:37:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.790 11:37:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:25:11.790 11:37:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.790 11:37:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:11.790 11:37:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:25:11.790 11:37:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.790 11:37:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:11.790 Malloc11 00:25:11.790 11:37:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.790 11:37:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:25:11.790 11:37:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.790 11:37:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:11.790 11:37:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.790 11:37:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:25:11.790 11:37:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.790 11:37:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:11.790 11:37:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.790 11:37:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:25:11.790 11:37:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.790 11:37:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:11.790 11:37:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.790 11:37:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:25:11.790 11:37:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:11.790 11:37:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:12.722 11:37:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:25:12.722 11:37:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:25:12.722 11:37:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:25:12.722 11:37:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:25:12.722 11:37:12 
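The target-side setup traced above is multiconnection.sh creating one malloc-backed subsystem per connection (Malloc1..Malloc11, cnode1..cnode11), each with a TCP listener on 10.0.0.2:4420. Stripped of the rpc_cmd/xtrace wrappers, the RPC sequence reduces to the following sketch; the scripts/rpc.py path is an assumption (the test drives the same commands through its rpc_cmd helper against /var/tmp/spdk.sock).

  rpc=./scripts/rpc.py
  # same transport options as in the trace above
  $rpc nvmf_create_transport -t tcp -o -u 8192
  for i in $(seq 1 11); do
      # 64 MiB malloc bdev with 512-byte blocks
      $rpc bdev_malloc_create 64 512 -b Malloc$i
      # -a: allow any host, -s: serial number reported to the initiator
      $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
      $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
  done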
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:25:14.621 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:25:14.621 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:25:14.621 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK1 00:25:14.621 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:25:14.621 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:25:14.621 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:25:14.621 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:14.621 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:25:15.185 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:25:15.185 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:25:15.185 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:25:15.185 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:25:15.185 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:25:17.084 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:25:17.084 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:25:17.084 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK2 00:25:17.084 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:25:17.084 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:25:17.084 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:25:17.084 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:17.084 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:25:18.016 11:37:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:25:18.016 11:37:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:25:18.016 11:37:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 
nvme_devices=0 00:25:18.016 11:37:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:25:18.016 11:37:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:25:19.914 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:25:19.914 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:25:19.914 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK3 00:25:19.914 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:25:19.914 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:25:19.914 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:25:19.914 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:19.914 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:25:20.480 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:25:20.480 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:25:20.480 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:25:20.480 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:25:20.480 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:25:23.091 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:25:23.091 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:25:23.091 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK4 00:25:23.091 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:25:23.091 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:25:23.091 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:25:23.091 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:23.091 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:25:23.350 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:25:23.350 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # 
local i=0 00:25:23.350 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:25:23.350 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:25:23.350 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:25:25.245 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:25:25.245 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:25:25.245 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK5 00:25:25.503 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:25:25.503 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:25:25.503 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:25:25.503 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:25.503 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:25:26.068 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:25:26.068 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:25:26.069 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:25:26.069 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:25:26.069 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:25:28.595 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:25:28.595 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:25:28.595 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK6 00:25:28.595 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:25:28.595 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:25:28.595 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:25:28.595 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:28.595 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:25:29.160 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@30 -- # waitforserial SPDK7 00:25:29.160 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:25:29.160 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:25:29.160 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:25:29.160 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:25:31.058 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:25:31.058 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:25:31.058 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK7 00:25:31.058 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:25:31.058 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:25:31.058 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:25:31.058 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:31.058 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:25:31.993 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:25:31.993 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:25:31.993 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:25:31.993 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:25:31.993 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:25:33.891 11:37:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:25:33.891 11:37:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:25:33.891 11:37:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK8 00:25:33.891 11:37:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:25:33.891 11:37:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:25:33.891 11:37:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:25:33.891 11:37:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:33.891 11:37:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:25:34.824 11:37:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:25:34.824 11:37:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:25:34.824 11:37:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:25:34.824 11:37:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:25:34.824 11:37:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:25:36.722 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:25:36.722 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:25:36.722 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK9 00:25:36.722 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:25:36.722 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:25:36.722 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:25:36.722 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:36.722 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:25:37.655 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:25:37.655 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:25:37.655 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:25:37.655 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:25:37.655 11:37:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:25:40.180 11:37:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:25:40.180 11:37:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:25:40.180 11:37:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK10 00:25:40.180 11:37:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:25:40.180 11:37:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:25:40.180 11:37:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:25:40.180 11:37:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:40.180 11:37:39 
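Each host-side connection above repeats the same two-step pattern: nvme connect to the next subsystem, then waitforserial polls lsblk until a block device with the matching serial (SPDK1..SPDK11) appears. A condensed sketch of that loop follows; the retry limit and 2-second sleep come from the trace, while the error handling is an assumption (the real helper simply returns non-zero). The final connection, to cnode11, follows immediately below.

  for i in $(seq 1 11); do
      nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode$i -a 10.0.0.2 -s 4420 \
          --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
          --hostid=5b23e107-7094-e311-b1cb-001e67a97d55
      # waitforserial SPDK$i: poll roughly 15 times, 2 s apart, for the new namespace
      tries=0
      until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDK$i)" -ge 1 ]; do
          (( tries++ > 15 )) && { echo "SPDK$i never appeared" >&2; exit 1; }
          sleep 2
      done
  done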
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:25:40.438 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:25:40.438 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:25:40.438 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:25:40.438 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:25:40.438 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:25:42.963 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:25:42.963 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:25:42.963 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK11 00:25:42.963 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:25:42.963 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:25:42.963 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:25:42.963 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:25:42.963 [global] 00:25:42.963 thread=1 00:25:42.963 invalidate=1 00:25:42.963 rw=read 00:25:42.963 time_based=1 00:25:42.963 runtime=10 00:25:42.963 ioengine=libaio 00:25:42.963 direct=1 00:25:42.963 bs=262144 00:25:42.963 iodepth=64 00:25:42.963 norandommap=1 00:25:42.963 numjobs=1 00:25:42.963 00:25:42.963 [job0] 00:25:42.963 filename=/dev/nvme0n1 00:25:42.963 [job1] 00:25:42.963 filename=/dev/nvme10n1 00:25:42.963 [job2] 00:25:42.963 filename=/dev/nvme1n1 00:25:42.963 [job3] 00:25:42.963 filename=/dev/nvme2n1 00:25:42.963 [job4] 00:25:42.963 filename=/dev/nvme3n1 00:25:42.963 [job5] 00:25:42.963 filename=/dev/nvme4n1 00:25:42.963 [job6] 00:25:42.963 filename=/dev/nvme5n1 00:25:42.963 [job7] 00:25:42.963 filename=/dev/nvme6n1 00:25:42.963 [job8] 00:25:42.963 filename=/dev/nvme7n1 00:25:42.963 [job9] 00:25:42.963 filename=/dev/nvme8n1 00:25:42.963 [job10] 00:25:42.963 filename=/dev/nvme9n1 00:25:42.963 Could not set queue depth (nvme0n1) 00:25:42.963 Could not set queue depth (nvme10n1) 00:25:42.963 Could not set queue depth (nvme1n1) 00:25:42.963 Could not set queue depth (nvme2n1) 00:25:42.963 Could not set queue depth (nvme3n1) 00:25:42.963 Could not set queue depth (nvme4n1) 00:25:42.963 Could not set queue depth (nvme5n1) 00:25:42.963 Could not set queue depth (nvme6n1) 00:25:42.963 Could not set queue depth (nvme7n1) 00:25:42.963 Could not set queue depth (nvme8n1) 00:25:42.963 Could not set queue depth (nvme9n1) 00:25:42.963 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:42.963 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 
256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:42.963 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:42.963 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:42.963 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:42.963 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:42.963 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:42.963 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:42.963 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:42.963 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:42.963 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:42.963 fio-3.35 00:25:42.963 Starting 11 threads 00:25:55.162 00:25:55.162 job0: (groupid=0, jobs=1): err= 0: pid=3882205: Sat Nov 2 11:37:53 2024 00:25:55.162 read: IOPS=133, BW=33.4MiB/s (35.0MB/s)(340MiB/10183msec) 00:25:55.162 slat (usec): min=8, max=766071, avg=3367.08, stdev=38250.23 00:25:55.162 clat (usec): min=1048, max=2016.1k, avg=475763.63, stdev=589127.74 00:25:55.162 lat (usec): min=1065, max=2139.1k, avg=479130.71, stdev=594159.02 00:25:55.162 clat percentiles (usec): 00:25:55.162 | 1.00th=[ 1500], 5.00th=[ 2737], 10.00th=[ 3523], 00:25:55.162 | 20.00th=[ 45351], 30.00th=[ 70779], 40.00th=[ 98042], 00:25:55.162 | 50.00th=[ 119014], 60.00th=[ 252707], 70.00th=[ 675283], 00:25:55.162 | 80.00th=[1166017], 90.00th=[1484784], 95.00th=[1669333], 00:25:55.162 | 99.00th=[1988101], 99.50th=[1988101], 99.90th=[2021655], 00:25:55.162 | 99.95th=[2021655], 99.99th=[2021655] 00:25:55.162 bw ( KiB/s): min= 4608, max=114688, per=4.79%, avg=33150.70, stdev=36898.75, samples=20 00:25:55.162 iops : min= 18, max= 448, avg=129.45, stdev=144.16, samples=20 00:25:55.162 lat (msec) : 2=3.38%, 4=7.87%, 10=1.47%, 20=2.43%, 50=7.36% 00:25:55.162 lat (msec) : 100=17.95%, 250=19.35%, 500=7.80%, 750=4.56%, 1000=6.70% 00:25:55.162 lat (msec) : 2000=20.82%, >=2000=0.29% 00:25:55.162 cpu : usr=0.07%, sys=0.44%, ctx=391, majf=0, minf=4097 00:25:55.162 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.4%, >=64=95.4% 00:25:55.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:55.162 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:55.162 issued rwts: total=1359,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:55.162 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:55.162 job1: (groupid=0, jobs=1): err= 0: pid=3882206: Sat Nov 2 11:37:53 2024 00:25:55.162 read: IOPS=83, BW=20.9MiB/s (21.9MB/s)(213MiB/10188msec) 00:25:55.162 slat (usec): min=10, max=1077.4k, avg=6662.27, stdev=61609.75 00:25:55.162 clat (msec): min=30, max=2628, avg=757.84, stdev=742.03 00:25:55.162 lat (msec): min=30, max=2628, avg=764.51, stdev=749.75 00:25:55.162 clat percentiles (msec): 00:25:55.162 | 1.00th=[ 34], 5.00th=[ 42], 10.00th=[ 101], 20.00th=[ 144], 00:25:55.162 | 30.00th=[ 213], 40.00th=[ 284], 50.00th=[ 342], 60.00th=[ 625], 00:25:55.162 | 70.00th=[ 1133], 80.00th=[ 1519], 90.00th=[ 1955], 95.00th=[ 
2400], 00:25:55.162 | 99.00th=[ 2433], 99.50th=[ 2635], 99.90th=[ 2635], 99.95th=[ 2635], 00:25:55.162 | 99.99th=[ 2635] 00:25:55.162 bw ( KiB/s): min= 1536, max=88064, per=3.24%, avg=22439.44, stdev=20760.34, samples=18 00:25:55.162 iops : min= 6, max= 344, avg=87.61, stdev=81.08, samples=18 00:25:55.162 lat (msec) : 50=6.22%, 100=3.76%, 250=26.17%, 500=21.01%, 750=5.63% 00:25:55.162 lat (msec) : 1000=4.46%, 2000=23.71%, >=2000=9.04% 00:25:55.162 cpu : usr=0.00%, sys=0.33%, ctx=167, majf=0, minf=4097 00:25:55.162 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.9%, 32=3.8%, >=64=92.6% 00:25:55.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:55.162 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:55.162 issued rwts: total=852,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:55.162 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:55.162 job2: (groupid=0, jobs=1): err= 0: pid=3882207: Sat Nov 2 11:37:53 2024 00:25:55.162 read: IOPS=45, BW=11.4MiB/s (12.0MB/s)(117MiB/10181msec) 00:25:55.162 slat (usec): min=14, max=1155.4k, avg=19091.96, stdev=87220.95 00:25:55.162 clat (msec): min=88, max=1968, avg=1377.98, stdev=347.44 00:25:55.162 lat (msec): min=213, max=2438, avg=1397.07, stdev=350.71 00:25:55.162 clat percentiles (msec): 00:25:55.162 | 1.00th=[ 213], 5.00th=[ 743], 10.00th=[ 927], 20.00th=[ 1083], 00:25:55.162 | 30.00th=[ 1200], 40.00th=[ 1301], 50.00th=[ 1469], 60.00th=[ 1552], 00:25:55.162 | 70.00th=[ 1620], 80.00th=[ 1687], 90.00th=[ 1737], 95.00th=[ 1787], 00:25:55.162 | 99.00th=[ 1955], 99.50th=[ 1955], 99.90th=[ 1972], 99.95th=[ 1972], 00:25:55.162 | 99.99th=[ 1972] 00:25:55.162 bw ( KiB/s): min= 2560, max=27081, per=1.75%, avg=12104.06, stdev=6899.65, samples=17 00:25:55.162 iops : min= 10, max= 105, avg=47.24, stdev=26.85, samples=17 00:25:55.162 lat (msec) : 100=0.21%, 250=1.07%, 500=1.50%, 750=3.00%, 1000=4.72% 00:25:55.162 lat (msec) : 2000=89.48% 00:25:55.162 cpu : usr=0.00%, sys=0.28%, ctx=58, majf=0, minf=4097 00:25:55.162 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.7%, 16=3.4%, 32=6.9%, >=64=86.5% 00:25:55.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:55.162 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0% 00:25:55.162 issued rwts: total=466,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:55.162 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:55.162 job3: (groupid=0, jobs=1): err= 0: pid=3882208: Sat Nov 2 11:37:53 2024 00:25:55.162 read: IOPS=581, BW=145MiB/s (153MB/s)(1477MiB/10153msec) 00:25:55.162 slat (usec): min=13, max=137592, avg=1694.70, stdev=7305.58 00:25:55.162 clat (msec): min=26, max=848, avg=108.16, stdev=127.41 00:25:55.162 lat (msec): min=30, max=848, avg=109.86, stdev=129.28 00:25:55.162 clat percentiles (msec): 00:25:55.162 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 35], 00:25:55.162 | 30.00th=[ 37], 40.00th=[ 39], 50.00th=[ 50], 60.00th=[ 60], 00:25:55.162 | 70.00th=[ 99], 80.00th=[ 167], 90.00th=[ 264], 95.00th=[ 376], 00:25:55.162 | 99.00th=[ 667], 99.50th=[ 735], 99.90th=[ 827], 99.95th=[ 835], 00:25:55.162 | 99.99th=[ 852] 00:25:55.162 bw ( KiB/s): min=19968, max=461312, per=21.62%, avg=149626.15, stdev=146715.39, samples=20 00:25:55.162 iops : min= 78, max= 1802, avg=584.45, stdev=573.07, samples=20 00:25:55.162 lat (msec) : 50=51.14%, 100=19.00%, 250=19.04%, 500=8.60%, 750=1.74% 00:25:55.162 lat (msec) : 1000=0.47% 00:25:55.162 cpu : usr=0.34%, sys=2.07%, ctx=1070, majf=0, minf=4097 00:25:55.162 
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:25:55.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:55.162 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:55.162 issued rwts: total=5909,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:55.162 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:55.162 job4: (groupid=0, jobs=1): err= 0: pid=3882209: Sat Nov 2 11:37:53 2024 00:25:55.162 read: IOPS=46, BW=11.6MiB/s (12.2MB/s)(119MiB/10183msec) 00:25:55.162 slat (usec): min=14, max=1130.6k, avg=21091.97, stdev=91961.06 00:25:55.162 clat (msec): min=181, max=2272, avg=1352.31, stdev=309.07 00:25:55.162 lat (msec): min=620, max=2272, avg=1373.40, stdev=311.86 00:25:55.162 clat percentiles (msec): 00:25:55.162 | 1.00th=[ 617], 5.00th=[ 751], 10.00th=[ 936], 20.00th=[ 1070], 00:25:55.162 | 30.00th=[ 1234], 40.00th=[ 1351], 50.00th=[ 1401], 60.00th=[ 1435], 00:25:55.162 | 70.00th=[ 1502], 80.00th=[ 1620], 90.00th=[ 1737], 95.00th=[ 1787], 00:25:55.162 | 99.00th=[ 1821], 99.50th=[ 1905], 99.90th=[ 2265], 99.95th=[ 2265], 00:25:55.162 | 99.99th=[ 2265] 00:25:55.162 bw ( KiB/s): min= 2560, max=23552, per=1.69%, avg=11689.44, stdev=6137.29, samples=18 00:25:55.162 iops : min= 10, max= 92, avg=45.61, stdev=23.98, samples=18 00:25:55.162 lat (msec) : 250=0.21%, 750=4.22%, 1000=9.92%, 2000=85.23%, >=2000=0.42% 00:25:55.162 cpu : usr=0.04%, sys=0.25%, ctx=63, majf=0, minf=4097 00:25:55.162 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.7%, 16=3.4%, 32=6.8%, >=64=86.7% 00:25:55.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:55.162 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0% 00:25:55.162 issued rwts: total=474,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:55.162 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:55.162 job5: (groupid=0, jobs=1): err= 0: pid=3882210: Sat Nov 2 11:37:53 2024 00:25:55.162 read: IOPS=96, BW=24.2MiB/s (25.4MB/s)(246MiB/10183msec) 00:25:55.162 slat (usec): min=9, max=1225.5k, avg=9821.68, stdev=62725.01 00:25:55.162 clat (msec): min=8, max=1981, avg=651.19, stdev=642.45 00:25:55.162 lat (msec): min=8, max=2529, avg=661.01, stdev=652.48 00:25:55.162 clat percentiles (msec): 00:25:55.162 | 1.00th=[ 21], 5.00th=[ 50], 10.00th=[ 69], 20.00th=[ 121], 00:25:55.162 | 30.00th=[ 134], 40.00th=[ 146], 50.00th=[ 180], 60.00th=[ 785], 00:25:55.162 | 70.00th=[ 1301], 80.00th=[ 1435], 90.00th=[ 1569], 95.00th=[ 1687], 00:25:55.162 | 99.00th=[ 1770], 99.50th=[ 1770], 99.90th=[ 1989], 99.95th=[ 1989], 00:25:55.162 | 99.99th=[ 1989] 00:25:55.162 bw ( KiB/s): min= 5632, max=131584, per=3.79%, avg=26195.17, stdev=36064.43, samples=18 00:25:55.162 iops : min= 22, max= 514, avg=102.28, stdev=140.89, samples=18 00:25:55.163 lat (msec) : 10=0.10%, 20=0.71%, 50=4.87%, 100=9.04%, 250=41.93% 00:25:55.163 lat (msec) : 500=2.34%, 1000=4.47%, 2000=36.55% 00:25:55.163 cpu : usr=0.00%, sys=0.40%, ctx=141, majf=0, minf=4098 00:25:55.163 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.2%, >=64=93.6% 00:25:55.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:55.163 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:55.163 issued rwts: total=985,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:55.163 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:55.163 job6: (groupid=0, jobs=1): err= 0: pid=3882211: Sat Nov 2 11:37:53 2024 00:25:55.163 read: IOPS=348, BW=87.1MiB/s 
(91.4MB/s)(885MiB/10152msec) 00:25:55.163 slat (usec): min=10, max=777370, avg=1688.26, stdev=25007.41 00:25:55.163 clat (usec): min=1711, max=2059.1k, avg=181758.78, stdev=407632.87 00:25:55.163 lat (msec): min=3, max=2127, avg=183.45, stdev=412.50 00:25:55.163 clat percentiles (msec): 00:25:55.163 | 1.00th=[ 22], 5.00th=[ 26], 10.00th=[ 27], 20.00th=[ 29], 00:25:55.163 | 30.00th=[ 31], 40.00th=[ 32], 50.00th=[ 34], 60.00th=[ 38], 00:25:55.163 | 70.00th=[ 43], 80.00th=[ 92], 90.00th=[ 617], 95.00th=[ 1452], 00:25:55.163 | 99.00th=[ 1687], 99.50th=[ 1838], 99.90th=[ 1854], 99.95th=[ 1871], 00:25:55.163 | 99.99th=[ 2056] 00:25:55.163 bw ( KiB/s): min= 3584, max=428544, per=13.53%, avg=93615.16, stdev=141914.47, samples=19 00:25:55.163 iops : min= 14, max= 1674, avg=365.68, stdev=554.35, samples=19 00:25:55.163 lat (msec) : 2=0.03%, 4=0.06%, 10=0.08%, 20=0.57%, 50=74.84% 00:25:55.163 lat (msec) : 100=6.19%, 250=5.57%, 500=2.01%, 750=1.75%, 1000=0.99% 00:25:55.163 lat (msec) : 2000=7.89%, >=2000=0.03% 00:25:55.163 cpu : usr=0.23%, sys=1.34%, ctx=645, majf=0, minf=4097 00:25:55.163 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:25:55.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:55.163 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:55.163 issued rwts: total=3538,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:55.163 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:55.163 job7: (groupid=0, jobs=1): err= 0: pid=3882212: Sat Nov 2 11:37:53 2024 00:25:55.163 read: IOPS=515, BW=129MiB/s (135MB/s)(1303MiB/10120msec) 00:25:55.163 slat (usec): min=13, max=177492, avg=1748.53, stdev=7505.65 00:25:55.163 clat (msec): min=26, max=695, avg=122.37, stdev=129.72 00:25:55.163 lat (msec): min=26, max=752, avg=124.11, stdev=131.41 00:25:55.163 clat percentiles (msec): 00:25:55.163 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 36], 00:25:55.163 | 30.00th=[ 38], 40.00th=[ 44], 50.00th=[ 60], 60.00th=[ 78], 00:25:55.163 | 70.00th=[ 136], 80.00th=[ 188], 90.00th=[ 321], 95.00th=[ 397], 00:25:55.163 | 99.00th=[ 609], 99.50th=[ 634], 99.90th=[ 693], 99.95th=[ 693], 00:25:55.163 | 99.99th=[ 693] 00:25:55.163 bw ( KiB/s): min=25600, max=465920, per=19.05%, avg=131800.00, stdev=131827.62, samples=20 00:25:55.163 iops : min= 100, max= 1820, avg=514.80, stdev=514.86, samples=20 00:25:55.163 lat (msec) : 50=42.57%, 100=21.96%, 250=19.05%, 500=14.06%, 750=2.36% 00:25:55.163 cpu : usr=0.35%, sys=1.69%, ctx=917, majf=0, minf=4097 00:25:55.163 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:55.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:55.163 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:55.163 issued rwts: total=5213,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:55.163 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:55.163 job8: (groupid=0, jobs=1): err= 0: pid=3882215: Sat Nov 2 11:37:53 2024 00:25:55.163 read: IOPS=390, BW=97.6MiB/s (102MB/s)(988MiB/10117msec) 00:25:55.163 slat (usec): min=10, max=1337.4k, avg=1392.54, stdev=22246.34 00:25:55.163 clat (usec): min=1485, max=2154.1k, avg=162328.28, stdev=301179.63 00:25:55.163 lat (usec): min=1512, max=2463.7k, avg=163720.81, stdev=303433.24 00:25:55.163 clat percentiles (msec): 00:25:55.163 | 1.00th=[ 3], 5.00th=[ 13], 10.00th=[ 20], 20.00th=[ 36], 00:25:55.163 | 30.00th=[ 44], 40.00th=[ 62], 50.00th=[ 106], 60.00th=[ 110], 00:25:55.163 | 
70.00th=[ 113], 80.00th=[ 117], 90.00th=[ 266], 95.00th=[ 693], 00:25:55.163 | 99.00th=[ 1653], 99.50th=[ 1938], 99.90th=[ 1938], 99.95th=[ 2165], 00:25:55.163 | 99.99th=[ 2165] 00:25:55.163 bw ( KiB/s): min= 7680, max=241692, per=15.14%, avg=104745.89, stdev=72820.88, samples=19 00:25:55.163 iops : min= 30, max= 944, avg=409.16, stdev=284.45, samples=19 00:25:55.163 lat (msec) : 2=0.53%, 4=1.09%, 10=2.23%, 20=6.20%, 50=22.78% 00:25:55.163 lat (msec) : 100=11.54%, 250=45.36%, 500=3.04%, 750=2.38%, 1000=0.38% 00:25:55.163 lat (msec) : 2000=4.43%, >=2000=0.05% 00:25:55.163 cpu : usr=0.27%, sys=1.56%, ctx=1549, majf=0, minf=3721 00:25:55.163 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:25:55.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:55.163 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:55.163 issued rwts: total=3951,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:55.163 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:55.163 job9: (groupid=0, jobs=1): err= 0: pid=3882216: Sat Nov 2 11:37:53 2024 00:25:55.163 read: IOPS=254, BW=63.7MiB/s (66.8MB/s)(647MiB/10152msec) 00:25:55.163 slat (usec): min=9, max=280602, avg=2480.13, stdev=11778.01 00:25:55.163 clat (msec): min=4, max=2105, avg=248.34, stdev=319.35 00:25:55.163 lat (msec): min=5, max=2105, avg=250.82, stdev=319.76 00:25:55.163 clat percentiles (msec): 00:25:55.163 | 1.00th=[ 10], 5.00th=[ 13], 10.00th=[ 15], 20.00th=[ 42], 00:25:55.163 | 30.00th=[ 138], 40.00th=[ 157], 50.00th=[ 174], 60.00th=[ 188], 00:25:55.163 | 70.00th=[ 222], 80.00th=[ 321], 90.00th=[ 506], 95.00th=[ 894], 00:25:55.163 | 99.00th=[ 1804], 99.50th=[ 1921], 99.90th=[ 2106], 99.95th=[ 2106], 00:25:55.163 | 99.99th=[ 2106] 00:25:55.163 bw ( KiB/s): min=18944, max=165376, per=9.83%, avg=68004.79, stdev=37306.02, samples=19 00:25:55.163 iops : min= 74, max= 646, avg=265.63, stdev=145.72, samples=19 00:25:55.163 lat (msec) : 10=1.70%, 20=13.91%, 50=5.80%, 100=4.75%, 250=49.23% 00:25:55.163 lat (msec) : 500=14.18%, 750=4.17%, 1000=2.74%, 2000=3.36%, >=2000=0.15% 00:25:55.163 cpu : usr=0.11%, sys=0.95%, ctx=569, majf=0, minf=4098 00:25:55.163 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:25:55.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:55.163 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:55.163 issued rwts: total=2588,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:55.163 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:55.163 job10: (groupid=0, jobs=1): err= 0: pid=3882219: Sat Nov 2 11:37:53 2024 00:25:55.163 read: IOPS=217, BW=54.5MiB/s (57.1MB/s)(551MiB/10120msec) 00:25:55.163 slat (usec): min=12, max=1132.7k, avg=4057.87, stdev=26776.08 00:25:55.163 clat (msec): min=2, max=1744, avg=289.40, stdev=245.34 00:25:55.163 lat (msec): min=2, max=1744, avg=293.45, stdev=246.44 00:25:55.163 clat percentiles (msec): 00:25:55.163 | 1.00th=[ 17], 5.00th=[ 79], 10.00th=[ 93], 20.00th=[ 129], 00:25:55.163 | 30.00th=[ 157], 40.00th=[ 207], 50.00th=[ 259], 60.00th=[ 305], 00:25:55.163 | 70.00th=[ 330], 80.00th=[ 368], 90.00th=[ 468], 95.00th=[ 550], 00:25:55.163 | 99.00th=[ 1552], 99.50th=[ 1552], 99.90th=[ 1569], 99.95th=[ 1569], 00:25:55.163 | 99.99th=[ 1754] 00:25:55.163 bw ( KiB/s): min=16351, max=113664, per=8.34%, avg=57719.53, stdev=26051.72, samples=19 00:25:55.163 iops : min= 63, max= 444, avg=225.42, stdev=101.84, samples=19 00:25:55.163 lat (msec) : 
4=0.54%, 10=0.09%, 20=0.86%, 50=0.77%, 100=10.07% 00:25:55.163 lat (msec) : 250=36.42%, 500=43.22%, 750=5.17%, 2000=2.86% 00:25:55.163 cpu : usr=0.18%, sys=0.89%, ctx=406, majf=0, minf=4097 00:25:55.163 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.5%, >=64=97.1% 00:25:55.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:55.163 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:55.163 issued rwts: total=2205,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:55.163 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:55.163 00:25:55.163 Run status group 0 (all jobs): 00:25:55.163 READ: bw=676MiB/s (709MB/s), 11.4MiB/s-145MiB/s (12.0MB/s-153MB/s), io=6885MiB (7219MB), run=10117-10188msec 00:25:55.163 00:25:55.163 Disk stats (read/write): 00:25:55.163 nvme0n1: ios=2587/0, merge=0/0, ticks=1187332/0, in_queue=1187332, util=97.33% 00:25:55.163 nvme10n1: ios=1577/0, merge=0/0, ticks=1205853/0, in_queue=1205853, util=97.54% 00:25:55.163 nvme1n1: ios=795/0, merge=0/0, ticks=1144054/0, in_queue=1144054, util=97.78% 00:25:55.163 nvme2n1: ios=11682/0, merge=0/0, ticks=1206739/0, in_queue=1206739, util=97.89% 00:25:55.163 nvme3n1: ios=823/0, merge=0/0, ticks=1122678/0, in_queue=1122678, util=97.96% 00:25:55.163 nvme4n1: ios=1843/0, merge=0/0, ticks=1163966/0, in_queue=1163966, util=98.28% 00:25:55.163 nvme5n1: ios=6948/0, merge=0/0, ticks=1207631/0, in_queue=1207631, util=98.42% 00:25:55.163 nvme6n1: ios=10279/0, merge=0/0, ticks=1235158/0, in_queue=1235158, util=98.50% 00:25:55.163 nvme7n1: ios=7724/0, merge=0/0, ticks=1238824/0, in_queue=1238824, util=98.95% 00:25:55.163 nvme8n1: ios=5048/0, merge=0/0, ticks=1213278/0, in_queue=1213278, util=99.12% 00:25:55.163 nvme9n1: ios=4247/0, merge=0/0, ticks=1232052/0, in_queue=1232052, util=99.24% 00:25:55.163 11:37:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:25:55.163 [global] 00:25:55.163 thread=1 00:25:55.163 invalidate=1 00:25:55.163 rw=randwrite 00:25:55.163 time_based=1 00:25:55.163 runtime=10 00:25:55.163 ioengine=libaio 00:25:55.163 direct=1 00:25:55.163 bs=262144 00:25:55.163 iodepth=64 00:25:55.163 norandommap=1 00:25:55.163 numjobs=1 00:25:55.163 00:25:55.163 [job0] 00:25:55.163 filename=/dev/nvme0n1 00:25:55.163 [job1] 00:25:55.163 filename=/dev/nvme10n1 00:25:55.163 [job2] 00:25:55.163 filename=/dev/nvme1n1 00:25:55.163 [job3] 00:25:55.163 filename=/dev/nvme2n1 00:25:55.163 [job4] 00:25:55.163 filename=/dev/nvme3n1 00:25:55.163 [job5] 00:25:55.163 filename=/dev/nvme4n1 00:25:55.163 [job6] 00:25:55.163 filename=/dev/nvme5n1 00:25:55.163 [job7] 00:25:55.163 filename=/dev/nvme6n1 00:25:55.163 [job8] 00:25:55.163 filename=/dev/nvme7n1 00:25:55.163 [job9] 00:25:55.163 filename=/dev/nvme8n1 00:25:55.163 [job10] 00:25:55.163 filename=/dev/nvme9n1 00:25:55.163 Could not set queue depth (nvme0n1) 00:25:55.163 Could not set queue depth (nvme10n1) 00:25:55.163 Could not set queue depth (nvme1n1) 00:25:55.163 Could not set queue depth (nvme2n1) 00:25:55.163 Could not set queue depth (nvme3n1) 00:25:55.163 Could not set queue depth (nvme4n1) 00:25:55.163 Could not set queue depth (nvme5n1) 00:25:55.164 Could not set queue depth (nvme6n1) 00:25:55.164 Could not set queue depth (nvme7n1) 00:25:55.164 Could not set queue depth (nvme8n1) 00:25:55.164 Could not set queue depth (nvme9n1) 00:25:55.164 job0: (g=0): rw=randwrite, 
bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:55.164 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:55.164 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:55.164 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:55.164 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:55.164 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:55.164 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:55.164 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:55.164 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:55.164 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:55.164 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:55.164 fio-3.35 00:25:55.164 Starting 11 threads 00:26:05.138 00:26:05.138 job0: (groupid=0, jobs=1): err= 0: pid=3882950: Sat Nov 2 11:38:04 2024 00:26:05.138 write: IOPS=200, BW=50.2MiB/s (52.7MB/s)(515MiB/10246msec); 0 zone resets 00:26:05.138 slat (usec): min=16, max=243371, avg=4259.99, stdev=12050.20 00:26:05.138 clat (usec): min=1449, max=780440, avg=314158.95, stdev=193018.29 00:26:05.138 lat (msec): min=2, max=780, avg=318.42, stdev=195.60 00:26:05.138 clat percentiles (msec): 00:26:05.138 | 1.00th=[ 5], 5.00th=[ 22], 10.00th=[ 88], 20.00th=[ 142], 00:26:05.138 | 30.00th=[ 180], 40.00th=[ 224], 50.00th=[ 271], 60.00th=[ 347], 00:26:05.138 | 70.00th=[ 422], 80.00th=[ 542], 90.00th=[ 600], 95.00th=[ 634], 00:26:05.138 | 99.00th=[ 684], 99.50th=[ 693], 99.90th=[ 743], 99.95th=[ 785], 00:26:05.138 | 99.99th=[ 785] 00:26:05.138 bw ( KiB/s): min=22528, max=154112, per=6.30%, avg=51040.80, stdev=34427.49, samples=20 00:26:05.138 iops : min= 88, max= 602, avg=199.30, stdev=134.54, samples=20 00:26:05.138 lat (msec) : 2=0.05%, 4=0.63%, 10=1.85%, 20=2.09%, 50=3.74% 00:26:05.138 lat (msec) : 100=2.43%, 250=36.59%, 500=28.18%, 750=24.34%, 1000=0.10% 00:26:05.138 cpu : usr=0.48%, sys=0.79%, ctx=878, majf=0, minf=1 00:26:05.138 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.9% 00:26:05.138 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.138 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:05.138 issued rwts: total=0,2058,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.138 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:05.138 job1: (groupid=0, jobs=1): err= 0: pid=3882962: Sat Nov 2 11:38:04 2024 00:26:05.138 write: IOPS=291, BW=72.9MiB/s (76.4MB/s)(735MiB/10087msec); 0 zone resets 00:26:05.138 slat (usec): min=18, max=64627, avg=2191.93, stdev=6701.00 00:26:05.138 clat (usec): min=836, max=719126, avg=217080.62, stdev=155900.16 00:26:05.138 lat (usec): min=873, max=719212, avg=219272.56, stdev=156913.56 00:26:05.138 clat percentiles (msec): 00:26:05.138 | 1.00th=[ 3], 5.00th=[ 21], 10.00th=[ 57], 20.00th=[ 106], 00:26:05.138 | 30.00th=[ 123], 
40.00th=[ 155], 50.00th=[ 171], 60.00th=[ 203], 00:26:05.138 | 70.00th=[ 247], 80.00th=[ 321], 90.00th=[ 477], 95.00th=[ 575], 00:26:05.138 | 99.00th=[ 651], 99.50th=[ 667], 99.90th=[ 709], 99.95th=[ 718], 00:26:05.138 | 99.99th=[ 718] 00:26:05.138 bw ( KiB/s): min=22528, max=128000, per=9.09%, avg=73655.80, stdev=33397.63, samples=20 00:26:05.138 iops : min= 88, max= 500, avg=287.70, stdev=130.45, samples=20 00:26:05.138 lat (usec) : 1000=0.03% 00:26:05.138 lat (msec) : 2=0.48%, 4=1.90%, 10=2.11%, 20=0.51%, 50=3.30% 00:26:05.138 lat (msec) : 100=8.57%, 250=54.27%, 500=20.13%, 750=8.70% 00:26:05.138 cpu : usr=0.91%, sys=0.99%, ctx=1552, majf=0, minf=1 00:26:05.138 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:26:05.138 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.138 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:05.138 issued rwts: total=0,2941,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.138 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:05.138 job2: (groupid=0, jobs=1): err= 0: pid=3882963: Sat Nov 2 11:38:04 2024 00:26:05.138 write: IOPS=268, BW=67.2MiB/s (70.5MB/s)(689MiB/10251msec); 0 zone resets 00:26:05.138 slat (usec): min=15, max=305784, avg=1964.82, stdev=11024.82 00:26:05.138 clat (usec): min=910, max=942062, avg=235957.77, stdev=230574.14 00:26:05.138 lat (usec): min=951, max=990113, avg=237922.59, stdev=232917.53 00:26:05.138 clat percentiles (msec): 00:26:05.138 | 1.00th=[ 3], 5.00th=[ 11], 10.00th=[ 29], 20.00th=[ 61], 00:26:05.138 | 30.00th=[ 80], 40.00th=[ 87], 50.00th=[ 111], 60.00th=[ 234], 00:26:05.138 | 70.00th=[ 321], 80.00th=[ 414], 90.00th=[ 567], 95.00th=[ 776], 00:26:05.138 | 99.00th=[ 894], 99.50th=[ 919], 99.90th=[ 936], 99.95th=[ 936], 00:26:05.138 | 99.99th=[ 944] 00:26:05.138 bw ( KiB/s): min=12800, max=194560, per=8.51%, avg=68903.90, stdev=49546.78, samples=20 00:26:05.138 iops : min= 50, max= 760, avg=269.10, stdev=193.57, samples=20 00:26:05.138 lat (usec) : 1000=0.04% 00:26:05.138 lat (msec) : 2=0.51%, 4=2.25%, 10=2.03%, 20=2.58%, 50=9.54% 00:26:05.138 lat (msec) : 100=30.99%, 250=14.19%, 500=22.97%, 750=9.25%, 1000=5.66% 00:26:05.138 cpu : usr=0.84%, sys=0.95%, ctx=1960, majf=0, minf=2 00:26:05.138 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:26:05.138 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.138 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:05.138 issued rwts: total=0,2756,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.138 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:05.138 job3: (groupid=0, jobs=1): err= 0: pid=3882964: Sat Nov 2 11:38:04 2024 00:26:05.138 write: IOPS=438, BW=110MiB/s (115MB/s)(1104MiB/10075msec); 0 zone resets 00:26:05.138 slat (usec): min=13, max=35373, avg=1090.28, stdev=3593.77 00:26:05.138 clat (usec): min=775, max=911602, avg=144831.50, stdev=130495.06 00:26:05.138 lat (usec): min=805, max=911657, avg=145921.78, stdev=131010.04 00:26:05.138 clat percentiles (usec): 00:26:05.138 | 1.00th=[ 1663], 5.00th=[ 5932], 10.00th=[ 32375], 20.00th=[ 51643], 00:26:05.138 | 30.00th=[ 56361], 40.00th=[ 78119], 50.00th=[ 98042], 60.00th=[127402], 00:26:05.139 | 70.00th=[193987], 80.00th=[231736], 90.00th=[308282], 95.00th=[396362], 00:26:05.139 | 99.00th=[616563], 99.50th=[692061], 99.90th=[868221], 99.95th=[893387], 00:26:05.139 | 99.99th=[910164] 00:26:05.139 bw ( KiB/s): min=46592, max=304542, per=13.76%, 
avg=111425.45, stdev=64642.37, samples=20 00:26:05.139 iops : min= 182, max= 1189, avg=435.20, stdev=252.43, samples=20 00:26:05.139 lat (usec) : 1000=0.29% 00:26:05.139 lat (msec) : 2=1.25%, 4=1.86%, 10=3.37%, 20=2.06%, 50=8.33% 00:26:05.139 lat (msec) : 100=34.48%, 250=31.90%, 500=13.81%, 750=2.31%, 1000=0.34% 00:26:05.139 cpu : usr=1.51%, sys=1.61%, ctx=2653, majf=0, minf=1 00:26:05.139 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:26:05.139 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.139 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:05.139 issued rwts: total=0,4417,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.139 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:05.139 job4: (groupid=0, jobs=1): err= 0: pid=3882965: Sat Nov 2 11:38:04 2024 00:26:05.139 write: IOPS=193, BW=48.4MiB/s (50.7MB/s)(496MiB/10246msec); 0 zone resets 00:26:05.139 slat (usec): min=17, max=119798, avg=2112.05, stdev=8258.69 00:26:05.139 clat (usec): min=1214, max=908322, avg=328405.96, stdev=213083.04 00:26:05.139 lat (usec): min=1243, max=908360, avg=330518.02, stdev=214573.99 00:26:05.139 clat percentiles (msec): 00:26:05.139 | 1.00th=[ 8], 5.00th=[ 27], 10.00th=[ 45], 20.00th=[ 104], 00:26:05.139 | 30.00th=[ 184], 40.00th=[ 251], 50.00th=[ 309], 60.00th=[ 363], 00:26:05.139 | 70.00th=[ 460], 80.00th=[ 542], 90.00th=[ 625], 95.00th=[ 693], 00:26:05.139 | 99.00th=[ 776], 99.50th=[ 810], 99.90th=[ 894], 99.95th=[ 911], 00:26:05.139 | 99.99th=[ 911] 00:26:05.139 bw ( KiB/s): min=23040, max=92672, per=6.06%, avg=49117.85, stdev=19734.77, samples=20 00:26:05.139 iops : min= 90, max= 362, avg=191.80, stdev=77.12, samples=20 00:26:05.139 lat (msec) : 2=0.15%, 4=0.50%, 10=1.16%, 20=1.06%, 50=7.72% 00:26:05.139 lat (msec) : 100=8.93%, 250=20.22%, 500=35.10%, 750=23.60%, 1000=1.56% 00:26:05.139 cpu : usr=0.54%, sys=0.80%, ctx=1438, majf=0, minf=1 00:26:05.139 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.8% 00:26:05.139 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.139 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:05.139 issued rwts: total=0,1983,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.139 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:05.139 job5: (groupid=0, jobs=1): err= 0: pid=3882966: Sat Nov 2 11:38:04 2024 00:26:05.139 write: IOPS=274, BW=68.5MiB/s (71.8MB/s)(702MiB/10245msec); 0 zone resets 00:26:05.139 slat (usec): min=23, max=446023, avg=2866.71, stdev=14067.13 00:26:05.139 clat (usec): min=1789, max=900116, avg=230095.72, stdev=169322.37 00:26:05.139 lat (usec): min=1940, max=900159, avg=232962.44, stdev=170749.98 00:26:05.139 clat percentiles (msec): 00:26:05.139 | 1.00th=[ 13], 5.00th=[ 50], 10.00th=[ 54], 20.00th=[ 72], 00:26:05.139 | 30.00th=[ 126], 40.00th=[ 157], 50.00th=[ 190], 60.00th=[ 232], 00:26:05.139 | 70.00th=[ 275], 80.00th=[ 363], 90.00th=[ 481], 95.00th=[ 567], 00:26:05.139 | 99.00th=[ 776], 99.50th=[ 827], 99.90th=[ 869], 99.95th=[ 902], 00:26:05.139 | 99.99th=[ 902] 00:26:05.139 bw ( KiB/s): min=24576, max=178688, per=8.67%, avg=70262.45, stdev=43830.18, samples=20 00:26:05.139 iops : min= 96, max= 698, avg=274.40, stdev=171.25, samples=20 00:26:05.139 lat (msec) : 2=0.07%, 4=0.14%, 10=0.57%, 20=0.82%, 50=3.81% 00:26:05.139 lat (msec) : 100=19.87%, 250=39.96%, 500=26.39%, 750=7.09%, 1000=1.28% 00:26:05.139 cpu : usr=0.83%, sys=1.15%, ctx=1188, majf=0, minf=1 
00:26:05.139 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:26:05.139 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.139 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:05.139 issued rwts: total=0,2808,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.139 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:05.139 job6: (groupid=0, jobs=1): err= 0: pid=3882967: Sat Nov 2 11:38:04 2024 00:26:05.139 write: IOPS=249, BW=62.3MiB/s (65.3MB/s)(628MiB/10080msec); 0 zone resets 00:26:05.139 slat (usec): min=15, max=295544, avg=2372.71, stdev=9852.10 00:26:05.139 clat (usec): min=997, max=901799, avg=254584.51, stdev=190746.91 00:26:05.139 lat (usec): min=1032, max=901911, avg=256957.22, stdev=192598.25 00:26:05.139 clat percentiles (msec): 00:26:05.139 | 1.00th=[ 4], 5.00th=[ 24], 10.00th=[ 43], 20.00th=[ 90], 00:26:05.139 | 30.00th=[ 125], 40.00th=[ 155], 50.00th=[ 213], 60.00th=[ 264], 00:26:05.139 | 70.00th=[ 321], 80.00th=[ 405], 90.00th=[ 567], 95.00th=[ 651], 00:26:05.139 | 99.00th=[ 735], 99.50th=[ 802], 99.90th=[ 885], 99.95th=[ 894], 00:26:05.139 | 99.99th=[ 902] 00:26:05.139 bw ( KiB/s): min=22528, max=161980, per=7.73%, avg=62620.20, stdev=33059.96, samples=20 00:26:05.139 iops : min= 88, max= 632, avg=244.55, stdev=129.02, samples=20 00:26:05.139 lat (usec) : 1000=0.04% 00:26:05.139 lat (msec) : 2=0.44%, 4=0.68%, 10=1.31%, 20=1.71%, 50=7.49% 00:26:05.139 lat (msec) : 100=10.60%, 250=34.82%, 500=29.72%, 750=12.35%, 1000=0.84% 00:26:05.139 cpu : usr=0.75%, sys=0.87%, ctx=1709, majf=0, minf=1 00:26:05.139 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:26:05.139 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.139 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:05.139 issued rwts: total=0,2510,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.139 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:05.139 job7: (groupid=0, jobs=1): err= 0: pid=3882968: Sat Nov 2 11:38:04 2024 00:26:05.139 write: IOPS=253, BW=63.4MiB/s (66.4MB/s)(649MiB/10248msec); 0 zone resets 00:26:05.139 slat (usec): min=15, max=82043, avg=2240.85, stdev=8619.02 00:26:05.139 clat (usec): min=869, max=833482, avg=250156.27, stdev=233093.70 00:26:05.139 lat (usec): min=943, max=833536, avg=252397.12, stdev=235602.12 00:26:05.139 clat percentiles (usec): 00:26:05.139 | 1.00th=[ 1057], 5.00th=[ 1893], 10.00th=[ 3523], 20.00th=[ 10683], 00:26:05.139 | 30.00th=[ 58983], 40.00th=[135267], 50.00th=[168821], 60.00th=[250610], 00:26:05.139 | 70.00th=[375391], 80.00th=[513803], 90.00th=[616563], 95.00th=[675283], 00:26:05.139 | 99.00th=[767558], 99.50th=[801113], 99.90th=[817890], 99.95th=[834667], 00:26:05.139 | 99.99th=[834667] 00:26:05.139 bw ( KiB/s): min=24576, max=165888, per=8.01%, avg=64847.20, stdev=42976.67, samples=20 00:26:05.139 iops : min= 96, max= 648, avg=253.20, stdev=167.94, samples=20 00:26:05.139 lat (usec) : 1000=0.62% 00:26:05.139 lat (msec) : 2=4.81%, 4=6.43%, 10=7.32%, 20=5.62%, 50=4.24% 00:26:05.139 lat (msec) : 100=6.16%, 250=24.87%, 500=18.79%, 750=19.64%, 1000=1.50% 00:26:05.139 cpu : usr=0.63%, sys=1.09%, ctx=1929, majf=0, minf=1 00:26:05.139 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:26:05.139 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.139 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:05.139 issued rwts: 
total=0,2597,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.139 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:05.139 job8: (groupid=0, jobs=1): err= 0: pid=3882969: Sat Nov 2 11:38:04 2024 00:26:05.139 write: IOPS=284, BW=71.1MiB/s (74.6MB/s)(717MiB/10075msec); 0 zone resets 00:26:05.139 slat (usec): min=14, max=235628, avg=1918.71, stdev=7813.30 00:26:05.139 clat (usec): min=1373, max=982857, avg=222974.75, stdev=172713.99 00:26:05.139 lat (usec): min=1403, max=990782, avg=224893.46, stdev=174053.69 00:26:05.139 clat percentiles (msec): 00:26:05.139 | 1.00th=[ 6], 5.00th=[ 12], 10.00th=[ 20], 20.00th=[ 73], 00:26:05.139 | 30.00th=[ 134], 40.00th=[ 161], 50.00th=[ 192], 60.00th=[ 236], 00:26:05.139 | 70.00th=[ 266], 80.00th=[ 326], 90.00th=[ 468], 95.00th=[ 527], 00:26:05.139 | 99.00th=[ 894], 99.50th=[ 961], 99.90th=[ 978], 99.95th=[ 986], 00:26:05.139 | 99.99th=[ 986] 00:26:05.139 bw ( KiB/s): min=14848, max=155136, per=8.86%, avg=71742.80, stdev=34083.99, samples=20 00:26:05.139 iops : min= 58, max= 606, avg=280.20, stdev=133.15, samples=20 00:26:05.139 lat (msec) : 2=0.17%, 4=0.73%, 10=3.28%, 20=5.97%, 50=4.33% 00:26:05.139 lat (msec) : 100=11.17%, 250=39.36%, 500=27.77%, 750=5.41%, 1000=1.81% 00:26:05.139 cpu : usr=0.92%, sys=1.15%, ctx=1848, majf=0, minf=1 00:26:05.139 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:26:05.139 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.139 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:05.139 issued rwts: total=0,2866,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.139 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:05.139 job9: (groupid=0, jobs=1): err= 0: pid=3882970: Sat Nov 2 11:38:04 2024 00:26:05.139 write: IOPS=314, BW=78.6MiB/s (82.4MB/s)(804MiB/10231msec); 0 zone resets 00:26:05.139 slat (usec): min=21, max=93934, avg=1720.70, stdev=6854.88 00:26:05.139 clat (msec): min=3, max=919, avg=201.72, stdev=169.45 00:26:05.139 lat (msec): min=3, max=919, avg=203.44, stdev=171.11 00:26:05.139 clat percentiles (msec): 00:26:05.139 | 1.00th=[ 13], 5.00th=[ 21], 10.00th=[ 36], 20.00th=[ 62], 00:26:05.139 | 30.00th=[ 104], 40.00th=[ 136], 50.00th=[ 157], 60.00th=[ 192], 00:26:05.139 | 70.00th=[ 236], 80.00th=[ 296], 90.00th=[ 435], 95.00th=[ 542], 00:26:05.140 | 99.00th=[ 860], 99.50th=[ 902], 99.90th=[ 919], 99.95th=[ 919], 00:26:05.140 | 99.99th=[ 919] 00:26:05.140 bw ( KiB/s): min=16384, max=154624, per=9.96%, avg=80708.30, stdev=40764.69, samples=20 00:26:05.140 iops : min= 64, max= 604, avg=315.20, stdev=159.31, samples=20 00:26:05.140 lat (msec) : 4=0.03%, 10=0.34%, 20=4.60%, 50=10.91%, 100=13.28% 00:26:05.140 lat (msec) : 250=43.59%, 500=19.87%, 750=5.78%, 1000=1.59% 00:26:05.140 cpu : usr=1.06%, sys=1.27%, ctx=2113, majf=0, minf=1 00:26:05.140 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.0% 00:26:05.140 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.140 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:05.140 issued rwts: total=0,3216,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.140 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:05.140 job10: (groupid=0, jobs=1): err= 0: pid=3882971: Sat Nov 2 11:38:04 2024 00:26:05.140 write: IOPS=424, BW=106MiB/s (111MB/s)(1071MiB/10090msec); 0 zone resets 00:26:05.140 slat (usec): min=19, max=50918, avg=1648.06, stdev=4707.32 00:26:05.140 clat (msec): min=8, max=771, avg=148.98, 
stdev=129.89 00:26:05.140 lat (msec): min=9, max=771, avg=150.63, stdev=130.85 00:26:05.140 clat percentiles (msec): 00:26:05.140 | 1.00th=[ 26], 5.00th=[ 43], 10.00th=[ 44], 20.00th=[ 46], 00:26:05.140 | 30.00th=[ 53], 40.00th=[ 85], 50.00th=[ 113], 60.00th=[ 148], 00:26:05.140 | 70.00th=[ 167], 80.00th=[ 213], 90.00th=[ 296], 95.00th=[ 447], 00:26:05.140 | 99.00th=[ 625], 99.50th=[ 693], 99.90th=[ 743], 99.95th=[ 751], 00:26:05.140 | 99.99th=[ 768] 00:26:05.140 bw ( KiB/s): min=31232, max=361472, per=13.34%, avg=108063.50, stdev=86185.16, samples=20 00:26:05.140 iops : min= 122, max= 1412, avg=422.05, stdev=336.66, samples=20 00:26:05.140 lat (msec) : 10=0.05%, 20=0.58%, 50=28.82%, 100=13.75%, 250=42.10% 00:26:05.140 lat (msec) : 500=11.44%, 750=3.22%, 1000=0.05% 00:26:05.140 cpu : usr=1.35%, sys=1.52%, ctx=1570, majf=0, minf=1 00:26:05.140 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:26:05.140 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.140 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:05.140 issued rwts: total=0,4285,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.140 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:05.140 00:26:05.140 Run status group 0 (all jobs): 00:26:05.140 WRITE: bw=791MiB/s (829MB/s), 48.4MiB/s-110MiB/s (50.7MB/s-115MB/s), io=8109MiB (8503MB), run=10075-10251msec 00:26:05.140 00:26:05.140 Disk stats (read/write): 00:26:05.140 nvme0n1: ios=51/4049, merge=0/0, ticks=2011/1221749, in_queue=1223760, util=99.81% 00:26:05.140 nvme10n1: ios=45/5699, merge=0/0, ticks=2185/1216028, in_queue=1218213, util=99.90% 00:26:05.140 nvme1n1: ios=0/5447, merge=0/0, ticks=0/1239468, in_queue=1239468, util=97.54% 00:26:05.140 nvme2n1: ios=0/8655, merge=0/0, ticks=0/1222775, in_queue=1222775, util=97.65% 00:26:05.140 nvme3n1: ios=0/3900, merge=0/0, ticks=0/1247122, in_queue=1247122, util=97.73% 00:26:05.140 nvme4n1: ios=46/5558, merge=0/0, ticks=5401/1141255, in_queue=1146656, util=99.95% 00:26:05.140 nvme5n1: ios=0/4670, merge=0/0, ticks=0/1217368, in_queue=1217368, util=98.19% 00:26:05.140 nvme6n1: ios=0/5133, merge=0/0, ticks=0/1236973, in_queue=1236973, util=98.41% 00:26:05.140 nvme7n1: ios=0/5553, merge=0/0, ticks=0/1219648, in_queue=1219648, util=98.79% 00:26:05.140 nvme8n1: ios=42/6381, merge=0/0, ticks=2011/1235999, in_queue=1238010, util=99.93% 00:26:05.140 nvme9n1: ios=33/8334, merge=0/0, ticks=1188/1217131, in_queue=1218319, util=100.00% 00:26:05.140 11:38:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:26:05.140 11:38:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:26:05.140 11:38:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:05.140 11:38:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:05.140 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:05.140 11:38:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:26:05.140 11:38:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:26:05.140 11:38:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:26:05.140 11:38:04 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK1 00:26:05.140 11:38:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:26:05.140 11:38:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK1 00:26:05.140 11:38:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:26:05.140 11:38:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:05.140 11:38:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.140 11:38:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:05.140 11:38:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.140 11:38:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:05.140 11:38:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:26:05.140 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:26:05.140 11:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:26:05.140 11:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:26:05.140 11:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:26:05.140 11:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK2 00:26:05.140 11:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:26:05.140 11:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK2 00:26:05.140 11:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:26:05.140 11:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:05.140 11:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.140 11:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:05.140 11:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.140 11:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:05.140 11:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:26:05.140 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:26:05.140 11:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:26:05.140 11:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:26:05.140 11:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:26:05.140 11:38:05 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK3 00:26:05.140 11:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:26:05.140 11:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK3 00:26:05.140 11:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:26:05.140 11:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:26:05.140 11:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.140 11:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:05.140 11:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.140 11:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:05.140 11:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:26:05.399 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:26:05.399 11:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:26:05.399 11:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:26:05.399 11:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:26:05.399 11:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK4 00:26:05.399 11:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:26:05.399 11:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK4 00:26:05.399 11:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:26:05.399 11:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:26:05.399 11:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.399 11:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:05.399 11:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.399 11:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:05.399 11:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:26:05.658 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:26:05.658 11:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:26:05.658 11:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:26:05.658 11:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:26:05.658 11:38:05 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK5 00:26:05.658 11:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:26:05.658 11:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK5 00:26:05.658 11:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:26:05.658 11:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:26:05.658 11:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.658 11:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:05.658 11:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.658 11:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:05.658 11:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:26:05.658 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:26:05.658 11:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:26:05.659 11:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:26:05.659 11:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:26:05.659 11:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK6 00:26:05.659 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:26:05.659 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK6 00:26:05.659 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:26:05.659 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:26:05.659 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.659 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:05.659 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.659 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:05.659 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:26:05.917 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:26:05.918 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:26:05.918 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:26:05.918 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:26:05.918 11:38:06 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK7 00:26:05.918 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:26:05.918 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK7 00:26:05.918 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:26:05.918 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:26:05.918 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.918 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:05.918 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.918 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:05.918 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:26:06.177 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:26:06.177 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:26:06.177 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:26:06.177 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:26:06.177 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK8 00:26:06.177 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:26:06.177 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK8 00:26:06.177 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:26:06.177 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:26:06.177 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.177 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.177 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.177 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:06.177 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:26:06.177 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:26:06.177 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:26:06.177 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:26:06.177 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:26:06.177 11:38:06 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK9 00:26:06.177 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:26:06.177 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK9 00:26:06.436 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:26:06.436 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:26:06.436 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.436 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.436 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.436 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:06.436 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:26:06.436 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:26:06.436 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:26:06.436 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:26:06.436 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:26:06.436 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK10 00:26:06.436 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:26:06.436 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK10 00:26:06.436 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:26:06.436 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:26:06.436 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.436 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.436 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.436 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:06.436 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:26:06.436 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:26:06.436 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:26:06.436 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:26:06.436 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:26:06.436 
11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK11 00:26:06.436 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:26:06.436 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK11 00:26:06.436 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:26:06.436 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:26:06.436 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.436 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:06.697 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.697 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:26:06.697 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:26:06.697 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:26:06.697 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:06.697 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:26:06.697 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:06.697 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:26:06.697 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:06.697 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:06.697 rmmod nvme_tcp 00:26:06.697 rmmod nvme_fabrics 00:26:06.697 rmmod nvme_keyring 00:26:06.697 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:06.697 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:26:06.697 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:26:06.697 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@517 -- # '[' -n 3877939 ']' 00:26:06.697 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # killprocess 3877939 00:26:06.697 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@952 -- # '[' -z 3877939 ']' 00:26:06.697 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # kill -0 3877939 00:26:06.697 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@957 -- # uname 00:26:06.697 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:06.697 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3877939 00:26:06.697 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:06.697 11:38:06 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:06.697 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3877939' 00:26:06.698 killing process with pid 3877939 00:26:06.698 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@971 -- # kill 3877939 00:26:06.698 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@976 -- # wait 3877939 00:26:07.266 11:38:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:07.266 11:38:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:07.266 11:38:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:07.266 11:38:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:26:07.266 11:38:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-save 00:26:07.266 11:38:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-restore 00:26:07.266 11:38:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:07.266 11:38:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:07.266 11:38:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:07.266 11:38:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:07.266 11:38:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:07.266 11:38:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:09.238 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:09.238 00:26:09.238 real 1m0.684s 00:26:09.238 user 3m29.328s 00:26:09.238 sys 0m16.966s 00:26:09.238 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:09.238 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:09.238 ************************************ 00:26:09.238 END TEST nvmf_multiconnection 00:26:09.238 ************************************ 00:26:09.238 11:38:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:09.238 11:38:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:26:09.238 11:38:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:09.238 11:38:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:09.238 ************************************ 00:26:09.238 START TEST nvmf_initiator_timeout 00:26:09.238 ************************************ 00:26:09.238 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:09.238 * Looking for test storage... 
00:26:09.238 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:09.239 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:09.239 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1691 -- # lcov --version 00:26:09.239 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:09.239 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:09.239 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:09.239 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:09.239 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:09.239 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:26:09.239 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:26:09.239 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:26:09.239 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:26:09.239 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:26:09.239 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:26:09.239 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:26:09.239 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:09.496 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:26:09.496 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:26:09.496 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:09.496 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:09.496 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:26:09.496 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:26:09.496 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:09.496 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:26:09.496 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:26:09.496 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:26:09.496 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:26:09.496 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:09.496 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:26:09.496 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:26:09.496 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:09.496 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:09.496 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:26:09.496 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:09.496 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:09.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:09.496 --rc genhtml_branch_coverage=1 00:26:09.496 --rc genhtml_function_coverage=1 00:26:09.496 --rc genhtml_legend=1 00:26:09.496 --rc geninfo_all_blocks=1 00:26:09.496 --rc geninfo_unexecuted_blocks=1 00:26:09.496 00:26:09.496 ' 00:26:09.496 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:09.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:09.496 --rc genhtml_branch_coverage=1 00:26:09.496 --rc genhtml_function_coverage=1 00:26:09.496 --rc genhtml_legend=1 00:26:09.496 --rc geninfo_all_blocks=1 00:26:09.496 --rc geninfo_unexecuted_blocks=1 00:26:09.496 00:26:09.496 ' 00:26:09.496 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:09.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:09.496 --rc genhtml_branch_coverage=1 00:26:09.496 --rc genhtml_function_coverage=1 00:26:09.496 --rc genhtml_legend=1 00:26:09.496 --rc geninfo_all_blocks=1 00:26:09.496 --rc geninfo_unexecuted_blocks=1 00:26:09.496 00:26:09.496 ' 00:26:09.496 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:09.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:09.496 --rc genhtml_branch_coverage=1 00:26:09.496 --rc genhtml_function_coverage=1 00:26:09.496 --rc genhtml_legend=1 00:26:09.496 --rc geninfo_all_blocks=1 00:26:09.496 --rc geninfo_unexecuted_blocks=1 00:26:09.496 00:26:09.496 ' 00:26:09.496 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:09.496 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:26:09.496 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:09.496 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:09.496 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:09.496 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:09.496 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:09.496 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:09.496 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:09.497 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:09.497 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:09.497 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:09.497 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:09.497 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:09.497 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:09.497 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:09.497 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:09.497 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:09.497 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:09.497 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:26:09.497 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:09.497 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:09.497 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:09.497 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:09.497 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:09.497 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:09.497 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:26:09.497 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:09.497 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:26:09.497 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:09.497 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:09.497 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:09.497 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:09.497 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:09.497 11:38:09 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:09.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:09.497 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:09.497 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:09.497 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:09.497 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:09.497 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:09.497 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:26:09.497 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:09.497 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:09.497 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:09.497 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:09.497 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:09.497 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:09.497 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:09.497 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:09.497 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:09.497 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:09.497 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@309 -- # xtrace_disable 00:26:09.497 11:38:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:11.402 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:11.402 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # pci_devs=() 00:26:11.402 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:11.402 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:11.402 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:11.402 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:11.402 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:11.402 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # net_devs=() 00:26:11.402 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:11.402 11:38:11 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # e810=() 00:26:11.402 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # local -ga e810 00:26:11.402 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # x722=() 00:26:11.402 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # local -ga x722 00:26:11.402 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # mlx=() 00:26:11.402 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # local -ga mlx 00:26:11.402 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:11.402 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:11.402 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:11.402 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:11.402 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:11.402 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:11.402 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:11.402 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:11.402 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:11.402 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:11.402 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:11.402 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:11.402 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:11.402 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:11.402 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:11.402 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:11.402 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:11.402 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:11.402 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:11.402 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:11.402 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:11.403 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:11.403 11:38:11 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:11.403 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:11.403 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:11.403 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:11.403 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:11.403 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:11.403 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:11.403 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:11.403 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:11.403 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:11.403 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:11.403 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:11.403 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:11.403 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:11.403 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:11.403 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:11.403 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:11.403 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:11.403 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:11.403 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:11.403 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:11.403 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:11.403 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:11.403 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:11.403 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:11.403 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:11.403 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:11.403 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:11.403 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:11.403 11:38:11 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:11.403 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:11.403 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:11.403 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:11.403 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:11.403 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:11.403 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:11.403 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # is_hw=yes 00:26:11.403 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:11.403 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:11.403 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:11.403 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:11.403 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:11.403 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:11.403 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:11.403 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:11.403 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:11.403 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:11.403 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:11.403 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:11.403 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:11.403 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:11.403 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:11.403 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:11.403 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:11.403 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:11.403 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:11.403 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:11.403 11:38:11 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:11.403 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:11.403 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:11.403 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:11.403 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:11.403 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:11.403 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:11.403 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:26:11.403 00:26:11.403 --- 10.0.0.2 ping statistics --- 00:26:11.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:11.403 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:26:11.403 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:11.403 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:11.403 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:26:11.403 00:26:11.403 --- 10.0.0.1 ping statistics --- 00:26:11.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:11.403 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:26:11.403 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:11.403 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # return 0 00:26:11.403 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:11.403 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:11.403 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:11.403 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:11.403 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:11.403 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:11.403 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:11.662 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:26:11.662 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:11.662 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:11.662 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:11.662 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # nvmfpid=3886009 00:26:11.662 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:11.662 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # waitforlisten 3886009 00:26:11.662 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@833 -- # '[' -z 3886009 ']' 00:26:11.662 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:11.662 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:11.662 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:11.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:11.662 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:11.662 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:11.662 [2024-11-02 11:38:11.868073] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:26:11.662 [2024-11-02 11:38:11.868162] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:11.662 [2024-11-02 11:38:11.949697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:11.662 [2024-11-02 11:38:11.999204] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:11.662 [2024-11-02 11:38:11.999269] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:11.662 [2024-11-02 11:38:11.999287] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:11.662 [2024-11-02 11:38:11.999301] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:11.662 [2024-11-02 11:38:11.999313] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
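For reference, the target-side setup traced above and below can be reproduced by hand roughly as follows. This is a condensed sketch only: it assumes the interface names cvl_0_0/cvl_0_1 seen in this run, that the SPDK repository root is the working directory, and that rpc_cmd in the trace resolves to scripts/rpc.py talking to the default /var/tmp/spdk.sock; every command and argument is taken from the trace itself.
  # put the target-side port in its own namespace and address both ends (as nvmf_tcp_init does above)
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # start the target inside the namespace, then configure it over JSON-RPC
  # (rpc_cmd in the trace is assumed to map to scripts/rpc.py)
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # initiator side: connect from the default namespace, as the trace does below
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420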
00:26:11.662 [2024-11-02 11:38:12.000956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:11.662 [2024-11-02 11:38:12.001025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:11.662 [2024-11-02 11:38:12.001116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:11.662 [2024-11-02 11:38:12.001119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:11.920 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:11.920 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@866 -- # return 0 00:26:11.920 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:11.920 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:11.920 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:11.920 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:11.920 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:26:11.920 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:11.920 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.920 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:11.920 Malloc0 00:26:11.921 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.921 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:26:11.921 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.921 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:11.921 Delay0 00:26:11.921 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.921 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:11.921 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.921 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:11.921 [2024-11-02 11:38:12.191898] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:11.921 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.921 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:26:11.921 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.921 11:38:12 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:11.921 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.921 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:11.921 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.921 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:11.921 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.921 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:11.921 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.921 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:11.921 [2024-11-02 11:38:12.220161] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:11.921 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.921 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:12.854 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:26:12.854 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # local i=0 00:26:12.854 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:26:12.854 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:26:12.854 11:38:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # sleep 2 00:26:14.752 11:38:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:26:14.752 11:38:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:26:14.752 11:38:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:26:14.752 11:38:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:26:14.752 11:38:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:26:14.752 11:38:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # return 0 00:26:14.752 11:38:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=3886433 00:26:14.752 11:38:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 
1 -t write -r 60 -v 00:26:14.752 11:38:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:26:14.752 [global] 00:26:14.752 thread=1 00:26:14.752 invalidate=1 00:26:14.752 rw=write 00:26:14.752 time_based=1 00:26:14.752 runtime=60 00:26:14.752 ioengine=libaio 00:26:14.752 direct=1 00:26:14.752 bs=4096 00:26:14.752 iodepth=1 00:26:14.752 norandommap=0 00:26:14.752 numjobs=1 00:26:14.752 00:26:14.752 verify_dump=1 00:26:14.752 verify_backlog=512 00:26:14.752 verify_state_save=0 00:26:14.752 do_verify=1 00:26:14.752 verify=crc32c-intel 00:26:14.752 [job0] 00:26:14.752 filename=/dev/nvme0n1 00:26:14.752 Could not set queue depth (nvme0n1) 00:26:14.752 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:14.752 fio-3.35 00:26:14.752 Starting 1 thread 00:26:18.042 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:26:18.042 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.042 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:18.042 true 00:26:18.042 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.042 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:26:18.042 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.042 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:18.042 true 00:26:18.042 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.042 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:26:18.042 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.042 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:18.042 true 00:26:18.042 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.042 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:26:18.042 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.042 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:18.042 true 00:26:18.042 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.042 11:38:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:26:20.582 11:38:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:26:20.582 11:38:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.582 11:38:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout 
-- common/autotest_common.sh@10 -- # set +x 00:26:20.582 true 00:26:20.582 11:38:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.582 11:38:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:26:20.582 11:38:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.582 11:38:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:20.839 true 00:26:20.839 11:38:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.839 11:38:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:26:20.839 11:38:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.839 11:38:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:20.839 true 00:26:20.839 11:38:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.839 11:38:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:26:20.839 11:38:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.840 11:38:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:20.840 true 00:26:20.840 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.840 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:26:20.840 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 3886433 00:27:17.047 00:27:17.047 job0: (groupid=0, jobs=1): err= 0: pid=3886502: Sat Nov 2 11:39:15 2024 00:27:17.047 read: IOPS=53, BW=214KiB/s (219kB/s)(12.5MiB/60040msec) 00:27:17.047 slat (usec): min=4, max=5857, avg=16.24, stdev=103.43 00:27:17.047 clat (usec): min=273, max=40972k, avg=18359.58, stdev=723304.18 00:27:17.047 lat (usec): min=282, max=40972k, avg=18375.83, stdev=723304.22 00:27:17.047 clat percentiles (usec): 00:27:17.047 | 1.00th=[ 285], 5.00th=[ 302], 10.00th=[ 310], 00:27:17.047 | 20.00th=[ 330], 30.00th=[ 351], 40.00th=[ 363], 00:27:17.047 | 50.00th=[ 379], 60.00th=[ 392], 70.00th=[ 408], 00:27:17.047 | 80.00th=[ 441], 90.00th=[ 41157], 95.00th=[ 41681], 00:27:17.047 | 99.00th=[ 42206], 99.50th=[ 42206], 99.90th=[ 42206], 00:27:17.047 | 99.95th=[ 42206], 99.99th=[17112761] 00:27:17.047 write: IOPS=59, BW=239KiB/s (245kB/s)(14.0MiB/60040msec); 0 zone resets 00:27:17.047 slat (nsec): min=6608, max=78306, avg=15539.55, stdev=8958.64 00:27:17.047 clat (usec): min=194, max=4139, avg=275.23, stdev=102.64 00:27:17.047 lat (usec): min=203, max=4156, avg=290.77, stdev=104.95 00:27:17.047 clat percentiles (usec): 00:27:17.047 | 1.00th=[ 204], 5.00th=[ 219], 10.00th=[ 229], 20.00th=[ 243], 00:27:17.047 | 30.00th=[ 249], 40.00th=[ 255], 50.00th=[ 262], 60.00th=[ 269], 00:27:17.047 | 70.00th=[ 277], 80.00th=[ 297], 90.00th=[ 334], 95.00th=[ 371], 00:27:17.047 | 99.00th=[ 457], 99.50th=[ 474], 99.90th=[ 1270], 99.95th=[ 3261], 00:27:17.047 | 99.99th=[ 
4146] 00:27:17.047 bw ( KiB/s): min= 2088, max= 8192, per=100.00%, avg=4778.67, stdev=2099.77, samples=6 00:27:17.047 iops : min= 522, max= 2048, avg=1194.67, stdev=524.94, samples=6 00:27:17.047 lat (usec) : 250=16.80%, 500=76.17%, 750=0.93%, 1000=0.01% 00:27:17.047 lat (msec) : 2=0.03%, 4=0.03%, 10=0.01%, 50=6.01%, >=2000=0.01% 00:27:17.047 cpu : usr=0.12%, sys=0.22%, ctx=6795, majf=0, minf=1 00:27:17.047 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:17.047 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:17.047 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:17.047 issued rwts: total=3209,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:17.047 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:17.047 00:27:17.047 Run status group 0 (all jobs): 00:27:17.047 READ: bw=214KiB/s (219kB/s), 214KiB/s-214KiB/s (219kB/s-219kB/s), io=12.5MiB (13.1MB), run=60040-60040msec 00:27:17.047 WRITE: bw=239KiB/s (245kB/s), 239KiB/s-239KiB/s (245kB/s-245kB/s), io=14.0MiB (14.7MB), run=60040-60040msec 00:27:17.047 00:27:17.047 Disk stats (read/write): 00:27:17.047 nvme0n1: ios=3304/3584, merge=0/0, ticks=19016/964, in_queue=19980, util=99.91% 00:27:17.047 11:39:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:17.047 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:17.047 11:39:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:27:17.047 11:39:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1221 -- # local i=0 00:27:17.047 11:39:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:27:17.047 11:39:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:17.047 11:39:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:27:17.047 11:39:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:17.047 11:39:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1233 -- # return 0 00:27:17.047 11:39:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:27:17.047 11:39:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:27:17.047 nvmf hotplug test: fio successful as expected 00:27:17.047 11:39:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:17.047 11:39:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.047 11:39:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:17.047 11:39:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.047 11:39:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:27:17.047 11:39:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:27:17.047 11:39:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:27:17.047 11:39:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:17.047 11:39:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:27:17.047 11:39:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:17.047 11:39:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:27:17.047 11:39:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:17.047 11:39:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:17.047 rmmod nvme_tcp 00:27:17.047 rmmod nvme_fabrics 00:27:17.047 rmmod nvme_keyring 00:27:17.047 11:39:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:17.047 11:39:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:27:17.047 11:39:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:27:17.047 11:39:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@517 -- # '[' -n 3886009 ']' 00:27:17.047 11:39:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # killprocess 3886009 00:27:17.047 11:39:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # '[' -z 3886009 ']' 00:27:17.047 11:39:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # kill -0 3886009 00:27:17.047 11:39:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@957 -- # uname 00:27:17.047 11:39:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:17.047 11:39:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3886009 00:27:17.047 11:39:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:17.047 11:39:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:17.047 11:39:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3886009' 00:27:17.047 killing process with pid 3886009 00:27:17.047 11:39:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@971 -- # kill 3886009 00:27:17.047 11:39:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@976 -- # wait 3886009 00:27:17.047 11:39:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:17.047 11:39:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:17.047 11:39:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:17.047 11:39:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:27:17.047 11:39:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-save 00:27:17.047 11:39:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:27:17.047 11:39:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:27:17.047 11:39:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:17.047 11:39:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:17.047 11:39:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:17.047 11:39:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:17.047 11:39:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:17.614 11:39:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:17.614 00:27:17.614 real 1m8.303s 00:27:17.614 user 4m11.488s 00:27:17.614 sys 0m6.280s 00:27:17.614 11:39:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:17.614 11:39:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:17.614 ************************************ 00:27:17.614 END TEST nvmf_initiator_timeout 00:27:17.614 ************************************ 00:27:17.614 11:39:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:27:17.614 11:39:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:27:17.614 11:39:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:27:17.614 11:39:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:27:17.614 11:39:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:20.146 11:39:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:20.146 11:39:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:27:20.146 11:39:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:20.146 11:39:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:20.146 11:39:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:20.146 11:39:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:20.146 11:39:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:20.146 11:39:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:27:20.146 11:39:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:20.146 11:39:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:27:20.146 11:39:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:27:20.146 11:39:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:27:20.146 11:39:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:27:20.146 11:39:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:27:20.146 11:39:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:27:20.146 11:39:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:20.146 11:39:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
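For reference, the nvmf_initiator_timeout case summarized above exercises the timeout handling purely by tuning the delay bdev while fio writes in the background. A minimal sketch of that sequence, again assuming rpc_cmd maps to scripts/rpc.py and using only the commands and values recorded in the trace:
  # start the 60 s write/verify workload against the connected namespace in the background
  ./scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v &
  sleep 3
  # inflate the delay bdev latencies so in-flight I/O is held long enough to exercise the initiator timeout path
  ./scripts/rpc.py bdev_delay_update_latency Delay0 avg_read 31000000
  ./scripts/rpc.py bdev_delay_update_latency Delay0 avg_write 31000000
  ./scripts/rpc.py bdev_delay_update_latency Delay0 p99_read 31000000
  ./scripts/rpc.py bdev_delay_update_latency Delay0 p99_write 310000000
  sleep 3
  # restore the original latencies and let fio finish; a zero fio exit status is the pass criterion
  ./scripts/rpc.py bdev_delay_update_latency Delay0 avg_read 30
  ./scripts/rpc.py bdev_delay_update_latency Delay0 avg_write 30
  ./scripts/rpc.py bdev_delay_update_latency Delay0 p99_read 30
  ./scripts/rpc.py bdev_delay_update_latency Delay0 p99_write 30
  wait $!   # fio_status=0 expected, matching 'nvmf hotplug test: fio successful as expected' above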
00:27:20.146 11:39:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:20.146 11:39:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:20.146 11:39:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:20.146 11:39:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:20.146 11:39:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:20.146 11:39:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:20.146 11:39:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:20.146 11:39:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:20.146 11:39:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:20.146 11:39:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:20.146 11:39:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:20.146 11:39:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:20.146 11:39:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:20.146 11:39:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:20.146 11:39:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:20.146 11:39:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:20.146 11:39:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:20.146 11:39:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:20.146 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:20.146 11:39:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:20.146 11:39:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:20.146 11:39:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:20.146 11:39:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:20.146 11:39:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:20.146 11:39:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:20.146 11:39:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:20.146 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:20.146 11:39:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:20.146 11:39:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:20.146 11:39:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:20.146 11:39:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:20.146 11:39:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:20.146 11:39:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:20.146 11:39:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:20.146 11:39:19 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:20.146 11:39:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:20.146 11:39:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:20.146 11:39:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:20.146 11:39:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:20.146 11:39:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:20.146 11:39:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:20.146 11:39:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:20.146 11:39:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:20.146 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:20.146 11:39:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:20.146 11:39:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:20.146 11:39:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:20.146 11:39:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:20.146 11:39:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:20.146 11:39:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:20.146 11:39:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:20.146 11:39:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:20.146 11:39:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:20.146 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:20.146 11:39:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:20.146 11:39:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:20.146 11:39:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:20.146 11:39:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:27:20.146 11:39:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:20.146 11:39:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:27:20.146 11:39:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:20.146 11:39:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:20.146 ************************************ 00:27:20.146 START TEST nvmf_perf_adq 00:27:20.146 ************************************ 00:27:20.146 11:39:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:20.146 * Looking for test storage... 
00:27:20.146 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:20.146 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:20.146 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lcov --version 00:27:20.146 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:20.146 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:20.146 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:20.146 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:20.146 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:20.146 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:27:20.146 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:27:20.146 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:27:20.146 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:27:20.146 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:27:20.146 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:27:20.146 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:27:20.146 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:20.146 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:27:20.146 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:27:20.146 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:20.146 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:20.146 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:27:20.146 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:27:20.146 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:20.146 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:27:20.146 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:27:20.146 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:27:20.146 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:27:20.146 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:20.146 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:27:20.146 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:27:20.146 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:20.146 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:20.146 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:27:20.146 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:20.146 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:20.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:20.146 --rc genhtml_branch_coverage=1 00:27:20.146 --rc genhtml_function_coverage=1 00:27:20.146 --rc genhtml_legend=1 00:27:20.146 --rc geninfo_all_blocks=1 00:27:20.146 --rc geninfo_unexecuted_blocks=1 00:27:20.146 00:27:20.146 ' 00:27:20.146 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:20.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:20.146 --rc genhtml_branch_coverage=1 00:27:20.146 --rc genhtml_function_coverage=1 00:27:20.146 --rc genhtml_legend=1 00:27:20.146 --rc geninfo_all_blocks=1 00:27:20.146 --rc geninfo_unexecuted_blocks=1 00:27:20.146 00:27:20.146 ' 00:27:20.146 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:20.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:20.146 --rc genhtml_branch_coverage=1 00:27:20.146 --rc genhtml_function_coverage=1 00:27:20.146 --rc genhtml_legend=1 00:27:20.146 --rc geninfo_all_blocks=1 00:27:20.146 --rc geninfo_unexecuted_blocks=1 00:27:20.146 00:27:20.146 ' 00:27:20.146 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:20.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:20.146 --rc genhtml_branch_coverage=1 00:27:20.146 --rc genhtml_function_coverage=1 00:27:20.146 --rc genhtml_legend=1 00:27:20.146 --rc geninfo_all_blocks=1 00:27:20.146 --rc geninfo_unexecuted_blocks=1 00:27:20.146 00:27:20.146 ' 00:27:20.146 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:20.146 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 
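The trace above steps scripts/common.sh through a component-wise dotted-version check ("lt 1.15 2") before deciding which lcov coverage flags to export. The helper below is a minimal stand-alone re-sketch of that comparison style, included only to make the traced logic easier to follow; it is not the repository's cmp_versions implementation and assumes plain numeric dot/dash-separated versions with no suffix handling.

#!/usr/bin/env bash
# version_lt A B: succeed (return 0) when A sorts strictly below B,
# comparing dot/dash-separated components numerically, missing parts as 0.
version_lt() {
    local -a a b
    IFS=.- read -ra a <<< "$1"
    IFS=.- read -ra b <<< "$2"
    local i max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < max; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1   # equal versions are not "less than"
}

version_lt 1.15 2 && echo '1.15 < 2, enable the newer lcov flags'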
00:27:20.146 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:20.146 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:20.146 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:20.146 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:20.146 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:20.146 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:20.146 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:20.146 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:20.146 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:20.146 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:20.146 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:20.146 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:20.146 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:20.146 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:20.146 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:20.146 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:20.146 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:20.146 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:27:20.146 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:20.146 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:20.146 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:20.146 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.146 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.146 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.146 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:27:20.146 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.146 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:27:20.146 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:20.146 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:20.146 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:20.146 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:20.146 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:20.146 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:20.146 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:20.146 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:20.146 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:20.146 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:20.146 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:27:20.146 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:27:20.146 11:39:20 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:22.048 11:39:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:22.048 11:39:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:27:22.048 11:39:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:22.048 11:39:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:22.048 11:39:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:22.048 11:39:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:22.048 11:39:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:22.048 11:39:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:27:22.048 11:39:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:22.048 11:39:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:27:22.048 11:39:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:27:22.048 11:39:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:27:22.048 11:39:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:27:22.048 11:39:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:27:22.048 11:39:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:27:22.048 11:39:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:22.048 11:39:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:22.048 11:39:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:22.048 11:39:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:22.048 11:39:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:22.048 11:39:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:22.048 11:39:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:22.048 11:39:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:22.048 11:39:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:22.048 11:39:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:22.048 11:39:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:22.048 11:39:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:22.048 11:39:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:22.048 11:39:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:22.048 11:39:22 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:22.048 11:39:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:22.048 11:39:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:22.048 11:39:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:22.048 11:39:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:22.048 11:39:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:22.048 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:22.048 11:39:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:22.048 11:39:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:22.048 11:39:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:22.048 11:39:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:22.048 11:39:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:22.048 11:39:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:22.048 11:39:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:22.048 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:22.048 11:39:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:22.048 11:39:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:22.048 11:39:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:22.048 11:39:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:22.048 11:39:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:22.048 11:39:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:22.048 11:39:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:22.048 11:39:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:22.048 11:39:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:22.048 11:39:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:22.048 11:39:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:22.048 11:39:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:22.048 11:39:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:22.048 11:39:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:22.048 11:39:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:22.048 11:39:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:22.048 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:22.048 11:39:22 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:22.048 11:39:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:22.048 11:39:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:22.048 11:39:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:22.048 11:39:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:22.048 11:39:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:22.048 11:39:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:22.048 11:39:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:22.048 11:39:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:22.048 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:22.048 11:39:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:22.048 11:39:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:22.048 11:39:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:22.048 11:39:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:27:22.048 11:39:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:27:22.049 11:39:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:27:22.049 11:39:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:27:22.049 11:39:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:27:22.306 11:39:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:27:24.836 11:39:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:27:30.108 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:27:30.108 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:30.108 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:30.108 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:30.108 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:30.108 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:30.108 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:30.108 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:30.108 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:30.108 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:30.108 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:27:30.108 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:27:30.108 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:30.108 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:30.108 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:27:30.108 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:30.108 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:30.108 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:30.108 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:30.108 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:30.108 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:27:30.108 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:30.108 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:27:30.108 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:27:30.108 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:27:30.108 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:27:30.108 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:27:30.108 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:27:30.108 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:30.108 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:30.108 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:30.108 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:30.108 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:30.108 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:30.108 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:30.108 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:30.108 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:30.108 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:30.108 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:30.108 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:30.108 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:27:30.108 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:30.108 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:30.108 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:30.108 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:30.108 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:30.108 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:30.108 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:30.108 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:30.108 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:30.108 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:30.108 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:30.108 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:30.108 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:30.108 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:30.108 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:30.108 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:30.108 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:30.108 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:30.108 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:30.108 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:30.108 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:30.108 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:30.109 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:30.109 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:30.109 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:30.109 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:30.109 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:30.109 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:30.109 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:30.109 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:30.109 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:30.109 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:30.109 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:30.109 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:30.109 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:30.109 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:30.109 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:30.109 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:30.109 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:30.109 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:30.109 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:30.109 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:30.109 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:30.109 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:30.109 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:30.109 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:27:30.109 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:30.109 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:30.109 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:30.109 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:30.109 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:30.109 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:30.109 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:30.109 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:30.109 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:30.109 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:30.109 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:30.109 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:30.109 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:30.109 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:30.109 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:30.109 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:30.109 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:30.109 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:30.109 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:30.109 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:30.109 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:30.109 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:30.109 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:30.109 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:30.109 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:30.109 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:30.109 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:30.109 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.156 ms 00:27:30.109 00:27:30.109 --- 10.0.0.2 ping statistics --- 00:27:30.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:30.109 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:27:30.109 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:30.109 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:30.109 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.064 ms 00:27:30.109 00:27:30.109 --- 10.0.0.1 ping statistics --- 00:27:30.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:30.109 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:27:30.109 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:30.109 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:27:30.109 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:30.109 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:30.109 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:30.109 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:30.109 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:30.109 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:30.109 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:30.109 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:30.109 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:30.109 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:30.109 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:30.109 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3898753 00:27:30.109 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:30.109 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 3898753 00:27:30.109 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # '[' -z 3898753 ']' 00:27:30.109 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:30.109 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:30.109 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:30.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:30.109 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:30.109 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:30.109 [2024-11-02 11:39:30.238445] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 
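At this point the trace has built a self-contained TCP test-bed across the two E810 ports: 0000:0a:00.0 (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2 for the target side, its peer cvl_0_1 stays in the root namespace as the 10.0.0.1 initiator, an iptables rule opens TCP/4420, a ping in each direction confirms connectivity, and nvmf_tgt is then launched inside the namespace via ip netns exec. The snippet below is a hand-condensed sketch of that wiring using the interface names and addresses from the trace; it is not the repo's nvmf_tcp_init helper and omits its error handling and cleanup traps.

#!/usr/bin/env bash
# Target port goes into its own network namespace, initiator port stays
# in the root namespace; names and addresses mirror the trace above.
set -e
TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"

ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# same ACCEPT rule the trace installs for the NVMe/TCP listener port
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

ping -c 1 10.0.0.2                      # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1  # target -> initiator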
00:27:30.109 [2024-11-02 11:39:30.238519] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:30.109 [2024-11-02 11:39:30.313352] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:30.109 [2024-11-02 11:39:30.362396] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:30.109 [2024-11-02 11:39:30.362454] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:30.109 [2024-11-02 11:39:30.362473] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:30.109 [2024-11-02 11:39:30.362487] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:30.109 [2024-11-02 11:39:30.362498] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:30.109 [2024-11-02 11:39:30.364180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:30.109 [2024-11-02 11:39:30.364268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:30.109 [2024-11-02 11:39:30.364336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:30.109 [2024-11-02 11:39:30.364338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:30.109 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:30.109 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@866 -- # return 0 00:27:30.109 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:30.109 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:30.109 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:30.109 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:30.109 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:27:30.109 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:30.109 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:30.109 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.109 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:30.109 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.368 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:30.368 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:27:30.368 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.368 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:30.368 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.368 
11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:30.368 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.368 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:30.368 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.368 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:27:30.368 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.368 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:30.368 [2024-11-02 11:39:30.656037] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:30.368 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.368 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:30.368 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.368 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:30.368 Malloc1 00:27:30.368 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.368 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:30.368 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.368 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:30.368 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.368 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:30.368 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.368 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:30.368 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.368 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:30.368 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.368 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:30.368 [2024-11-02 11:39:30.722692] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:30.368 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.368 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=3898791 00:27:30.368 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:27:30.368 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:32.897 11:39:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:27:32.897 11:39:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.897 11:39:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:32.897 11:39:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.897 11:39:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:27:32.897 "tick_rate": 2700000000, 00:27:32.897 "poll_groups": [ 00:27:32.897 { 00:27:32.897 "name": "nvmf_tgt_poll_group_000", 00:27:32.897 "admin_qpairs": 1, 00:27:32.897 "io_qpairs": 1, 00:27:32.897 "current_admin_qpairs": 1, 00:27:32.897 "current_io_qpairs": 1, 00:27:32.897 "pending_bdev_io": 0, 00:27:32.897 "completed_nvme_io": 18766, 00:27:32.897 "transports": [ 00:27:32.897 { 00:27:32.897 "trtype": "TCP" 00:27:32.897 } 00:27:32.897 ] 00:27:32.897 }, 00:27:32.897 { 00:27:32.897 "name": "nvmf_tgt_poll_group_001", 00:27:32.897 "admin_qpairs": 0, 00:27:32.897 "io_qpairs": 1, 00:27:32.897 "current_admin_qpairs": 0, 00:27:32.897 "current_io_qpairs": 1, 00:27:32.897 "pending_bdev_io": 0, 00:27:32.897 "completed_nvme_io": 19435, 00:27:32.897 "transports": [ 00:27:32.897 { 00:27:32.897 "trtype": "TCP" 00:27:32.897 } 00:27:32.897 ] 00:27:32.897 }, 00:27:32.897 { 00:27:32.897 "name": "nvmf_tgt_poll_group_002", 00:27:32.897 "admin_qpairs": 0, 00:27:32.897 "io_qpairs": 1, 00:27:32.897 "current_admin_qpairs": 0, 00:27:32.897 "current_io_qpairs": 1, 00:27:32.897 "pending_bdev_io": 0, 00:27:32.897 "completed_nvme_io": 18932, 00:27:32.897 "transports": [ 00:27:32.897 { 00:27:32.897 "trtype": "TCP" 00:27:32.897 } 00:27:32.897 ] 00:27:32.897 }, 00:27:32.897 { 00:27:32.897 "name": "nvmf_tgt_poll_group_003", 00:27:32.897 "admin_qpairs": 0, 00:27:32.897 "io_qpairs": 1, 00:27:32.897 "current_admin_qpairs": 0, 00:27:32.897 "current_io_qpairs": 1, 00:27:32.897 "pending_bdev_io": 0, 00:27:32.897 "completed_nvme_io": 18458, 00:27:32.897 "transports": [ 00:27:32.897 { 00:27:32.897 "trtype": "TCP" 00:27:32.897 } 00:27:32.897 ] 00:27:32.897 } 00:27:32.897 ] 00:27:32.897 }' 00:27:32.897 11:39:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:27:32.897 11:39:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:27:32.897 11:39:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:27:32.897 11:39:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:27:32.897 11:39:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 3898791 00:27:41.036 Initializing NVMe Controllers 00:27:41.037 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:41.037 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:41.037 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:41.037 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:41.037 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with 
lcore 7 00:27:41.037 Initialization complete. Launching workers. 00:27:41.037 ======================================================== 00:27:41.037 Latency(us) 00:27:41.037 Device Information : IOPS MiB/s Average min max 00:27:41.037 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10271.50 40.12 6232.27 2081.01 10660.75 00:27:41.037 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10706.10 41.82 5977.93 2618.67 8468.37 00:27:41.037 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10459.40 40.86 6118.86 3242.73 8736.52 00:27:41.037 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10355.70 40.45 6181.79 2010.36 9838.45 00:27:41.037 ======================================================== 00:27:41.037 Total : 41792.69 163.25 6126.23 2010.36 10660.75 00:27:41.037 00:27:41.037 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:27:41.037 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:41.038 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:27:41.038 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:41.038 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:27:41.038 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:41.038 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:41.038 rmmod nvme_tcp 00:27:41.038 rmmod nvme_fabrics 00:27:41.038 rmmod nvme_keyring 00:27:41.038 11:39:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:41.038 11:39:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:27:41.038 11:39:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:27:41.038 11:39:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3898753 ']' 00:27:41.038 11:39:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3898753 00:27:41.038 11:39:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' -z 3898753 ']' 00:27:41.038 11:39:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # kill -0 3898753 00:27:41.038 11:39:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # uname 00:27:41.038 11:39:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:41.038 11:39:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3898753 00:27:41.038 11:39:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:41.038 11:39:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:41.038 11:39:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3898753' 00:27:41.038 killing process with pid 3898753 00:27:41.038 11:39:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@971 -- # kill 3898753 00:27:41.038 11:39:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@976 -- # wait 3898753 00:27:41.038 11:39:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:41.038 11:39:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:41.038 11:39:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:41.038 11:39:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:27:41.039 11:39:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:27:41.039 11:39:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:41.039 11:39:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:27:41.039 11:39:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:41.039 11:39:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:41.039 11:39:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:41.039 11:39:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:41.039 11:39:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:42.954 11:39:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:42.954 11:39:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:27:42.954 11:39:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:27:42.954 11:39:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:27:43.896 11:39:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:27:46.445 11:39:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:27:51.753 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:27:51.753 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:51.753 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:51.753 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:51.753 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:51.753 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:51.753 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:51.753 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:51.753 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:51.753 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:51.753 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:51.753 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:27:51.753 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:51.753 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:27:51.753 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:27:51.753 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:51.753 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:51.753 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:51.753 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:51.753 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:51.753 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:27:51.753 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:51.753 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:27:51.753 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:27:51.753 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:27:51.753 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:27:51.753 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:27:51.753 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:27:51.753 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:51.753 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:51.753 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:51.753 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:51.753 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:51.753 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:51.753 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:51.753 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:51.753 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:51.753 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:51.753 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:51.753 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:51.753 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:51.753 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:51.753 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:51.753 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:51.753 11:39:51 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:51.754 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:51.754 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:51.754 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:51.754 11:39:51 
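For readers following the trace: the block above is nvmf/common.sh classifying the PCI NICs by vendor:device ID (both ports here are Intel E810, 0x8086:0x159b, bound to ice) and then resolving each function's kernel interface through sysfs. A minimal standalone sketch of that lookup, using the BDF reported in this run (illustrative only, not the script's own helper):

  pci=0000:0a:00.0                                   # first E810 port found above
  shopt -s nullglob
  for path in "/sys/bus/pci/devices/$pci/net/"*; do  # same glob the trace performs
      dev=${path##*/}                                # strip the sysfs prefix -> cvl_0_0
      state=$(cat "/sys/class/net/$dev/operstate" 2>/dev/null)
      [[ $state == up ]] && echo "Found net devices under $pci: $dev"
  done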
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:51.754 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:51.754 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:51.754 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:27:51.754 00:27:51.754 --- 10.0.0.2 ping statistics --- 00:27:51.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:51.754 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:51.754 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:51.754 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:27:51.754 00:27:51.754 --- 10.0.0.1 ping statistics --- 00:27:51.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:51.754 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:27:51.754 net.core.busy_poll = 1 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 
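To condense the nvmf_tcp_init sequence that just ran: the target-side port is moved into a private network namespace so the target (10.0.0.2 on cvl_0_0) and the initiator (10.0.0.1 on cvl_0_1) can talk over real hardware on a single host. Replayed by hand with this run's interface names:

  ip netns add cvl_0_0_ns_spdk                      # namespace that will host nvmf_tgt
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # target port moves into it
  ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator address stays on the host
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP port; the comment tags the rule so cleanup can strip it later
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                # host -> namespace reachability
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # namespace -> host reachability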
00:27:51.754 net.core.busy_read = 1 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3901413 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 3901413 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # '[' -z 3901413 ']' 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:51.754 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:51.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:51.755 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:51.755 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:51.755 [2024-11-02 11:39:51.700208] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:27:51.755 [2024-11-02 11:39:51.700338] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:51.755 [2024-11-02 11:39:51.774199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:51.755 [2024-11-02 11:39:51.821301] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
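The adq_configure_driver step above steers NVMe/TCP traffic (dst 10.0.0.2, TCP port 4420) into a dedicated hardware traffic class on the E810 and enables busy polling; scripts/perf/nvmf/set_xps_rxqs then pins XPS to the matching queues. Stripped of the namespace wrapper, the configuration it applied is:

  dev=cvl_0_0
  ethtool --offload $dev hw-tc-offload on                        # let the ice driver offload tc
  ethtool --set-priv-flags $dev channel-pkt-inspect-optimize off
  sysctl -w net.core.busy_poll=1                                 # busy-poll sockets instead of sleeping
  sysctl -w net.core.busy_read=1
  tc qdisc add dev $dev root mqprio num_tc 2 map 0 1 \
      queues 2@0 2@2 hw 1 mode channel                           # TC0 = queues 0-1, TC1 = queues 2-3
  tc qdisc add dev $dev ingress
  tc filter add dev $dev protocol ip parent ffff: prio 1 flower \
      dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1   # NVMe/TCP flows -> TC1 in hardware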
00:27:51.755 [2024-11-02 11:39:51.821375] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:51.755 [2024-11-02 11:39:51.821390] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:51.755 [2024-11-02 11:39:51.821401] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:51.755 [2024-11-02 11:39:51.821410] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:51.755 [2024-11-02 11:39:51.822907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:51.755 [2024-11-02 11:39:51.822974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:51.755 [2024-11-02 11:39:51.823038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:51.755 [2024-11-02 11:39:51.823041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:51.755 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:51.755 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@866 -- # return 0 00:27:51.755 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:51.755 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:51.755 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:51.755 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:51.755 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:27:51.755 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:51.755 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:51.755 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.755 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:51.755 11:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.755 11:39:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:51.755 11:39:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:27:51.755 11:39:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.755 11:39:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:51.755 11:39:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.755 11:39:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:51.755 11:39:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.755 11:39:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:51.755 11:39:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.755 11:39:52 
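adq_configure_nvmf_target begins above with the socket-layer half of ADQ. rpc_cmd in these traces talks to the app's RPC socket (/var/tmp/spdk.sock); expressed directly with scripts/rpc.py from the spdk checkout, the same calls are roughly:

  rpc=./scripts/rpc.py                                     # run from the spdk repo root
  impl=$($rpc sock_get_default_impl | jq -r .impl_name)    # "posix" in this run
  $rpc sock_impl_set_options -i "$impl" \
      --enable-placement-id 1 --enable-zerocopy-send-server   # placement-id mode 1 + zero-copy send (ADQ hints)
  $rpc framework_start_init                                 # leave --wait-for-rpc mode and finish startup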
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:27:51.755 11:39:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.755 11:39:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:51.755 [2024-11-02 11:39:52.113807] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:51.755 11:39:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.755 11:39:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:51.755 11:39:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.755 11:39:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:52.014 Malloc1 00:27:52.014 11:39:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.014 11:39:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:52.014 11:39:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.014 11:39:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:52.014 11:39:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.014 11:39:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:52.014 11:39:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.014 11:39:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:52.014 11:39:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.014 11:39:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:52.014 11:39:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.014 11:39:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:52.014 [2024-11-02 11:39:52.183636] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:52.014 11:39:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.014 11:39:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=3901557 00:27:52.014 11:39:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:27:52.014 11:39:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:53.917 11:39:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:27:53.917 11:39:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.917 11:39:54 
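Collecting the target-provisioning RPCs from the trace into one sequence: a TCP transport created with --sock-priority 1 so accepted connections inherit the ADQ traffic class, one 64 MB malloc namespace, and a listener on the namespaced address; the load is then generated from the host side with spdk_nvme_perf on cores 4-7 (-c 0xF0):

  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
  $rpc bdev_malloc_create 64 512 -b Malloc1                  # 64 MB bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'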
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:53.917 11:39:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.917 11:39:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:27:53.917 "tick_rate": 2700000000, 00:27:53.917 "poll_groups": [ 00:27:53.917 { 00:27:53.917 "name": "nvmf_tgt_poll_group_000", 00:27:53.917 "admin_qpairs": 1, 00:27:53.917 "io_qpairs": 1, 00:27:53.917 "current_admin_qpairs": 1, 00:27:53.917 "current_io_qpairs": 1, 00:27:53.917 "pending_bdev_io": 0, 00:27:53.917 "completed_nvme_io": 24588, 00:27:53.917 "transports": [ 00:27:53.917 { 00:27:53.917 "trtype": "TCP" 00:27:53.917 } 00:27:53.917 ] 00:27:53.917 }, 00:27:53.917 { 00:27:53.917 "name": "nvmf_tgt_poll_group_001", 00:27:53.917 "admin_qpairs": 0, 00:27:53.917 "io_qpairs": 3, 00:27:53.917 "current_admin_qpairs": 0, 00:27:53.917 "current_io_qpairs": 3, 00:27:53.917 "pending_bdev_io": 0, 00:27:53.917 "completed_nvme_io": 24550, 00:27:53.917 "transports": [ 00:27:53.917 { 00:27:53.917 "trtype": "TCP" 00:27:53.917 } 00:27:53.917 ] 00:27:53.917 }, 00:27:53.917 { 00:27:53.917 "name": "nvmf_tgt_poll_group_002", 00:27:53.917 "admin_qpairs": 0, 00:27:53.917 "io_qpairs": 0, 00:27:53.917 "current_admin_qpairs": 0, 00:27:53.917 "current_io_qpairs": 0, 00:27:53.917 "pending_bdev_io": 0, 00:27:53.917 "completed_nvme_io": 0, 00:27:53.917 "transports": [ 00:27:53.917 { 00:27:53.917 "trtype": "TCP" 00:27:53.917 } 00:27:53.917 ] 00:27:53.917 }, 00:27:53.917 { 00:27:53.917 "name": "nvmf_tgt_poll_group_003", 00:27:53.917 "admin_qpairs": 0, 00:27:53.917 "io_qpairs": 0, 00:27:53.917 "current_admin_qpairs": 0, 00:27:53.917 "current_io_qpairs": 0, 00:27:53.917 "pending_bdev_io": 0, 00:27:53.917 "completed_nvme_io": 0, 00:27:53.917 "transports": [ 00:27:53.917 { 00:27:53.917 "trtype": "TCP" 00:27:53.917 } 00:27:53.917 ] 00:27:53.917 } 00:27:53.917 ] 00:27:53.917 }' 00:27:53.917 11:39:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:27:53.917 11:39:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:27:53.917 11:39:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:27:53.917 11:39:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:27:53.917 11:39:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 3901557 00:28:02.037 Initializing NVMe Controllers 00:28:02.037 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:02.037 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:02.037 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:02.037 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:02.037 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:28:02.037 Initialization complete. Launching workers. 
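The nvmf_get_stats / jq check above (issued while the perf job is still running; its results follow below) is the actual ADQ assertion: the target runs four poll groups, but the hardware flow steering should have concentrated the I/O qpairs, so at least two poll groups must be idle. Condensed, with an illustrative failure message of my own:

  count=$(./scripts/rpc.py nvmf_get_stats \
            | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' \
            | wc -l)                               # one output line per idle poll group
  if [[ $count -lt 2 ]]; then
      echo "ADQ check failed: only $count idle poll groups"   # this run saw count=2, so it passed
  fi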
00:28:02.037 ======================================================== 00:28:02.037 Latency(us) 00:28:02.037 Device Information : IOPS MiB/s Average min max 00:28:02.037 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4448.90 17.38 14393.00 2006.70 63196.64 00:28:02.037 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 12891.00 50.36 4964.10 1212.81 48266.71 00:28:02.037 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4080.80 15.94 15689.14 3148.94 63683.79 00:28:02.037 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4368.50 17.06 14658.54 1872.25 63343.48 00:28:02.037 ======================================================== 00:28:02.037 Total : 25789.20 100.74 9929.94 1212.81 63683.79 00:28:02.037 00:28:02.037 11:40:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:28:02.037 11:40:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:02.037 11:40:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:28:02.037 11:40:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:02.037 11:40:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:28:02.037 11:40:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:02.037 11:40:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:02.037 rmmod nvme_tcp 00:28:02.037 rmmod nvme_fabrics 00:28:02.037 rmmod nvme_keyring 00:28:02.037 11:40:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:02.037 11:40:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:28:02.037 11:40:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:28:02.037 11:40:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3901413 ']' 00:28:02.037 11:40:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3901413 00:28:02.037 11:40:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' -z 3901413 ']' 00:28:02.037 11:40:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # kill -0 3901413 00:28:02.037 11:40:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # uname 00:28:02.037 11:40:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:02.037 11:40:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3901413 00:28:02.037 11:40:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:02.037 11:40:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:02.037 11:40:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3901413' 00:28:02.037 killing process with pid 3901413 00:28:02.037 11:40:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@971 -- # kill 3901413 00:28:02.037 11:40:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@976 -- # wait 3901413 00:28:02.296 11:40:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:02.296 
11:40:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:02.296 11:40:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:02.296 11:40:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:28:02.296 11:40:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:28:02.296 11:40:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:02.296 11:40:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:28:02.296 11:40:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:02.296 11:40:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:02.296 11:40:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:02.296 11:40:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:02.296 11:40:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:05.589 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:05.589 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:28:05.589 00:28:05.589 real 0m45.717s 00:28:05.589 user 2m34.479s 00:28:05.589 sys 0m11.775s 00:28:05.589 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:05.589 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:05.589 ************************************ 00:28:05.589 END TEST nvmf_perf_adq 00:28:05.589 ************************************ 00:28:05.589 11:40:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:05.589 11:40:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:28:05.589 11:40:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:05.589 11:40:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:05.589 ************************************ 00:28:05.589 START TEST nvmf_shutdown 00:28:05.589 ************************************ 00:28:05.589 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:05.589 * Looking for test storage... 
00:28:05.589 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:05.589 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:05.590 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:28:05.590 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:05.590 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:05.590 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:05.590 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:05.590 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:05.590 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:28:05.590 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:28:05.590 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:28:05.590 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:28:05.590 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:28:05.590 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:28:05.590 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:28:05.590 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:05.590 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:28:05.590 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:28:05.590 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:05.590 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:05.590 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:28:05.590 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:28:05.590 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:05.590 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:28:05.590 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:28:05.590 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:28:05.590 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:28:05.590 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:05.590 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:28:05.590 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:28:05.590 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:05.590 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:05.590 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:28:05.590 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:05.590 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:05.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:05.590 --rc genhtml_branch_coverage=1 00:28:05.590 --rc genhtml_function_coverage=1 00:28:05.590 --rc genhtml_legend=1 00:28:05.590 --rc geninfo_all_blocks=1 00:28:05.590 --rc geninfo_unexecuted_blocks=1 00:28:05.590 00:28:05.590 ' 00:28:05.590 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:05.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:05.590 --rc genhtml_branch_coverage=1 00:28:05.590 --rc genhtml_function_coverage=1 00:28:05.590 --rc genhtml_legend=1 00:28:05.590 --rc geninfo_all_blocks=1 00:28:05.590 --rc geninfo_unexecuted_blocks=1 00:28:05.590 00:28:05.590 ' 00:28:05.590 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:05.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:05.590 --rc genhtml_branch_coverage=1 00:28:05.590 --rc genhtml_function_coverage=1 00:28:05.590 --rc genhtml_legend=1 00:28:05.590 --rc geninfo_all_blocks=1 00:28:05.590 --rc geninfo_unexecuted_blocks=1 00:28:05.590 00:28:05.590 ' 00:28:05.590 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:05.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:05.590 --rc genhtml_branch_coverage=1 00:28:05.590 --rc genhtml_function_coverage=1 00:28:05.590 --rc genhtml_legend=1 00:28:05.590 --rc geninfo_all_blocks=1 00:28:05.590 --rc geninfo_unexecuted_blocks=1 00:28:05.590 00:28:05.590 ' 00:28:05.590 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:05.590 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
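The scripts/common.sh activity above is just a version gate: lt 1.15 2 asks whether the installed lcov predates 2.x and therefore needs the LCOV_OPTS compatibility flags exported next. A self-contained stand-in that gives the same answer, using sort -V instead of the script's field-by-field loop (an equivalent technique, not the script's own code):

  version_lt() {   # true when $1 sorts strictly before $2 as a version string
      [ "$1" != "$2" ] && [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
  }
  version_lt 1.15 2 && echo "lcov < 2: use the branch/function coverage LCOV_OPTS"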
00:28:05.590 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:05.590 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:05.590 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:05.590 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:05.590 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:05.590 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:05.590 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:05.590 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:05.590 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:05.590 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:05.590 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:05.590 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:05.590 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:05.590 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:05.590 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:05.590 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:05.590 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:05.590 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:28:05.590 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:05.590 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:05.590 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:05.590 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:05.590 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:05.590 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:05.590 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:28:05.590 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:05.590 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:28:05.590 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:05.590 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:05.590 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:05.590 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:05.590 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:05.590 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:05.590 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:05.590 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:05.590 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:05.590 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:05.590 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:05.590 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:05.590 11:40:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:28:05.590 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:28:05.590 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:05.591 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:05.591 ************************************ 00:28:05.591 START TEST nvmf_shutdown_tc1 00:28:05.591 ************************************ 00:28:05.591 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc1 00:28:05.591 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:28:05.591 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:05.591 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:05.591 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:05.591 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:05.591 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:05.591 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:05.591 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:05.591 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:05.591 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:05.591 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:05.591 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:05.591 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:05.591 11:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:07.498 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:07.498 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:07.498 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:07.498 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:07.498 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:07.498 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:07.498 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:07.498 11:40:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:28:07.498 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:07.498 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:28:07.498 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:28:07.498 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:28:07.498 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:28:07.498 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:28:07.498 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:07.498 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:07.498 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:07.498 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:07.498 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:07.498 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:07.498 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:07.498 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:07.498 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:07.498 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:07.498 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:07.498 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:07.498 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:07.498 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:07.498 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:07.498 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:07.498 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:07.498 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:07.498 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:07.498 11:40:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:07.498 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:07.499 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:07.499 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:07.499 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:07.499 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:07.499 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:07.499 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:07.499 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:07.499 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:07.499 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:07.499 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:07.499 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:07.499 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:07.499 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:07.499 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:07.499 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:07.499 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:07.499 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:07.499 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:07.499 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:07.499 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:07.499 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:07.499 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:07.499 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:07.499 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:07.499 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:07.499 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:07.499 11:40:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:07.499 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:07.499 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:07.499 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:07.499 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:07.499 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:07.499 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:07.499 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:07.499 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:07.499 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:07.499 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:07.499 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:07.499 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:07.499 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:07.499 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:07.499 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:07.499 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:07.499 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:07.499 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:07.499 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:07.499 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:07.499 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:07.499 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:07.499 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:07.499 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:07.499 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:07.499 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:28:07.499 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:07.499 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:07.499 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:07.499 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:07.758 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:07.758 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:07.758 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:07.758 11:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:07.758 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:07.758 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:07.758 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:07.758 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:07.758 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:07.758 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:28:07.758 00:28:07.758 --- 10.0.0.2 ping statistics --- 00:28:07.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:07.758 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:28:07.758 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:07.758 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:07.758 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:28:07.758 00:28:07.758 --- 10.0.0.1 ping statistics --- 00:28:07.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:07.758 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:28:07.758 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:07.758 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:28:07.758 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:07.758 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:07.758 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:07.758 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:07.758 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:07.758 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:07.758 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:07.758 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:07.758 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:07.758 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:07.758 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:07.758 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=3904854 00:28:07.758 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:07.758 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 3904854 00:28:07.758 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # '[' -z 3904854 ']' 00:28:07.758 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:07.758 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:07.758 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:07.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
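(Editor's recap, before the target's startup messages below.) The nvmf_tcp_init sequence traced above reduces to moving the target-side port into a private network namespace while the initiator-side port stays in the root namespace; the nvmf target is then launched inside that namespace, which is why NVMF_APP is prefixed with the ip netns exec wrapper. A condensed sketch, using the cvl_0_0/cvl_0_1 names and 10.0.0.0/24 addressing from this run (workspace paths shortened):

# Target port goes into its own namespace; initiator port stays in the root namespace.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                   # NVMF_INITIATOR_IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # NVMF_FIRST_TARGET_IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                                    # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                      # target -> initiator
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &

The two pings confirm reachability in both directions across the namespace boundary before any NVMe/TCP traffic is attempted.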
00:28:07.758 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:07.758 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:07.758 [2024-11-02 11:40:08.113289] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:28:07.758 [2024-11-02 11:40:08.113363] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:08.017 [2024-11-02 11:40:08.190865] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:08.017 [2024-11-02 11:40:08.239125] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:08.017 [2024-11-02 11:40:08.239178] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:08.017 [2024-11-02 11:40:08.239207] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:08.017 [2024-11-02 11:40:08.239218] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:08.017 [2024-11-02 11:40:08.239228] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:08.017 [2024-11-02 11:40:08.240784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:08.017 [2024-11-02 11:40:08.240843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:08.017 [2024-11-02 11:40:08.240908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:08.017 [2024-11-02 11:40:08.240911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:08.017 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:08.017 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@866 -- # return 0 00:28:08.017 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:08.017 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:08.017 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:08.017 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:08.017 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:08.017 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.017 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:08.017 [2024-11-02 11:40:08.376666] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:08.017 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.017 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:08.017 11:40:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:08.017 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:08.017 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:08.017 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:08.017 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:08.017 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:08.017 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:08.017 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:08.017 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:08.017 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:08.017 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:08.017 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:08.017 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:08.017 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:08.017 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:08.017 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:08.017 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:08.017 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:08.017 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:08.017 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:08.017 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:08.017 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:08.017 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:08.017 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:08.017 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:08.017 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.017 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:08.278 Malloc1 
00:28:08.278 [2024-11-02 11:40:08.466160] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:08.278 Malloc2 00:28:08.278 Malloc3 00:28:08.278 Malloc4 00:28:08.278 Malloc5 00:28:08.537 Malloc6 00:28:08.537 Malloc7 00:28:08.537 Malloc8 00:28:08.537 Malloc9 00:28:08.537 Malloc10 00:28:08.537 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.537 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:08.537 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:08.537 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:08.797 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=3904960 00:28:08.797 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 3904960 /var/tmp/bdevperf.sock 00:28:08.797 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # '[' -z 3904960 ']' 00:28:08.797 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:28:08.797 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:08.797 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:08.797 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:08.797 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:28:08.797 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:08.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
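The create_subsystems loop above only shows cat appending to rpcs.txt; the batch itself is not echoed in this excerpt. Judging from the Malloc1-Malloc10 bdevs that appear and the cnode1-cnode10 subsystems referenced later in the generated JSON, each iteration plausibly appends a block like the one below, which rpc_cmd then feeds to scripts/rpc.py in a single batch. The RPC names are the standard SPDK rpc.py commands; the exact sizes, serial numbers and flags used by shutdown.sh are an assumption here, not visible in this log.

# Hypothetical per-subsystem batch entry (i = 1); sizes and serial are illustrative.
bdev_malloc_create -b Malloc1 128 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420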
00:28:08.797 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:28:08.797 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:08.797 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:08.797 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:08.797 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:08.797 { 00:28:08.797 "params": { 00:28:08.797 "name": "Nvme$subsystem", 00:28:08.797 "trtype": "$TEST_TRANSPORT", 00:28:08.797 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:08.797 "adrfam": "ipv4", 00:28:08.797 "trsvcid": "$NVMF_PORT", 00:28:08.797 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:08.797 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:08.797 "hdgst": ${hdgst:-false}, 00:28:08.797 "ddgst": ${ddgst:-false} 00:28:08.797 }, 00:28:08.797 "method": "bdev_nvme_attach_controller" 00:28:08.797 } 00:28:08.797 EOF 00:28:08.797 )") 00:28:08.797 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:08.797 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:08.797 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:08.797 { 00:28:08.797 "params": { 00:28:08.797 "name": "Nvme$subsystem", 00:28:08.797 "trtype": "$TEST_TRANSPORT", 00:28:08.797 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:08.797 "adrfam": "ipv4", 00:28:08.797 "trsvcid": "$NVMF_PORT", 00:28:08.797 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:08.797 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:08.797 "hdgst": ${hdgst:-false}, 00:28:08.797 "ddgst": ${ddgst:-false} 00:28:08.797 }, 00:28:08.797 "method": "bdev_nvme_attach_controller" 00:28:08.797 } 00:28:08.797 EOF 00:28:08.797 )") 00:28:08.797 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:08.797 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:08.798 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:08.798 { 00:28:08.798 "params": { 00:28:08.798 "name": "Nvme$subsystem", 00:28:08.798 "trtype": "$TEST_TRANSPORT", 00:28:08.798 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:08.798 "adrfam": "ipv4", 00:28:08.798 "trsvcid": "$NVMF_PORT", 00:28:08.798 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:08.798 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:08.798 "hdgst": ${hdgst:-false}, 00:28:08.798 "ddgst": ${ddgst:-false} 00:28:08.798 }, 00:28:08.798 "method": "bdev_nvme_attach_controller" 00:28:08.798 } 00:28:08.798 EOF 00:28:08.798 )") 00:28:08.798 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:08.798 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:08.798 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:08.798 { 00:28:08.798 "params": { 00:28:08.798 "name": "Nvme$subsystem", 00:28:08.798 
"trtype": "$TEST_TRANSPORT", 00:28:08.798 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:08.798 "adrfam": "ipv4", 00:28:08.798 "trsvcid": "$NVMF_PORT", 00:28:08.798 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:08.798 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:08.798 "hdgst": ${hdgst:-false}, 00:28:08.798 "ddgst": ${ddgst:-false} 00:28:08.798 }, 00:28:08.798 "method": "bdev_nvme_attach_controller" 00:28:08.798 } 00:28:08.798 EOF 00:28:08.798 )") 00:28:08.798 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:08.798 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:08.798 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:08.798 { 00:28:08.798 "params": { 00:28:08.798 "name": "Nvme$subsystem", 00:28:08.798 "trtype": "$TEST_TRANSPORT", 00:28:08.798 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:08.798 "adrfam": "ipv4", 00:28:08.798 "trsvcid": "$NVMF_PORT", 00:28:08.798 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:08.798 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:08.798 "hdgst": ${hdgst:-false}, 00:28:08.798 "ddgst": ${ddgst:-false} 00:28:08.798 }, 00:28:08.798 "method": "bdev_nvme_attach_controller" 00:28:08.798 } 00:28:08.798 EOF 00:28:08.798 )") 00:28:08.798 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:08.798 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:08.798 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:08.798 { 00:28:08.798 "params": { 00:28:08.798 "name": "Nvme$subsystem", 00:28:08.798 "trtype": "$TEST_TRANSPORT", 00:28:08.798 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:08.798 "adrfam": "ipv4", 00:28:08.798 "trsvcid": "$NVMF_PORT", 00:28:08.798 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:08.798 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:08.798 "hdgst": ${hdgst:-false}, 00:28:08.798 "ddgst": ${ddgst:-false} 00:28:08.798 }, 00:28:08.798 "method": "bdev_nvme_attach_controller" 00:28:08.798 } 00:28:08.798 EOF 00:28:08.798 )") 00:28:08.798 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:08.798 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:08.798 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:08.798 { 00:28:08.798 "params": { 00:28:08.798 "name": "Nvme$subsystem", 00:28:08.798 "trtype": "$TEST_TRANSPORT", 00:28:08.798 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:08.798 "adrfam": "ipv4", 00:28:08.798 "trsvcid": "$NVMF_PORT", 00:28:08.798 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:08.798 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:08.798 "hdgst": ${hdgst:-false}, 00:28:08.798 "ddgst": ${ddgst:-false} 00:28:08.798 }, 00:28:08.798 "method": "bdev_nvme_attach_controller" 00:28:08.798 } 00:28:08.798 EOF 00:28:08.798 )") 00:28:08.798 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:08.798 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:08.798 11:40:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:08.798 { 00:28:08.798 "params": { 00:28:08.798 "name": "Nvme$subsystem", 00:28:08.798 "trtype": "$TEST_TRANSPORT", 00:28:08.798 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:08.798 "adrfam": "ipv4", 00:28:08.798 "trsvcid": "$NVMF_PORT", 00:28:08.798 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:08.798 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:08.798 "hdgst": ${hdgst:-false}, 00:28:08.798 "ddgst": ${ddgst:-false} 00:28:08.798 }, 00:28:08.798 "method": "bdev_nvme_attach_controller" 00:28:08.798 } 00:28:08.798 EOF 00:28:08.798 )") 00:28:08.798 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:08.798 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:08.798 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:08.798 { 00:28:08.798 "params": { 00:28:08.798 "name": "Nvme$subsystem", 00:28:08.798 "trtype": "$TEST_TRANSPORT", 00:28:08.798 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:08.798 "adrfam": "ipv4", 00:28:08.798 "trsvcid": "$NVMF_PORT", 00:28:08.798 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:08.798 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:08.798 "hdgst": ${hdgst:-false}, 00:28:08.798 "ddgst": ${ddgst:-false} 00:28:08.798 }, 00:28:08.798 "method": "bdev_nvme_attach_controller" 00:28:08.798 } 00:28:08.798 EOF 00:28:08.798 )") 00:28:08.798 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:08.798 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:08.798 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:08.798 { 00:28:08.798 "params": { 00:28:08.798 "name": "Nvme$subsystem", 00:28:08.798 "trtype": "$TEST_TRANSPORT", 00:28:08.798 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:08.798 "adrfam": "ipv4", 00:28:08.798 "trsvcid": "$NVMF_PORT", 00:28:08.798 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:08.798 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:08.798 "hdgst": ${hdgst:-false}, 00:28:08.798 "ddgst": ${ddgst:-false} 00:28:08.798 }, 00:28:08.798 "method": "bdev_nvme_attach_controller" 00:28:08.798 } 00:28:08.798 EOF 00:28:08.798 )") 00:28:08.798 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:08.798 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
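The block printed next is the assembled result of gen_nvmf_target_json: the ten per-subsystem fragments above are joined (IFS=,), echoed with printf, and sanity-checked with jq before being handed to the consumer through process substitution. Condensed from the trace (workspace path shortened), the first consumer is the placeholder bdev_svc app listening on /var/tmp/bdevperf.sock:

# /dev/fd/63 in the trace is the process-substitution fd carrying the generated JSON.
./test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json "${num_subsystems[@]}")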
00:28:08.798 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:28:08.798 11:40:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:08.798 "params": { 00:28:08.798 "name": "Nvme1", 00:28:08.798 "trtype": "tcp", 00:28:08.798 "traddr": "10.0.0.2", 00:28:08.798 "adrfam": "ipv4", 00:28:08.798 "trsvcid": "4420", 00:28:08.798 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:08.798 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:08.798 "hdgst": false, 00:28:08.798 "ddgst": false 00:28:08.798 }, 00:28:08.798 "method": "bdev_nvme_attach_controller" 00:28:08.798 },{ 00:28:08.798 "params": { 00:28:08.798 "name": "Nvme2", 00:28:08.798 "trtype": "tcp", 00:28:08.798 "traddr": "10.0.0.2", 00:28:08.798 "adrfam": "ipv4", 00:28:08.798 "trsvcid": "4420", 00:28:08.798 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:08.798 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:08.798 "hdgst": false, 00:28:08.798 "ddgst": false 00:28:08.798 }, 00:28:08.798 "method": "bdev_nvme_attach_controller" 00:28:08.798 },{ 00:28:08.798 "params": { 00:28:08.798 "name": "Nvme3", 00:28:08.798 "trtype": "tcp", 00:28:08.798 "traddr": "10.0.0.2", 00:28:08.798 "adrfam": "ipv4", 00:28:08.798 "trsvcid": "4420", 00:28:08.798 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:08.798 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:08.798 "hdgst": false, 00:28:08.798 "ddgst": false 00:28:08.798 }, 00:28:08.798 "method": "bdev_nvme_attach_controller" 00:28:08.798 },{ 00:28:08.798 "params": { 00:28:08.798 "name": "Nvme4", 00:28:08.798 "trtype": "tcp", 00:28:08.798 "traddr": "10.0.0.2", 00:28:08.798 "adrfam": "ipv4", 00:28:08.798 "trsvcid": "4420", 00:28:08.798 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:08.798 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:08.798 "hdgst": false, 00:28:08.798 "ddgst": false 00:28:08.798 }, 00:28:08.798 "method": "bdev_nvme_attach_controller" 00:28:08.798 },{ 00:28:08.798 "params": { 00:28:08.798 "name": "Nvme5", 00:28:08.798 "trtype": "tcp", 00:28:08.798 "traddr": "10.0.0.2", 00:28:08.798 "adrfam": "ipv4", 00:28:08.798 "trsvcid": "4420", 00:28:08.798 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:08.798 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:08.798 "hdgst": false, 00:28:08.798 "ddgst": false 00:28:08.798 }, 00:28:08.798 "method": "bdev_nvme_attach_controller" 00:28:08.798 },{ 00:28:08.798 "params": { 00:28:08.798 "name": "Nvme6", 00:28:08.798 "trtype": "tcp", 00:28:08.798 "traddr": "10.0.0.2", 00:28:08.798 "adrfam": "ipv4", 00:28:08.798 "trsvcid": "4420", 00:28:08.799 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:08.799 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:08.799 "hdgst": false, 00:28:08.799 "ddgst": false 00:28:08.799 }, 00:28:08.799 "method": "bdev_nvme_attach_controller" 00:28:08.799 },{ 00:28:08.799 "params": { 00:28:08.799 "name": "Nvme7", 00:28:08.799 "trtype": "tcp", 00:28:08.799 "traddr": "10.0.0.2", 00:28:08.799 "adrfam": "ipv4", 00:28:08.799 "trsvcid": "4420", 00:28:08.799 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:08.799 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:08.799 "hdgst": false, 00:28:08.799 "ddgst": false 00:28:08.799 }, 00:28:08.799 "method": "bdev_nvme_attach_controller" 00:28:08.799 },{ 00:28:08.799 "params": { 00:28:08.799 "name": "Nvme8", 00:28:08.799 "trtype": "tcp", 00:28:08.799 "traddr": "10.0.0.2", 00:28:08.799 "adrfam": "ipv4", 00:28:08.799 "trsvcid": "4420", 00:28:08.799 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:08.799 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:28:08.799 "hdgst": false, 00:28:08.799 "ddgst": false 00:28:08.799 }, 00:28:08.799 "method": "bdev_nvme_attach_controller" 00:28:08.799 },{ 00:28:08.799 "params": { 00:28:08.799 "name": "Nvme9", 00:28:08.799 "trtype": "tcp", 00:28:08.799 "traddr": "10.0.0.2", 00:28:08.799 "adrfam": "ipv4", 00:28:08.799 "trsvcid": "4420", 00:28:08.799 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:08.799 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:08.799 "hdgst": false, 00:28:08.799 "ddgst": false 00:28:08.799 }, 00:28:08.799 "method": "bdev_nvme_attach_controller" 00:28:08.799 },{ 00:28:08.799 "params": { 00:28:08.799 "name": "Nvme10", 00:28:08.799 "trtype": "tcp", 00:28:08.799 "traddr": "10.0.0.2", 00:28:08.799 "adrfam": "ipv4", 00:28:08.799 "trsvcid": "4420", 00:28:08.799 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:08.799 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:08.799 "hdgst": false, 00:28:08.799 "ddgst": false 00:28:08.799 }, 00:28:08.799 "method": "bdev_nvme_attach_controller" 00:28:08.799 }' 00:28:08.799 [2024-11-02 11:40:08.987078] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:28:08.799 [2024-11-02 11:40:08.987169] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:28:08.799 [2024-11-02 11:40:09.061451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:08.799 [2024-11-02 11:40:09.108780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:10.725 11:40:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:10.725 11:40:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@866 -- # return 0 00:28:10.725 11:40:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:10.725 11:40:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.725 11:40:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:10.725 11:40:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.725 11:40:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 3904960 00:28:10.725 11:40:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:28:10.725 11:40:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:28:11.663 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 3904960 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:28:11.663 11:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 3904854 00:28:11.663 11:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:28:11.663 11:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 
1 2 3 4 5 6 7 8 9 10 00:28:11.664 11:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:28:11.664 11:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:28:11.664 11:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:11.664 11:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:11.664 { 00:28:11.664 "params": { 00:28:11.664 "name": "Nvme$subsystem", 00:28:11.664 "trtype": "$TEST_TRANSPORT", 00:28:11.664 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:11.664 "adrfam": "ipv4", 00:28:11.664 "trsvcid": "$NVMF_PORT", 00:28:11.664 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:11.664 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:11.664 "hdgst": ${hdgst:-false}, 00:28:11.664 "ddgst": ${ddgst:-false} 00:28:11.664 }, 00:28:11.664 "method": "bdev_nvme_attach_controller" 00:28:11.664 } 00:28:11.664 EOF 00:28:11.664 )") 00:28:11.664 11:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:11.664 11:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:11.664 11:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:11.664 { 00:28:11.664 "params": { 00:28:11.664 "name": "Nvme$subsystem", 00:28:11.664 "trtype": "$TEST_TRANSPORT", 00:28:11.664 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:11.664 "adrfam": "ipv4", 00:28:11.664 "trsvcid": "$NVMF_PORT", 00:28:11.664 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:11.664 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:11.664 "hdgst": ${hdgst:-false}, 00:28:11.664 "ddgst": ${ddgst:-false} 00:28:11.664 }, 00:28:11.664 "method": "bdev_nvme_attach_controller" 00:28:11.664 } 00:28:11.664 EOF 00:28:11.664 )") 00:28:11.664 11:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:11.664 11:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:11.664 11:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:11.664 { 00:28:11.664 "params": { 00:28:11.664 "name": "Nvme$subsystem", 00:28:11.664 "trtype": "$TEST_TRANSPORT", 00:28:11.664 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:11.664 "adrfam": "ipv4", 00:28:11.664 "trsvcid": "$NVMF_PORT", 00:28:11.664 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:11.664 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:11.664 "hdgst": ${hdgst:-false}, 00:28:11.664 "ddgst": ${ddgst:-false} 00:28:11.664 }, 00:28:11.664 "method": "bdev_nvme_attach_controller" 00:28:11.664 } 00:28:11.664 EOF 00:28:11.664 )") 00:28:11.664 11:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:11.664 11:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:11.664 11:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:11.664 { 00:28:11.664 "params": { 00:28:11.664 "name": "Nvme$subsystem", 00:28:11.664 "trtype": "$TEST_TRANSPORT", 00:28:11.664 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:11.664 "adrfam": "ipv4", 00:28:11.664 
"trsvcid": "$NVMF_PORT", 00:28:11.664 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:11.664 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:11.664 "hdgst": ${hdgst:-false}, 00:28:11.664 "ddgst": ${ddgst:-false} 00:28:11.664 }, 00:28:11.664 "method": "bdev_nvme_attach_controller" 00:28:11.664 } 00:28:11.664 EOF 00:28:11.664 )") 00:28:11.664 11:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:11.664 11:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:11.664 11:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:11.664 { 00:28:11.664 "params": { 00:28:11.664 "name": "Nvme$subsystem", 00:28:11.664 "trtype": "$TEST_TRANSPORT", 00:28:11.664 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:11.664 "adrfam": "ipv4", 00:28:11.664 "trsvcid": "$NVMF_PORT", 00:28:11.664 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:11.664 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:11.664 "hdgst": ${hdgst:-false}, 00:28:11.664 "ddgst": ${ddgst:-false} 00:28:11.664 }, 00:28:11.664 "method": "bdev_nvme_attach_controller" 00:28:11.664 } 00:28:11.664 EOF 00:28:11.664 )") 00:28:11.664 11:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:11.923 11:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:11.923 11:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:11.923 { 00:28:11.923 "params": { 00:28:11.923 "name": "Nvme$subsystem", 00:28:11.923 "trtype": "$TEST_TRANSPORT", 00:28:11.923 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:11.923 "adrfam": "ipv4", 00:28:11.923 "trsvcid": "$NVMF_PORT", 00:28:11.923 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:11.923 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:11.923 "hdgst": ${hdgst:-false}, 00:28:11.923 "ddgst": ${ddgst:-false} 00:28:11.923 }, 00:28:11.923 "method": "bdev_nvme_attach_controller" 00:28:11.923 } 00:28:11.923 EOF 00:28:11.923 )") 00:28:11.923 11:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:11.923 11:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:11.923 11:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:11.923 { 00:28:11.923 "params": { 00:28:11.923 "name": "Nvme$subsystem", 00:28:11.923 "trtype": "$TEST_TRANSPORT", 00:28:11.923 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:11.923 "adrfam": "ipv4", 00:28:11.923 "trsvcid": "$NVMF_PORT", 00:28:11.923 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:11.923 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:11.923 "hdgst": ${hdgst:-false}, 00:28:11.923 "ddgst": ${ddgst:-false} 00:28:11.923 }, 00:28:11.923 "method": "bdev_nvme_attach_controller" 00:28:11.923 } 00:28:11.923 EOF 00:28:11.923 )") 00:28:11.923 11:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:11.923 11:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:11.923 11:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:11.923 { 00:28:11.923 
"params": { 00:28:11.923 "name": "Nvme$subsystem", 00:28:11.923 "trtype": "$TEST_TRANSPORT", 00:28:11.923 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:11.923 "adrfam": "ipv4", 00:28:11.923 "trsvcid": "$NVMF_PORT", 00:28:11.923 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:11.923 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:11.923 "hdgst": ${hdgst:-false}, 00:28:11.923 "ddgst": ${ddgst:-false} 00:28:11.923 }, 00:28:11.923 "method": "bdev_nvme_attach_controller" 00:28:11.923 } 00:28:11.923 EOF 00:28:11.923 )") 00:28:11.923 11:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:11.923 11:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:11.923 11:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:11.923 { 00:28:11.923 "params": { 00:28:11.923 "name": "Nvme$subsystem", 00:28:11.923 "trtype": "$TEST_TRANSPORT", 00:28:11.923 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:11.923 "adrfam": "ipv4", 00:28:11.923 "trsvcid": "$NVMF_PORT", 00:28:11.923 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:11.923 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:11.923 "hdgst": ${hdgst:-false}, 00:28:11.923 "ddgst": ${ddgst:-false} 00:28:11.923 }, 00:28:11.923 "method": "bdev_nvme_attach_controller" 00:28:11.923 } 00:28:11.923 EOF 00:28:11.923 )") 00:28:11.923 11:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:11.923 11:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:11.923 11:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:11.923 { 00:28:11.923 "params": { 00:28:11.923 "name": "Nvme$subsystem", 00:28:11.923 "trtype": "$TEST_TRANSPORT", 00:28:11.923 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:11.923 "adrfam": "ipv4", 00:28:11.923 "trsvcid": "$NVMF_PORT", 00:28:11.923 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:11.923 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:11.923 "hdgst": ${hdgst:-false}, 00:28:11.923 "ddgst": ${ddgst:-false} 00:28:11.923 }, 00:28:11.923 "method": "bdev_nvme_attach_controller" 00:28:11.923 } 00:28:11.923 EOF 00:28:11.923 )") 00:28:11.923 11:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:11.923 11:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:28:11.923 11:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:28:11.923 11:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:11.923 "params": { 00:28:11.923 "name": "Nvme1", 00:28:11.923 "trtype": "tcp", 00:28:11.923 "traddr": "10.0.0.2", 00:28:11.923 "adrfam": "ipv4", 00:28:11.923 "trsvcid": "4420", 00:28:11.923 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:11.923 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:11.923 "hdgst": false, 00:28:11.923 "ddgst": false 00:28:11.923 }, 00:28:11.923 "method": "bdev_nvme_attach_controller" 00:28:11.923 },{ 00:28:11.923 "params": { 00:28:11.923 "name": "Nvme2", 00:28:11.923 "trtype": "tcp", 00:28:11.923 "traddr": "10.0.0.2", 00:28:11.923 "adrfam": "ipv4", 00:28:11.923 "trsvcid": "4420", 00:28:11.923 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:11.923 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:11.923 "hdgst": false, 00:28:11.923 "ddgst": false 00:28:11.923 }, 00:28:11.923 "method": "bdev_nvme_attach_controller" 00:28:11.923 },{ 00:28:11.923 "params": { 00:28:11.923 "name": "Nvme3", 00:28:11.923 "trtype": "tcp", 00:28:11.923 "traddr": "10.0.0.2", 00:28:11.923 "adrfam": "ipv4", 00:28:11.923 "trsvcid": "4420", 00:28:11.923 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:11.923 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:11.923 "hdgst": false, 00:28:11.923 "ddgst": false 00:28:11.923 }, 00:28:11.923 "method": "bdev_nvme_attach_controller" 00:28:11.923 },{ 00:28:11.923 "params": { 00:28:11.923 "name": "Nvme4", 00:28:11.923 "trtype": "tcp", 00:28:11.923 "traddr": "10.0.0.2", 00:28:11.923 "adrfam": "ipv4", 00:28:11.923 "trsvcid": "4420", 00:28:11.923 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:11.923 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:11.923 "hdgst": false, 00:28:11.923 "ddgst": false 00:28:11.923 }, 00:28:11.923 "method": "bdev_nvme_attach_controller" 00:28:11.923 },{ 00:28:11.923 "params": { 00:28:11.923 "name": "Nvme5", 00:28:11.923 "trtype": "tcp", 00:28:11.923 "traddr": "10.0.0.2", 00:28:11.923 "adrfam": "ipv4", 00:28:11.923 "trsvcid": "4420", 00:28:11.923 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:11.923 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:11.923 "hdgst": false, 00:28:11.923 "ddgst": false 00:28:11.923 }, 00:28:11.923 "method": "bdev_nvme_attach_controller" 00:28:11.923 },{ 00:28:11.923 "params": { 00:28:11.923 "name": "Nvme6", 00:28:11.923 "trtype": "tcp", 00:28:11.923 "traddr": "10.0.0.2", 00:28:11.923 "adrfam": "ipv4", 00:28:11.923 "trsvcid": "4420", 00:28:11.923 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:11.923 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:11.923 "hdgst": false, 00:28:11.923 "ddgst": false 00:28:11.923 }, 00:28:11.923 "method": "bdev_nvme_attach_controller" 00:28:11.923 },{ 00:28:11.924 "params": { 00:28:11.924 "name": "Nvme7", 00:28:11.924 "trtype": "tcp", 00:28:11.924 "traddr": "10.0.0.2", 00:28:11.924 "adrfam": "ipv4", 00:28:11.924 "trsvcid": "4420", 00:28:11.924 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:11.924 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:11.924 "hdgst": false, 00:28:11.924 "ddgst": false 00:28:11.924 }, 00:28:11.924 "method": "bdev_nvme_attach_controller" 00:28:11.924 },{ 00:28:11.924 "params": { 00:28:11.924 "name": "Nvme8", 00:28:11.924 "trtype": "tcp", 00:28:11.924 "traddr": "10.0.0.2", 00:28:11.924 "adrfam": "ipv4", 00:28:11.924 "trsvcid": "4420", 00:28:11.924 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:11.924 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:28:11.924 "hdgst": false, 00:28:11.924 "ddgst": false 00:28:11.924 }, 00:28:11.924 "method": "bdev_nvme_attach_controller" 00:28:11.924 },{ 00:28:11.924 "params": { 00:28:11.924 "name": "Nvme9", 00:28:11.924 "trtype": "tcp", 00:28:11.924 "traddr": "10.0.0.2", 00:28:11.924 "adrfam": "ipv4", 00:28:11.924 "trsvcid": "4420", 00:28:11.924 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:11.924 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:11.924 "hdgst": false, 00:28:11.924 "ddgst": false 00:28:11.924 }, 00:28:11.924 "method": "bdev_nvme_attach_controller" 00:28:11.924 },{ 00:28:11.924 "params": { 00:28:11.924 "name": "Nvme10", 00:28:11.924 "trtype": "tcp", 00:28:11.924 "traddr": "10.0.0.2", 00:28:11.924 "adrfam": "ipv4", 00:28:11.924 "trsvcid": "4420", 00:28:11.924 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:11.924 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:11.924 "hdgst": false, 00:28:11.924 "ddgst": false 00:28:11.924 }, 00:28:11.924 "method": "bdev_nvme_attach_controller" 00:28:11.924 }' 00:28:11.924 [2024-11-02 11:40:12.094971] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:28:11.924 [2024-11-02 11:40:12.095046] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3905340 ] 00:28:11.924 [2024-11-02 11:40:12.167935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:11.924 [2024-11-02 11:40:12.214900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:13.831 Running I/O for 1 seconds... 00:28:14.772 1607.00 IOPS, 100.44 MiB/s 00:28:14.772 Latency(us) 00:28:14.772 [2024-11-02T10:40:15.174Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:14.772 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:14.772 Verification LBA range: start 0x0 length 0x400 00:28:14.772 Nvme1n1 : 1.16 220.24 13.77 0.00 0.00 287769.60 20097.71 279620.27 00:28:14.772 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:14.772 Verification LBA range: start 0x0 length 0x400 00:28:14.772 Nvme2n1 : 1.07 179.00 11.19 0.00 0.00 347800.97 36505.98 278066.82 00:28:14.772 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:14.772 Verification LBA range: start 0x0 length 0x400 00:28:14.772 Nvme3n1 : 1.15 222.78 13.92 0.00 0.00 275004.68 23981.32 273406.48 00:28:14.772 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:14.772 Verification LBA range: start 0x0 length 0x400 00:28:14.772 Nvme4n1 : 1.16 224.57 14.04 0.00 0.00 267860.76 3446.71 259425.47 00:28:14.772 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:14.772 Verification LBA range: start 0x0 length 0x400 00:28:14.772 Nvme5n1 : 1.08 178.00 11.12 0.00 0.00 331446.55 22330.79 284280.60 00:28:14.772 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:14.772 Verification LBA range: start 0x0 length 0x400 00:28:14.772 Nvme6n1 : 1.17 219.06 13.69 0.00 0.00 266410.86 20194.80 301368.51 00:28:14.772 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:14.772 Verification LBA range: start 0x0 length 0x400 00:28:14.772 Nvme7n1 : 1.16 227.69 14.23 0.00 0.00 249591.47 10777.03 242337.56 00:28:14.772 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:14.772 
Verification LBA range: start 0x0 length 0x400 00:28:14.772 Nvme8n1 : 1.14 224.42 14.03 0.00 0.00 250468.50 31263.10 262532.36 00:28:14.772 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:14.772 Verification LBA range: start 0x0 length 0x400 00:28:14.772 Nvme9n1 : 1.18 217.45 13.59 0.00 0.00 255095.28 22622.06 293601.28 00:28:14.772 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:14.772 Verification LBA range: start 0x0 length 0x400 00:28:14.772 Nvme10n1 : 1.17 217.90 13.62 0.00 0.00 250083.37 18738.44 309135.74 00:28:14.772 [2024-11-02T10:40:15.174Z] =================================================================================================================== 00:28:14.772 [2024-11-02T10:40:15.174Z] Total : 2131.10 133.19 0.00 0.00 274833.79 3446.71 309135.74 00:28:15.032 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:28:15.032 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:15.032 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:15.032 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:15.032 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:15.032 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:15.032 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:28:15.032 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:15.032 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:28:15.032 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:15.032 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:15.032 rmmod nvme_tcp 00:28:15.032 rmmod nvme_fabrics 00:28:15.032 rmmod nvme_keyring 00:28:15.032 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:15.032 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:28:15.032 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:28:15.032 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 3904854 ']' 00:28:15.032 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 3904854 00:28:15.032 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # '[' -z 3904854 ']' 00:28:15.032 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # kill -0 3904854 00:28:15.032 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@957 -- # uname 00:28:15.032 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:15.032 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3904854 00:28:15.032 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:15.032 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:28:15.032 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3904854' 00:28:15.032 killing process with pid 3904854 00:28:15.032 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@971 -- # kill 3904854 00:28:15.032 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@976 -- # wait 3904854 00:28:15.598 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:15.599 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:15.599 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:15.599 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:28:15.599 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:28:15.599 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:15.599 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:28:15.599 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:15.599 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:15.599 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:15.599 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:15.599 11:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:17.503 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:17.503 00:28:17.503 real 0m11.986s 00:28:17.503 user 0m35.467s 00:28:17.503 sys 0m3.146s 00:28:17.503 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:17.503 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:17.503 ************************************ 00:28:17.503 END TEST nvmf_shutdown_tc1 00:28:17.503 ************************************ 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 
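Before tc2 begins below, the tc1 teardown traced above (stoptarget followed by nvmftestfini) condenses to the following; the namespace removal is an assumption, since _remove_spdk_ns runs with tracing disabled. Paths shortened:

rm -f ./local-job0-0-verify.state
rm -rf test/nvmf/target/bdevperf.conf test/nvmf/target/rpcs.txt
modprobe -v -r nvme-tcp                                  # trace shows nvme_tcp, nvme_fabrics, nvme_keyring unloading
kill 3904854 && wait 3904854                             # killprocess: stop the nvmf target
iptables-save | grep -v SPDK_NVMF | iptables-restore     # drop only the SPDK-tagged ACCEPT rule
ip netns delete cvl_0_0_ns_spdk                          # assumed: namespace deletion is not echoed
ip -4 addr flush cvl_0_1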
00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:17.763 ************************************ 00:28:17.763 START TEST nvmf_shutdown_tc2 00:28:17.763 ************************************ 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc2 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:28:17.763 11:40:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:17.763 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:17.763 11:40:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:17.763 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:17.763 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:17.763 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:17.763 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:17.764 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:17.764 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:17.764 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:17.764 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:17.764 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:17.764 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:17.764 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:17.764 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:17.764 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:17.764 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:17.764 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:17.764 11:40:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:17.764 11:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:17.764 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:17.764 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:17.764 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:17.764 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:17.764 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:17.764 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:17.764 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:17.764 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:17.764 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:17.764 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.287 ms 00:28:17.764 00:28:17.764 --- 10.0.0.2 ping statistics --- 00:28:17.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:17.764 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:28:17.764 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:17.764 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:17.764 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:28:17.764 00:28:17.764 --- 10.0.0.1 ping statistics --- 00:28:17.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:17.764 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:28:17.764 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:17.764 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:28:17.764 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:17.764 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:17.764 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:17.764 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:17.764 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:17.764 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:17.764 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:17.764 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:17.764 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:17.764 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:17.764 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:17.764 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3906221 00:28:17.764 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:17.764 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3906221 00:28:17.764 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # '[' -z 3906221 ']' 00:28:17.764 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:17.764 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:17.764 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:17.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
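The nvmf_tcp_init sequence traced above amounts to isolating the target port in its own network namespace, addressing both sides, opening TCP port 4420, and checking reachability in both directions before the target is started inside the namespace. Condensed from the exact commands in this trace (interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addresses are the ones detected and assigned in this run):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                   # root namespace -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> initiator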
00:28:17.764 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:17.764 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:18.023 [2024-11-02 11:40:18.181116] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:28:18.023 [2024-11-02 11:40:18.181203] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:18.023 [2024-11-02 11:40:18.261709] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:18.023 [2024-11-02 11:40:18.310971] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:18.023 [2024-11-02 11:40:18.311038] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:18.023 [2024-11-02 11:40:18.311054] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:18.023 [2024-11-02 11:40:18.311067] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:18.023 [2024-11-02 11:40:18.311079] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:18.023 [2024-11-02 11:40:18.312744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:18.023 [2024-11-02 11:40:18.312856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:18.023 [2024-11-02 11:40:18.312925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:18.023 [2024-11-02 11:40:18.312929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:18.284 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:18.284 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@866 -- # return 0 00:28:18.284 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:18.284 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:18.284 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:18.284 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:18.284 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:18.284 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.284 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:18.284 [2024-11-02 11:40:18.449278] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:18.284 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.284 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:18.284 11:40:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:18.284 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:18.284 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:18.284 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:18.284 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:18.284 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:18.284 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:18.284 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:18.284 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:18.284 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:18.284 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:18.284 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:18.284 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:18.284 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:18.284 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:18.284 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:18.284 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:18.284 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:18.284 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:18.284 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:18.284 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:18.284 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:18.284 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:18.284 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:18.284 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:18.284 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.284 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:18.284 Malloc1 
00:28:18.284 [2024-11-02 11:40:18.546928] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:18.284 Malloc2 00:28:18.284 Malloc3 00:28:18.284 Malloc4 00:28:18.543 Malloc5 00:28:18.543 Malloc6 00:28:18.543 Malloc7 00:28:18.543 Malloc8 00:28:18.543 Malloc9 00:28:18.802 Malloc10 00:28:18.802 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.802 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:18.802 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:18.802 11:40:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:18.802 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=3906284 00:28:18.802 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 3906284 /var/tmp/bdevperf.sock 00:28:18.802 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # '[' -z 3906284 ']' 00:28:18.802 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:18.802 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:18.803 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:18.803 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:18.803 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:28:18.803 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:18.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
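With the target up and listening on 10.0.0.2:4420, the shutdown_tc2 body launches bdevperf against it. The trace above shows the bdevperf binary, its arguments, the JSON config arriving over /dev/fd/63 from gen_nvmf_target_json, and waitforlisten on the RPC socket; the backgrounding and PID capture below are assumptions added only to make the sketch self-contained:

# Sketch only; argument values are the ones traced above.
bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf

"$bdevperf" -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 10 &
perfpid=$!                                   # 3906284 in this run
waitforlisten "$perfpid" /var/tmp/bdevperf.sock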
00:28:18.803 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:28:18.803 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:18.803 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:18.803 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:18.803 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:18.803 { 00:28:18.803 "params": { 00:28:18.803 "name": "Nvme$subsystem", 00:28:18.803 "trtype": "$TEST_TRANSPORT", 00:28:18.803 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:18.803 "adrfam": "ipv4", 00:28:18.803 "trsvcid": "$NVMF_PORT", 00:28:18.803 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:18.803 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:18.803 "hdgst": ${hdgst:-false}, 00:28:18.803 "ddgst": ${ddgst:-false} 00:28:18.803 }, 00:28:18.803 "method": "bdev_nvme_attach_controller" 00:28:18.803 } 00:28:18.803 EOF 00:28:18.803 )") 00:28:18.803 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:18.803 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:18.803 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:18.803 { 00:28:18.803 "params": { 00:28:18.803 "name": "Nvme$subsystem", 00:28:18.803 "trtype": "$TEST_TRANSPORT", 00:28:18.803 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:18.803 "adrfam": "ipv4", 00:28:18.803 "trsvcid": "$NVMF_PORT", 00:28:18.803 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:18.803 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:18.803 "hdgst": ${hdgst:-false}, 00:28:18.803 "ddgst": ${ddgst:-false} 00:28:18.803 }, 00:28:18.803 "method": "bdev_nvme_attach_controller" 00:28:18.803 } 00:28:18.803 EOF 00:28:18.803 )") 00:28:18.803 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:18.803 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:18.803 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:18.803 { 00:28:18.803 "params": { 00:28:18.803 "name": "Nvme$subsystem", 00:28:18.803 "trtype": "$TEST_TRANSPORT", 00:28:18.803 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:18.803 "adrfam": "ipv4", 00:28:18.803 "trsvcid": "$NVMF_PORT", 00:28:18.803 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:18.803 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:18.803 "hdgst": ${hdgst:-false}, 00:28:18.803 "ddgst": ${ddgst:-false} 00:28:18.803 }, 00:28:18.803 "method": "bdev_nvme_attach_controller" 00:28:18.803 } 00:28:18.803 EOF 00:28:18.803 )") 00:28:18.803 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:18.803 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:18.803 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:18.803 { 00:28:18.803 "params": { 00:28:18.803 "name": "Nvme$subsystem", 00:28:18.803 
"trtype": "$TEST_TRANSPORT", 00:28:18.803 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:18.803 "adrfam": "ipv4", 00:28:18.803 "trsvcid": "$NVMF_PORT", 00:28:18.803 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:18.803 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:18.803 "hdgst": ${hdgst:-false}, 00:28:18.803 "ddgst": ${ddgst:-false} 00:28:18.803 }, 00:28:18.803 "method": "bdev_nvme_attach_controller" 00:28:18.803 } 00:28:18.803 EOF 00:28:18.803 )") 00:28:18.803 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:18.803 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:18.803 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:18.803 { 00:28:18.803 "params": { 00:28:18.803 "name": "Nvme$subsystem", 00:28:18.803 "trtype": "$TEST_TRANSPORT", 00:28:18.803 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:18.803 "adrfam": "ipv4", 00:28:18.803 "trsvcid": "$NVMF_PORT", 00:28:18.803 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:18.803 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:18.803 "hdgst": ${hdgst:-false}, 00:28:18.803 "ddgst": ${ddgst:-false} 00:28:18.803 }, 00:28:18.803 "method": "bdev_nvme_attach_controller" 00:28:18.803 } 00:28:18.803 EOF 00:28:18.803 )") 00:28:18.803 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:18.803 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:18.803 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:18.803 { 00:28:18.803 "params": { 00:28:18.803 "name": "Nvme$subsystem", 00:28:18.803 "trtype": "$TEST_TRANSPORT", 00:28:18.803 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:18.803 "adrfam": "ipv4", 00:28:18.803 "trsvcid": "$NVMF_PORT", 00:28:18.803 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:18.803 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:18.803 "hdgst": ${hdgst:-false}, 00:28:18.803 "ddgst": ${ddgst:-false} 00:28:18.803 }, 00:28:18.803 "method": "bdev_nvme_attach_controller" 00:28:18.803 } 00:28:18.803 EOF 00:28:18.803 )") 00:28:18.803 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:18.803 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:18.803 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:18.803 { 00:28:18.803 "params": { 00:28:18.803 "name": "Nvme$subsystem", 00:28:18.803 "trtype": "$TEST_TRANSPORT", 00:28:18.803 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:18.803 "adrfam": "ipv4", 00:28:18.803 "trsvcid": "$NVMF_PORT", 00:28:18.803 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:18.803 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:18.803 "hdgst": ${hdgst:-false}, 00:28:18.803 "ddgst": ${ddgst:-false} 00:28:18.803 }, 00:28:18.803 "method": "bdev_nvme_attach_controller" 00:28:18.803 } 00:28:18.803 EOF 00:28:18.803 )") 00:28:18.803 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:18.803 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:18.803 11:40:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:18.803 { 00:28:18.803 "params": { 00:28:18.803 "name": "Nvme$subsystem", 00:28:18.803 "trtype": "$TEST_TRANSPORT", 00:28:18.803 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:18.803 "adrfam": "ipv4", 00:28:18.803 "trsvcid": "$NVMF_PORT", 00:28:18.803 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:18.803 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:18.803 "hdgst": ${hdgst:-false}, 00:28:18.803 "ddgst": ${ddgst:-false} 00:28:18.803 }, 00:28:18.803 "method": "bdev_nvme_attach_controller" 00:28:18.803 } 00:28:18.803 EOF 00:28:18.803 )") 00:28:18.803 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:18.803 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:18.803 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:18.803 { 00:28:18.803 "params": { 00:28:18.803 "name": "Nvme$subsystem", 00:28:18.803 "trtype": "$TEST_TRANSPORT", 00:28:18.803 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:18.803 "adrfam": "ipv4", 00:28:18.803 "trsvcid": "$NVMF_PORT", 00:28:18.803 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:18.803 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:18.803 "hdgst": ${hdgst:-false}, 00:28:18.803 "ddgst": ${ddgst:-false} 00:28:18.803 }, 00:28:18.803 "method": "bdev_nvme_attach_controller" 00:28:18.803 } 00:28:18.803 EOF 00:28:18.803 )") 00:28:18.803 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:18.803 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:18.803 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:18.803 { 00:28:18.803 "params": { 00:28:18.803 "name": "Nvme$subsystem", 00:28:18.803 "trtype": "$TEST_TRANSPORT", 00:28:18.803 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:18.803 "adrfam": "ipv4", 00:28:18.803 "trsvcid": "$NVMF_PORT", 00:28:18.803 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:18.803 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:18.803 "hdgst": ${hdgst:-false}, 00:28:18.803 "ddgst": ${ddgst:-false} 00:28:18.803 }, 00:28:18.803 "method": "bdev_nvme_attach_controller" 00:28:18.803 } 00:28:18.803 EOF 00:28:18.803 )") 00:28:18.803 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:18.803 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
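gen_nvmf_target_json, traced in the heredoc blocks above, emits one bdev_nvme_attach_controller fragment per requested subsystem and then joins them; the comma-joined result is the config dump printed next. A minimal sketch of that pattern using only the pieces visible in the trace (the real helper in nvmf/common.sh also runs the assembled config through the `jq .` step shown above):

gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,
    printf '%s\n' "${config[*]}"     # comma-joined dump, as printed below
}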
00:28:18.804 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:28:18.804 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:18.804 "params": { 00:28:18.804 "name": "Nvme1", 00:28:18.804 "trtype": "tcp", 00:28:18.804 "traddr": "10.0.0.2", 00:28:18.804 "adrfam": "ipv4", 00:28:18.804 "trsvcid": "4420", 00:28:18.804 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:18.804 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:18.804 "hdgst": false, 00:28:18.804 "ddgst": false 00:28:18.804 }, 00:28:18.804 "method": "bdev_nvme_attach_controller" 00:28:18.804 },{ 00:28:18.804 "params": { 00:28:18.804 "name": "Nvme2", 00:28:18.804 "trtype": "tcp", 00:28:18.804 "traddr": "10.0.0.2", 00:28:18.804 "adrfam": "ipv4", 00:28:18.804 "trsvcid": "4420", 00:28:18.804 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:18.804 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:18.804 "hdgst": false, 00:28:18.804 "ddgst": false 00:28:18.804 }, 00:28:18.804 "method": "bdev_nvme_attach_controller" 00:28:18.804 },{ 00:28:18.804 "params": { 00:28:18.804 "name": "Nvme3", 00:28:18.804 "trtype": "tcp", 00:28:18.804 "traddr": "10.0.0.2", 00:28:18.804 "adrfam": "ipv4", 00:28:18.804 "trsvcid": "4420", 00:28:18.804 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:18.804 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:18.804 "hdgst": false, 00:28:18.804 "ddgst": false 00:28:18.804 }, 00:28:18.804 "method": "bdev_nvme_attach_controller" 00:28:18.804 },{ 00:28:18.804 "params": { 00:28:18.804 "name": "Nvme4", 00:28:18.804 "trtype": "tcp", 00:28:18.804 "traddr": "10.0.0.2", 00:28:18.804 "adrfam": "ipv4", 00:28:18.804 "trsvcid": "4420", 00:28:18.804 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:18.804 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:18.804 "hdgst": false, 00:28:18.804 "ddgst": false 00:28:18.804 }, 00:28:18.804 "method": "bdev_nvme_attach_controller" 00:28:18.804 },{ 00:28:18.804 "params": { 00:28:18.804 "name": "Nvme5", 00:28:18.804 "trtype": "tcp", 00:28:18.804 "traddr": "10.0.0.2", 00:28:18.804 "adrfam": "ipv4", 00:28:18.804 "trsvcid": "4420", 00:28:18.804 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:18.804 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:18.804 "hdgst": false, 00:28:18.804 "ddgst": false 00:28:18.804 }, 00:28:18.804 "method": "bdev_nvme_attach_controller" 00:28:18.804 },{ 00:28:18.804 "params": { 00:28:18.804 "name": "Nvme6", 00:28:18.804 "trtype": "tcp", 00:28:18.804 "traddr": "10.0.0.2", 00:28:18.804 "adrfam": "ipv4", 00:28:18.804 "trsvcid": "4420", 00:28:18.804 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:18.804 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:18.804 "hdgst": false, 00:28:18.804 "ddgst": false 00:28:18.804 }, 00:28:18.804 "method": "bdev_nvme_attach_controller" 00:28:18.804 },{ 00:28:18.804 "params": { 00:28:18.804 "name": "Nvme7", 00:28:18.804 "trtype": "tcp", 00:28:18.804 "traddr": "10.0.0.2", 00:28:18.804 "adrfam": "ipv4", 00:28:18.804 "trsvcid": "4420", 00:28:18.804 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:18.804 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:18.804 "hdgst": false, 00:28:18.804 "ddgst": false 00:28:18.804 }, 00:28:18.804 "method": "bdev_nvme_attach_controller" 00:28:18.804 },{ 00:28:18.804 "params": { 00:28:18.804 "name": "Nvme8", 00:28:18.804 "trtype": "tcp", 00:28:18.804 "traddr": "10.0.0.2", 00:28:18.804 "adrfam": "ipv4", 00:28:18.804 "trsvcid": "4420", 00:28:18.804 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:18.804 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:28:18.804 "hdgst": false, 00:28:18.804 "ddgst": false 00:28:18.804 }, 00:28:18.804 "method": "bdev_nvme_attach_controller" 00:28:18.804 },{ 00:28:18.804 "params": { 00:28:18.804 "name": "Nvme9", 00:28:18.804 "trtype": "tcp", 00:28:18.804 "traddr": "10.0.0.2", 00:28:18.804 "adrfam": "ipv4", 00:28:18.804 "trsvcid": "4420", 00:28:18.804 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:18.804 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:18.804 "hdgst": false, 00:28:18.804 "ddgst": false 00:28:18.804 }, 00:28:18.804 "method": "bdev_nvme_attach_controller" 00:28:18.804 },{ 00:28:18.804 "params": { 00:28:18.804 "name": "Nvme10", 00:28:18.804 "trtype": "tcp", 00:28:18.804 "traddr": "10.0.0.2", 00:28:18.804 "adrfam": "ipv4", 00:28:18.804 "trsvcid": "4420", 00:28:18.804 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:18.804 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:18.804 "hdgst": false, 00:28:18.804 "ddgst": false 00:28:18.804 }, 00:28:18.804 "method": "bdev_nvme_attach_controller" 00:28:18.804 }' 00:28:18.804 [2024-11-02 11:40:19.070902] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:28:18.804 [2024-11-02 11:40:19.070975] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3906284 ] 00:28:18.804 [2024-11-02 11:40:19.143016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:18.804 [2024-11-02 11:40:19.191942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:20.735 Running I/O for 10 seconds... 00:28:20.735 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:20.735 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@866 -- # return 0 00:28:20.735 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:20.735 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.735 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:20.735 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.735 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:20.735 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:20.735 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:28:20.736 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:28:20.736 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:28:20.736 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:28:20.736 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:20.736 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:20.736 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.736 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:20.736 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:20.994 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.994 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:28:20.994 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:28:20.994 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:28:21.252 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:28:21.252 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:21.252 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:21.252 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:21.252 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.252 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:21.252 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.252 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:28:21.252 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:28:21.252 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:28:21.512 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:28:21.512 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:21.512 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:21.512 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:21.512 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.512 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:21.512 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.512 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:28:21.512 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:28:21.512 11:40:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:28:21.512 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:28:21.512 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:28:21.512 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 3906284 00:28:21.512 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' -z 3906284 ']' 00:28:21.512 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # kill -0 3906284 00:28:21.512 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # uname 00:28:21.512 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:21.512 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3906284 00:28:21.512 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:21.512 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:21.512 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3906284' 00:28:21.512 killing process with pid 3906284 00:28:21.512 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # kill 3906284 00:28:21.512 11:40:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@976 -- # wait 3906284 00:28:21.512 1939.00 IOPS, 121.19 MiB/s [2024-11-02T10:40:21.914Z] Received shutdown signal, test time was about 1.040726 seconds 00:28:21.512 00:28:21.512 Latency(us) 00:28:21.512 [2024-11-02T10:40:21.914Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:21.512 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:21.512 Verification LBA range: start 0x0 length 0x400 00:28:21.512 Nvme1n1 : 0.98 195.75 12.23 0.00 0.00 323393.36 24369.68 251658.24 00:28:21.512 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:21.512 Verification LBA range: start 0x0 length 0x400 00:28:21.512 Nvme2n1 : 1.03 247.53 15.47 0.00 0.00 251175.44 23981.32 254765.13 00:28:21.512 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:21.512 Verification LBA range: start 0x0 length 0x400 00:28:21.512 Nvme3n1 : 0.99 262.04 16.38 0.00 0.00 231776.93 3252.53 248551.35 00:28:21.512 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:21.512 Verification LBA range: start 0x0 length 0x400 00:28:21.512 Nvme4n1 : 1.03 248.26 15.52 0.00 0.00 241254.02 18155.90 257872.02 00:28:21.512 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:21.512 Verification LBA range: start 0x0 length 0x400 00:28:21.512 Nvme5n1 : 1.02 188.52 11.78 0.00 0.00 311353.14 25826.04 279620.27 00:28:21.512 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:21.512 Verification LBA range: start 0x0 length 0x400 00:28:21.512 Nvme6n1 : 1.00 261.42 16.34 0.00 0.00 219003.28 4563.25 248551.35 
00:28:21.512 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:21.512 Verification LBA range: start 0x0 length 0x400 00:28:21.512 Nvme7n1 : 1.03 249.58 15.60 0.00 0.00 226183.02 20680.25 259425.47 00:28:21.512 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:21.512 Verification LBA range: start 0x0 length 0x400 00:28:21.512 Nvme8n1 : 0.98 205.45 12.84 0.00 0.00 263452.80 9903.22 254765.13 00:28:21.512 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:21.512 Verification LBA range: start 0x0 length 0x400 00:28:21.512 Nvme9n1 : 1.02 187.72 11.73 0.00 0.00 288850.68 22816.24 292047.83 00:28:21.512 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:21.512 Verification LBA range: start 0x0 length 0x400 00:28:21.512 Nvme10n1 : 1.04 246.18 15.39 0.00 0.00 216432.26 16796.63 250104.79 00:28:21.512 [2024-11-02T10:40:21.914Z] =================================================================================================================== 00:28:21.512 [2024-11-02T10:40:21.914Z] Total : 2292.44 143.28 0.00 0.00 252837.47 3252.53 292047.83 00:28:21.773 11:40:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:28:22.713 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 3906221 00:28:22.713 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:28:22.713 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:22.713 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:22.713 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:22.713 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:22.713 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:22.713 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:28:22.713 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:22.713 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:28:22.713 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:22.713 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:22.713 rmmod nvme_tcp 00:28:22.713 rmmod nvme_fabrics 00:28:22.971 rmmod nvme_keyring 00:28:22.971 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:22.971 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:28:22.971 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:28:22.971 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 3906221 ']' 00:28:22.971 11:40:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 3906221 00:28:22.971 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' -z 3906221 ']' 00:28:22.971 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # kill -0 3906221 00:28:22.971 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # uname 00:28:22.971 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:22.971 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3906221 00:28:22.971 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:22.971 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:28:22.971 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3906221' 00:28:22.971 killing process with pid 3906221 00:28:22.971 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # kill 3906221 00:28:22.971 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@976 -- # wait 3906221 00:28:23.229 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:23.229 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:23.229 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:23.229 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:28:23.229 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:28:23.229 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:28:23.229 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:23.229 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:23.229 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:23.229 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:23.229 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:23.229 11:40:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:25.766 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:25.766 00:28:25.766 real 0m7.718s 00:28:25.766 user 0m23.486s 00:28:25.766 sys 0m1.628s 00:28:25.766 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:25.766 11:40:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:25.766 ************************************ 00:28:25.766 END TEST nvmf_shutdown_tc2 00:28:25.766 ************************************ 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:25.767 ************************************ 00:28:25.767 START TEST nvmf_shutdown_tc3 00:28:25.767 ************************************ 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc3 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:25.767 11:40:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:25.767 11:40:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:25.767 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:25.767 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
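The device IDs matched above (vendor 0x8086, device 0x159b, driver ice) are in the e810 table built a few lines earlier, i.e. Intel E810-series adapters; the two functions 0000:0a:00.0 and 0000:0a:00.1 surface as the net devices cvl_0_0 and cvl_0_1 reported just below. Outside the harness, the same hardware and driver binding can be confirmed directly (a quick check, not part of the test flow):

    lspci -nnk -d 8086:159b    # lists both E810 ports and the kernel driver (ice) bound to them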
00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:25.767 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:25.767 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:25.767 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:25.768 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:25.768 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:25.768 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:25.768 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:25.768 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:25.768 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:25.768 11:40:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:25.768 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:25.768 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:25.768 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:25.768 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:25.768 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:25.768 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:25.768 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:25.768 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:25.768 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:25.768 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:25.768 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:25.768 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:25.768 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:25.768 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:25.768 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:28:25.768 00:28:25.768 --- 10.0.0.2 ping statistics --- 00:28:25.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:25.768 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:28:25.768 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:25.768 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:25.768 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 00:28:25.768 00:28:25.768 --- 10.0.0.1 ping statistics --- 00:28:25.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:25.768 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:28:25.768 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:25.768 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:28:25.768 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:25.768 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:25.768 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:25.768 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:25.768 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:25.768 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:25.768 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:25.768 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:25.768 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:25.768 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:25.768 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:25.768 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=3907213 00:28:25.768 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:25.768 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 3907213 00:28:25.768 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # '[' -z 3907213 ']' 00:28:25.768 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:25.768 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:25.768 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:25.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
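Condensed from the xtrace above, nvmf_tcp_init builds a two-sided test network out of the two E810 ports: cvl_0_0 is moved into a fresh namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), and an iptables rule opens the NVMe/TCP port. A recap of the commands the harness just ran, with the same names and addresses:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # tagged with an SPDK_NVMF comment so teardown can strip it
    ping -c 1 10.0.0.2                                              # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                # target -> initiator

With both pings answered, nvmftestinit returns 0 and nvmfappstart launches nvmf_tgt inside the namespace (hence the repeated "ip netns exec cvl_0_0_ns_spdk" prefix on the command above); that target process, pid 3907213, is the one this shutdown test later kills while I/O is running.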
00:28:25.768 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:25.768 11:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:25.768 [2024-11-02 11:40:25.929820] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:28:25.768 [2024-11-02 11:40:25.929902] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:25.768 [2024-11-02 11:40:26.014316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:25.768 [2024-11-02 11:40:26.066717] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:25.768 [2024-11-02 11:40:26.066776] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:25.768 [2024-11-02 11:40:26.066792] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:25.768 [2024-11-02 11:40:26.066805] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:25.768 [2024-11-02 11:40:26.066817] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:25.768 [2024-11-02 11:40:26.068502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:25.768 [2024-11-02 11:40:26.068532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:25.768 [2024-11-02 11:40:26.068591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:25.768 [2024-11-02 11:40:26.068594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:26.028 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:26.028 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@866 -- # return 0 00:28:26.028 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:26.028 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:26.028 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:26.028 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:26.028 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:26.028 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.028 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:26.028 [2024-11-02 11:40:26.227998] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:26.028 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.028 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:26.028 11:40:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:26.028 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:26.028 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:26.028 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:26.028 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:26.028 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:26.028 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:26.028 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:26.028 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:26.028 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:26.028 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:26.028 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:26.028 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:26.028 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:26.028 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:26.028 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:26.028 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:26.028 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:26.028 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:26.028 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:26.028 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:26.028 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:26.028 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:26.028 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:26.028 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:26.028 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.028 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:26.028 Malloc1 
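Malloc1 above, together with the Malloc2 through Malloc10 lines and the NVMe/TCP listener notice that follow, comes from the batch assembled here: shutdown.sh@27 removes any stale rpcs.txt, the cat at @29 appends one block of RPCs per subsystem, and the rpc_cmd at @36 plays the whole file back against the target. The exact RPC list lives in target/shutdown.sh; a hypothetical hand-rolled equivalent for a single subsystem would look roughly like this (sketch only, malloc size and serial number are placeholders; the NQN pattern matches the bdevperf config further down):

    # hypothetical stand-in for one loop iteration (i=1)
    scripts/rpc.py bdev_malloc_create -b Malloc1 64 512
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420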
00:28:26.028 [2024-11-02 11:40:26.331163] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:26.028 Malloc2 00:28:26.028 Malloc3 00:28:26.288 Malloc4 00:28:26.288 Malloc5 00:28:26.288 Malloc6 00:28:26.288 Malloc7 00:28:26.288 Malloc8 00:28:26.548 Malloc9 00:28:26.548 Malloc10 00:28:26.548 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.548 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:26.548 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:26.548 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:26.548 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=3907390 00:28:26.548 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 3907390 /var/tmp/bdevperf.sock 00:28:26.548 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # '[' -z 3907390 ']' 00:28:26.548 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:26.548 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:26.548 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:26.548 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:26.548 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:26.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
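The initiator side is bdevperf, started against its own RPC socket with the workload flags visible in the command line above: queue depth 64, 64 KiB I/Os, verify workload, 10 second run. Its bdev configuration is generated on the fly by gen_nvmf_target_json 1..10 and fed in over /dev/fd/63; reproduced with a regular file as a stand-in path for illustration:

    build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json ./nvmf_targets.json \
        -q 64 -o 65536 -w verify -t 10    # 64 outstanding I/Os, 64 KiB each, verify pattern, 10 s

The heredoc fragments that follow are gen_nvmf_target_json building one bdev_nvme_attach_controller parameter block per subsystem.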
00:28:26.548 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:28:26.548 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:26.548 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:28:26.548 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:26.548 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:26.548 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:26.548 { 00:28:26.548 "params": { 00:28:26.548 "name": "Nvme$subsystem", 00:28:26.548 "trtype": "$TEST_TRANSPORT", 00:28:26.548 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:26.548 "adrfam": "ipv4", 00:28:26.548 "trsvcid": "$NVMF_PORT", 00:28:26.548 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:26.548 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:26.548 "hdgst": ${hdgst:-false}, 00:28:26.548 "ddgst": ${ddgst:-false} 00:28:26.548 }, 00:28:26.548 "method": "bdev_nvme_attach_controller" 00:28:26.548 } 00:28:26.548 EOF 00:28:26.548 )") 00:28:26.548 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:26.548 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:26.548 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:26.548 { 00:28:26.548 "params": { 00:28:26.548 "name": "Nvme$subsystem", 00:28:26.548 "trtype": "$TEST_TRANSPORT", 00:28:26.548 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:26.548 "adrfam": "ipv4", 00:28:26.548 "trsvcid": "$NVMF_PORT", 00:28:26.548 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:26.548 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:26.548 "hdgst": ${hdgst:-false}, 00:28:26.548 "ddgst": ${ddgst:-false} 00:28:26.548 }, 00:28:26.548 "method": "bdev_nvme_attach_controller" 00:28:26.548 } 00:28:26.548 EOF 00:28:26.548 )") 00:28:26.548 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:26.548 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:26.548 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:26.548 { 00:28:26.548 "params": { 00:28:26.548 "name": "Nvme$subsystem", 00:28:26.548 "trtype": "$TEST_TRANSPORT", 00:28:26.548 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:26.548 "adrfam": "ipv4", 00:28:26.548 "trsvcid": "$NVMF_PORT", 00:28:26.548 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:26.548 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:26.548 "hdgst": ${hdgst:-false}, 00:28:26.548 "ddgst": ${ddgst:-false} 00:28:26.548 }, 00:28:26.548 "method": "bdev_nvme_attach_controller" 00:28:26.548 } 00:28:26.548 EOF 00:28:26.548 )") 00:28:26.548 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:26.548 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:26.548 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- 
# config+=("$(cat <<-EOF 00:28:26.548 { 00:28:26.548 "params": { 00:28:26.548 "name": "Nvme$subsystem", 00:28:26.548 "trtype": "$TEST_TRANSPORT", 00:28:26.548 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:26.548 "adrfam": "ipv4", 00:28:26.548 "trsvcid": "$NVMF_PORT", 00:28:26.548 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:26.548 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:26.548 "hdgst": ${hdgst:-false}, 00:28:26.548 "ddgst": ${ddgst:-false} 00:28:26.549 }, 00:28:26.549 "method": "bdev_nvme_attach_controller" 00:28:26.549 } 00:28:26.549 EOF 00:28:26.549 )") 00:28:26.549 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:26.549 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:26.549 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:26.549 { 00:28:26.549 "params": { 00:28:26.549 "name": "Nvme$subsystem", 00:28:26.549 "trtype": "$TEST_TRANSPORT", 00:28:26.549 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:26.549 "adrfam": "ipv4", 00:28:26.549 "trsvcid": "$NVMF_PORT", 00:28:26.549 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:26.549 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:26.549 "hdgst": ${hdgst:-false}, 00:28:26.549 "ddgst": ${ddgst:-false} 00:28:26.549 }, 00:28:26.549 "method": "bdev_nvme_attach_controller" 00:28:26.549 } 00:28:26.549 EOF 00:28:26.549 )") 00:28:26.549 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:26.549 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:26.549 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:26.549 { 00:28:26.549 "params": { 00:28:26.549 "name": "Nvme$subsystem", 00:28:26.549 "trtype": "$TEST_TRANSPORT", 00:28:26.549 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:26.549 "adrfam": "ipv4", 00:28:26.549 "trsvcid": "$NVMF_PORT", 00:28:26.549 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:26.549 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:26.549 "hdgst": ${hdgst:-false}, 00:28:26.549 "ddgst": ${ddgst:-false} 00:28:26.549 }, 00:28:26.549 "method": "bdev_nvme_attach_controller" 00:28:26.549 } 00:28:26.549 EOF 00:28:26.549 )") 00:28:26.549 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:26.549 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:26.549 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:26.549 { 00:28:26.549 "params": { 00:28:26.549 "name": "Nvme$subsystem", 00:28:26.549 "trtype": "$TEST_TRANSPORT", 00:28:26.549 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:26.549 "adrfam": "ipv4", 00:28:26.549 "trsvcid": "$NVMF_PORT", 00:28:26.549 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:26.549 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:26.549 "hdgst": ${hdgst:-false}, 00:28:26.549 "ddgst": ${ddgst:-false} 00:28:26.549 }, 00:28:26.549 "method": "bdev_nvme_attach_controller" 00:28:26.549 } 00:28:26.549 EOF 00:28:26.549 )") 00:28:26.549 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:26.549 11:40:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:26.549 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:26.549 { 00:28:26.549 "params": { 00:28:26.549 "name": "Nvme$subsystem", 00:28:26.549 "trtype": "$TEST_TRANSPORT", 00:28:26.549 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:26.549 "adrfam": "ipv4", 00:28:26.549 "trsvcid": "$NVMF_PORT", 00:28:26.549 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:26.549 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:26.549 "hdgst": ${hdgst:-false}, 00:28:26.549 "ddgst": ${ddgst:-false} 00:28:26.549 }, 00:28:26.549 "method": "bdev_nvme_attach_controller" 00:28:26.549 } 00:28:26.549 EOF 00:28:26.549 )") 00:28:26.549 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:26.549 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:26.549 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:26.549 { 00:28:26.549 "params": { 00:28:26.549 "name": "Nvme$subsystem", 00:28:26.549 "trtype": "$TEST_TRANSPORT", 00:28:26.549 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:26.549 "adrfam": "ipv4", 00:28:26.549 "trsvcid": "$NVMF_PORT", 00:28:26.549 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:26.549 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:26.549 "hdgst": ${hdgst:-false}, 00:28:26.549 "ddgst": ${ddgst:-false} 00:28:26.549 }, 00:28:26.549 "method": "bdev_nvme_attach_controller" 00:28:26.549 } 00:28:26.549 EOF 00:28:26.549 )") 00:28:26.549 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:26.549 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:26.549 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:26.549 { 00:28:26.549 "params": { 00:28:26.549 "name": "Nvme$subsystem", 00:28:26.549 "trtype": "$TEST_TRANSPORT", 00:28:26.549 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:26.549 "adrfam": "ipv4", 00:28:26.549 "trsvcid": "$NVMF_PORT", 00:28:26.549 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:26.549 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:26.549 "hdgst": ${hdgst:-false}, 00:28:26.549 "ddgst": ${ddgst:-false} 00:28:26.549 }, 00:28:26.549 "method": "bdev_nvme_attach_controller" 00:28:26.549 } 00:28:26.549 EOF 00:28:26.549 )") 00:28:26.549 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:26.549 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 
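The ten parameter fragments assembled above are comma-joined and pretty-printed through jq; the printf output that follows is the flat list of attach parameters. For orientation, the complete document bdevperf receives follows SPDK's JSON-config layout, roughly as sketched below with only Nvme1 shown and written to the hypothetical nvmf_targets.json from the earlier example (the exact wrapper lives in nvmf/common.sh):

    # sketch: hand-written equivalent of the generated config, one controller only
    cat > ./nvmf_targets.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              },
              "method": "bdev_nvme_attach_controller"
            }
          ]
        }
      ]
    }
    EOF

Each such entry becomes one bdev_nvme_attach_controller call at bdevperf startup, yielding bdevs Nvme1n1 through Nvme10n1; Nvme1n1 is the bdev the waitforio loop polls with bdev_get_iostat further down before the target is killed mid-run.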
00:28:26.549 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:28:26.549 11:40:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:26.549 "params": { 00:28:26.549 "name": "Nvme1", 00:28:26.549 "trtype": "tcp", 00:28:26.549 "traddr": "10.0.0.2", 00:28:26.549 "adrfam": "ipv4", 00:28:26.549 "trsvcid": "4420", 00:28:26.549 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:26.549 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:26.549 "hdgst": false, 00:28:26.549 "ddgst": false 00:28:26.549 }, 00:28:26.549 "method": "bdev_nvme_attach_controller" 00:28:26.549 },{ 00:28:26.549 "params": { 00:28:26.549 "name": "Nvme2", 00:28:26.549 "trtype": "tcp", 00:28:26.549 "traddr": "10.0.0.2", 00:28:26.549 "adrfam": "ipv4", 00:28:26.549 "trsvcid": "4420", 00:28:26.549 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:26.549 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:26.549 "hdgst": false, 00:28:26.549 "ddgst": false 00:28:26.549 }, 00:28:26.549 "method": "bdev_nvme_attach_controller" 00:28:26.549 },{ 00:28:26.549 "params": { 00:28:26.549 "name": "Nvme3", 00:28:26.549 "trtype": "tcp", 00:28:26.549 "traddr": "10.0.0.2", 00:28:26.549 "adrfam": "ipv4", 00:28:26.549 "trsvcid": "4420", 00:28:26.549 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:26.549 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:26.549 "hdgst": false, 00:28:26.549 "ddgst": false 00:28:26.549 }, 00:28:26.549 "method": "bdev_nvme_attach_controller" 00:28:26.549 },{ 00:28:26.549 "params": { 00:28:26.549 "name": "Nvme4", 00:28:26.549 "trtype": "tcp", 00:28:26.549 "traddr": "10.0.0.2", 00:28:26.549 "adrfam": "ipv4", 00:28:26.549 "trsvcid": "4420", 00:28:26.549 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:26.549 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:26.549 "hdgst": false, 00:28:26.549 "ddgst": false 00:28:26.549 }, 00:28:26.549 "method": "bdev_nvme_attach_controller" 00:28:26.549 },{ 00:28:26.549 "params": { 00:28:26.549 "name": "Nvme5", 00:28:26.549 "trtype": "tcp", 00:28:26.549 "traddr": "10.0.0.2", 00:28:26.549 "adrfam": "ipv4", 00:28:26.549 "trsvcid": "4420", 00:28:26.549 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:26.549 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:26.549 "hdgst": false, 00:28:26.549 "ddgst": false 00:28:26.549 }, 00:28:26.549 "method": "bdev_nvme_attach_controller" 00:28:26.549 },{ 00:28:26.549 "params": { 00:28:26.549 "name": "Nvme6", 00:28:26.549 "trtype": "tcp", 00:28:26.549 "traddr": "10.0.0.2", 00:28:26.549 "adrfam": "ipv4", 00:28:26.549 "trsvcid": "4420", 00:28:26.549 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:26.549 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:26.549 "hdgst": false, 00:28:26.549 "ddgst": false 00:28:26.549 }, 00:28:26.549 "method": "bdev_nvme_attach_controller" 00:28:26.549 },{ 00:28:26.549 "params": { 00:28:26.549 "name": "Nvme7", 00:28:26.549 "trtype": "tcp", 00:28:26.549 "traddr": "10.0.0.2", 00:28:26.549 "adrfam": "ipv4", 00:28:26.549 "trsvcid": "4420", 00:28:26.549 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:26.549 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:26.549 "hdgst": false, 00:28:26.549 "ddgst": false 00:28:26.549 }, 00:28:26.549 "method": "bdev_nvme_attach_controller" 00:28:26.549 },{ 00:28:26.549 "params": { 00:28:26.549 "name": "Nvme8", 00:28:26.549 "trtype": "tcp", 00:28:26.549 "traddr": "10.0.0.2", 00:28:26.549 "adrfam": "ipv4", 00:28:26.549 "trsvcid": "4420", 00:28:26.549 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:26.549 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:28:26.549 "hdgst": false, 00:28:26.549 "ddgst": false 00:28:26.549 }, 00:28:26.549 "method": "bdev_nvme_attach_controller" 00:28:26.549 },{ 00:28:26.549 "params": { 00:28:26.549 "name": "Nvme9", 00:28:26.549 "trtype": "tcp", 00:28:26.549 "traddr": "10.0.0.2", 00:28:26.549 "adrfam": "ipv4", 00:28:26.550 "trsvcid": "4420", 00:28:26.550 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:26.550 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:26.550 "hdgst": false, 00:28:26.550 "ddgst": false 00:28:26.550 }, 00:28:26.550 "method": "bdev_nvme_attach_controller" 00:28:26.550 },{ 00:28:26.550 "params": { 00:28:26.550 "name": "Nvme10", 00:28:26.550 "trtype": "tcp", 00:28:26.550 "traddr": "10.0.0.2", 00:28:26.550 "adrfam": "ipv4", 00:28:26.550 "trsvcid": "4420", 00:28:26.550 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:26.550 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:26.550 "hdgst": false, 00:28:26.550 "ddgst": false 00:28:26.550 }, 00:28:26.550 "method": "bdev_nvme_attach_controller" 00:28:26.550 }' 00:28:26.550 [2024-11-02 11:40:26.854232] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:28:26.550 [2024-11-02 11:40:26.854340] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3907390 ] 00:28:26.550 [2024-11-02 11:40:26.928217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:26.807 [2024-11-02 11:40:26.975635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:28.264 Running I/O for 10 seconds... 00:28:28.522 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:28.522 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@866 -- # return 0 00:28:28.522 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:28.522 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.522 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:28.796 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.796 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:28.796 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:28.796 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:28.796 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:28:28.796 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:28:28.796 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:28:28.796 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:28:28.796 11:40:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:28.796 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:28.796 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:28.796 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.796 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:28.796 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.796 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:28:28.796 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:28:28.796 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:28:28.796 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:28:28.796 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:28:28.796 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 3907213 00:28:28.797 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' -z 3907213 ']' 00:28:28.797 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # kill -0 3907213 00:28:28.797 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # uname 00:28:28.797 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:28.797 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3907213 00:28:28.797 11:40:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:28.797 11:40:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:28:28.797 11:40:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3907213' 00:28:28.797 killing process with pid 3907213 00:28:28.797 11:40:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@971 -- # kill 3907213 00:28:28.797 11:40:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@976 -- # wait 3907213 00:28:28.797 [2024-11-02 11:40:29.008129] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bf540 is same with the state(6) to be set 00:28:28.797 [2024-11-02 11:40:29.008199] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bf540 is same with the state(6) to be set 00:28:28.797 [2024-11-02 11:40:29.008230] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bf540 is same with the state(6) to be set 00:28:28.797 [2024-11-02 11:40:29.008253] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bf540 is same with the state(6) to be set 00:28:28.797 [2024-11-02 11:40:29.008274] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bf540 is same with the state(6) to be set 00:28:28.797 [2024-11-02 11:40:29.008288] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bf540 is same with the state(6) to be set 00:28:28.797 [2024-11-02 11:40:29.008301] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bf540 is same with the state(6) to be set 00:28:28.797 [2024-11-02 11:40:29.008314] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bf540 is same with the state(6) to be set 00:28:28.797 [2024-11-02 11:40:29.008326] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bf540 is same with the state(6) to be set 00:28:28.797 [2024-11-02 11:40:29.008338] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bf540 is same with the state(6) to be set 00:28:28.797 [2024-11-02 11:40:29.008350] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bf540 is same with the state(6) to be set 00:28:28.797 [2024-11-02 11:40:29.008363] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bf540 is same with the state(6) to be set 00:28:28.797 [2024-11-02 11:40:29.008386] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bf540 is same with the state(6) to be set 00:28:28.797 [2024-11-02 11:40:29.008399] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bf540 is same with the state(6) to be set 00:28:28.797 [2024-11-02 11:40:29.008412] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bf540 is same with the state(6) to be set 00:28:28.797 [2024-11-02 11:40:29.008423] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bf540 is same with the state(6) to be set 00:28:28.797 [2024-11-02 11:40:29.008435] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bf540 is same with the state(6) to be set 00:28:28.797 [2024-11-02 11:40:29.008448] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bf540 is same with the state(6) to be set 00:28:28.797 [2024-11-02 11:40:29.008460] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bf540 is same with the state(6) to be set 00:28:28.797 [2024-11-02 11:40:29.008471] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bf540 is same with the state(6) to be set 00:28:28.797 [2024-11-02 11:40:29.008483] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bf540 is same with the state(6) to be set 00:28:28.797 [2024-11-02 11:40:29.008495] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bf540 is same with the state(6) to be set 00:28:28.797 [2024-11-02 11:40:29.008507] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bf540 is same with the state(6) to be set 00:28:28.797 [2024-11-02 11:40:29.008519] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bf540 is same with the state(6) to be set 00:28:28.797 [2024-11-02 11:40:29.008531] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bf540 is same with the 
state(6) to be set 00:28:28.797 [2024-11-02 11:40:29.008543] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bf540 is same with the state(6) to be set 00:28:28.797 [2024-11-02 11:40:29.008556] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bf540 is same with the state(6) to be set 00:28:28.797 [2024-11-02 11:40:29.008568] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bf540 is same with the state(6) to be set 00:28:28.797 [2024-11-02 11:40:29.008580] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bf540 is same with the state(6) to be set 00:28:28.797 [2024-11-02 11:40:29.008592] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bf540 is same with the state(6) to be set 00:28:28.797 [2024-11-02 11:40:29.008604] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bf540 is same with the state(6) to be set 00:28:28.797 [2024-11-02 11:40:29.008617] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bf540 is same with the state(6) to be set 00:28:28.797 [2024-11-02 11:40:29.008629] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bf540 is same with the state(6) to be set 00:28:28.797 [2024-11-02 11:40:29.008641] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bf540 is same with the state(6) to be set 00:28:28.797 [2024-11-02 11:40:29.008654] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bf540 is same with the state(6) to be set 00:28:28.797 [2024-11-02 11:40:29.008666] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bf540 is same with the state(6) to be set 00:28:28.797 [2024-11-02 11:40:29.008679] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bf540 is same with the state(6) to be set 00:28:28.797 [2024-11-02 11:40:29.008691] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bf540 is same with the state(6) to be set 00:28:28.797 [2024-11-02 11:40:29.008703] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bf540 is same with the state(6) to be set 00:28:28.797 [2024-11-02 11:40:29.008715] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bf540 is same with the state(6) to be set 00:28:28.797 [2024-11-02 11:40:29.008731] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bf540 is same with the state(6) to be set 00:28:28.797 [2024-11-02 11:40:29.008744] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bf540 is same with the state(6) to be set 00:28:28.797 [2024-11-02 11:40:29.008757] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bf540 is same with the state(6) to be set 00:28:28.797 [2024-11-02 11:40:29.008769] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bf540 is same with the state(6) to be set 00:28:28.797 [2024-11-02 11:40:29.008781] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bf540 is same with the state(6) to be set 00:28:28.797 [2024-11-02 11:40:29.008793] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bf540 is same with the state(6) to be set 00:28:28.797 [2024-11-02 11:40:29.008806] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x11bf540 is same with the state(6) to be set 00:28:28.797 [2024-11-02 11:40:29.008818] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bf540 is same with the state(6) to be set 00:28:28.797 [2024-11-02 11:40:29.008830] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bf540 is same with the state(6) to be set 00:28:28.797 [2024-11-02 11:40:29.008843] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bf540 is same with the state(6) to be set 00:28:28.797 [2024-11-02 11:40:29.008855] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bf540 is same with the state(6) to be set 00:28:28.797 [2024-11-02 11:40:29.008867] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bf540 is same with the state(6) to be set 00:28:28.797 [2024-11-02 11:40:29.008879] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bf540 is same with the state(6) to be set 00:28:28.797 [2024-11-02 11:40:29.008891] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bf540 is same with the state(6) to be set 00:28:28.797 [2024-11-02 11:40:29.008903] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bf540 is same with the state(6) to be set 00:28:28.797 [2024-11-02 11:40:29.008915] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bf540 is same with the state(6) to be set 00:28:28.797 [2024-11-02 11:40:29.008928] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bf540 is same with the state(6) to be set 00:28:28.797 [2024-11-02 11:40:29.008940] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bf540 is same with the state(6) to be set 00:28:28.797 [2024-11-02 11:40:29.008952] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bf540 is same with the state(6) to be set 00:28:28.797 [2024-11-02 11:40:29.008964] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bf540 is same with the state(6) to be set 00:28:28.797 [2024-11-02 11:40:29.008976] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bf540 is same with the state(6) to be set 00:28:28.797 [2024-11-02 11:40:29.008988] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bf540 is same with the state(6) to be set 00:28:28.797 [2024-11-02 11:40:29.009000] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bf540 is same with the state(6) to be set 00:28:28.797 [2024-11-02 11:40:29.009917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.797 [2024-11-02 11:40:29.009957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.797 [2024-11-02 11:40:29.009987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.797 [2024-11-02 11:40:29.010009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.797 [2024-11-02 11:40:29.010026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.797 [2024-11-02 11:40:29.010040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.797 [2024-11-02 11:40:29.010055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.797 [2024-11-02 11:40:29.010069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.797 [2024-11-02 11:40:29.010085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.797 [2024-11-02 11:40:29.010098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.797 [2024-11-02 11:40:29.010113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.797 [2024-11-02 11:40:29.010127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.798 [2024-11-02 11:40:29.010142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.798 [2024-11-02 11:40:29.010156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.798 [2024-11-02 11:40:29.010171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.798 [2024-11-02 11:40:29.010185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.798 [2024-11-02 11:40:29.010200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.798 [2024-11-02 11:40:29.010213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.798 [2024-11-02 11:40:29.010228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.798 [2024-11-02 11:40:29.010248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.798 [2024-11-02 11:40:29.010273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.798 [2024-11-02 11:40:29.010288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.798 [2024-11-02 11:40:29.010304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.798 [2024-11-02 11:40:29.010317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.798 [2024-11-02 11:40:29.010332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:28:28.798 [2024-11-02 11:40:29.010345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.798 [2024-11-02 11:40:29.010360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.798 [2024-11-02 11:40:29.010374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.798 [2024-11-02 11:40:29.010393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.798 [2024-11-02 11:40:29.010407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.798 [2024-11-02 11:40:29.010423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.798 [2024-11-02 11:40:29.010436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.798 [2024-11-02 11:40:29.010452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.798 [2024-11-02 11:40:29.010466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.798 [2024-11-02 11:40:29.010481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.798 [2024-11-02 11:40:29.010495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.798 [2024-11-02 11:40:29.010510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.798 [2024-11-02 11:40:29.010523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.798 [2024-11-02 11:40:29.010549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.798 [2024-11-02 11:40:29.010563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.798 [2024-11-02 11:40:29.010578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.798 [2024-11-02 11:40:29.010591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.798 [2024-11-02 11:40:29.010606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.798 [2024-11-02 11:40:29.010620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.798 [2024-11-02 11:40:29.010635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:28.798 [2024-11-02 11:40:29.010648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.798 [2024-11-02 11:40:29.010663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.798 [2024-11-02 11:40:29.010676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.798 [2024-11-02 11:40:29.010691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.798 [2024-11-02 11:40:29.010705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.798 [2024-11-02 11:40:29.010720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.798 [2024-11-02 11:40:29.010733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.798 [2024-11-02 11:40:29.010748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.798 [2024-11-02 11:40:29.010765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.798 [2024-11-02 11:40:29.010780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.798 [2024-11-02 11:40:29.010794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.798 [2024-11-02 11:40:29.010809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.798 [2024-11-02 11:40:29.010822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.798 [2024-11-02 11:40:29.010838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.798 [2024-11-02 11:40:29.010852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.798 [2024-11-02 11:40:29.010867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.798 [2024-11-02 11:40:29.010881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.798 [2024-11-02 11:40:29.010896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.798 [2024-11-02 11:40:29.010909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.798 [2024-11-02 11:40:29.010924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:28.798 [2024-11-02 11:40:29.010939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.798 [2024-11-02 11:40:29.010955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.798 [2024-11-02 11:40:29.010968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.798 [2024-11-02 11:40:29.010983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.798 [2024-11-02 11:40:29.010997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.798 [2024-11-02 11:40:29.011012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.798 [2024-11-02 11:40:29.011025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.798 [2024-11-02 11:40:29.011040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.798 [2024-11-02 11:40:29.011053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.798 [2024-11-02 11:40:29.011069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.798 [2024-11-02 11:40:29.011082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.798 [2024-11-02 11:40:29.011098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.798 [2024-11-02 11:40:29.011111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.798 [2024-11-02 11:40:29.011130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.798 [2024-11-02 11:40:29.011144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.798 [2024-11-02 11:40:29.011160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.798 [2024-11-02 11:40:29.011173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.798 [2024-11-02 11:40:29.011188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.798 [2024-11-02 11:40:29.011201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.798 [2024-11-02 11:40:29.011216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.798 
[2024-11-02 11:40:29.011209] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bfa10 is same with the state(6) to be set
00:28:28.798 [2024-11-02 11:40:29.011230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:28.798 [2024-11-02 11:40:29.011252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.798 [2024-11-02 11:40:29.011252] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bfa10 is same with the state(6) to be set
00:28:28.798 [2024-11-02 11:40:29.011275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:28.798 [2024-11-02 11:40:29.011278] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bfa10 is same with the state(6) to be set
00:28:28.799 [2024-11-02 11:40:29.011291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.799 [2024-11-02 11:40:29.011292] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bfa10 is same with the state(6) to be set
00:28:28.799 [2024-11-02 11:40:29.011307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:28.799 [2024-11-02 11:40:29.011307] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bfa10 is same with the state(6) to be set
00:28:28.799 [2024-11-02 11:40:29.011323] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bfa10 is same with the state(6) to be set
00:28:28.799 [2024-11-02 11:40:29.011325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.799 [2024-11-02 11:40:29.011335] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bfa10 is same with the state(6) to be set
00:28:28.799 [2024-11-02 11:40:29.011339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:28.799 [2024-11-02 11:40:29.011348] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bfa10 is same with the state(6) to be set
00:28:28.799 [2024-11-02 11:40:29.011355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.799 [2024-11-02 11:40:29.011360] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bfa10 is same with the state(6) to be set
00:28:28.799 [2024-11-02 11:40:29.011368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:28.799 [2024-11-02 11:40:29.011380] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bfa10 is same with the state(6) to be set
00:28:28.799 [2024-11-02 11:40:29.011384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.799 [2024-11-02 11:40:29.011402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:28.799 [2024-11-02 11:40:29.011402] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bfa10 is same with the state(6) to be set
00:28:28.799 [2024-11-02 11:40:29.011418] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bfa10 is same with the state(6) to be set
00:28:28.799 [2024-11-02 11:40:29.011420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.799 [2024-11-02 11:40:29.011430] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bfa10 is same with the state(6) to be set
00:28:28.799 [2024-11-02 11:40:29.011434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:28.799 [2024-11-02 11:40:29.011443] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bfa10 is same with the state(6) to be set
00:28:28.799 [2024-11-02 11:40:29.011450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.799 [2024-11-02 11:40:29.011456] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bfa10 is same with the state(6) to be set
00:28:28.799 [2024-11-02 11:40:29.011464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:28.799 [2024-11-02 11:40:29.011468] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bfa10 is same with the state(6) to be set
00:28:28.799 [2024-11-02 11:40:29.011480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.799 [2024-11-02 11:40:29.011481] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bfa10 is same with the state(6) to be set
00:28:28.799 [2024-11-02 11:40:29.011495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:28.799 [2024-11-02 11:40:29.011496] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bfa10 is same with the state(6) to be set
00:28:28.799 [2024-11-02 11:40:29.011511] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bfa10 is same with the state(6) to be set
00:28:28.799 [2024-11-02 11:40:29.011513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.799 [2024-11-02 11:40:29.011522] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bfa10 is same with the state(6) to be set
00:28:28.799 [2024-11-02 11:40:29.011527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:28.799 [2024-11-02 11:40:29.011535] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bfa10 is same with the state(6) to be set
00:28:28.799 [2024-11-02 11:40:29.011552] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bfa10 is same with the state(6) to be set
00:28:28.799 [2024-11-02 11:40:29.011553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.799 [2024-11-02 11:40:29.011566] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bfa10 is same with the state(6) to be set
00:28:28.799 [2024-11-02 11:40:29.011568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:28.799 [2024-11-02 11:40:29.011581] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bfa10 is same with the state(6) to be set
00:28:28.799 [2024-11-02 11:40:29.011585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.799 [2024-11-02 11:40:29.011597] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bfa10 is same with the state(6) to be set
00:28:28.799 [2024-11-02 11:40:29.011600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:28.799 [2024-11-02 11:40:29.011610] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bfa10 is same with the state(6) to be set
00:28:28.799 [2024-11-02 11:40:29.011615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.799 [2024-11-02 11:40:29.011622] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bfa10 is same with the state(6) to be set
00:28:28.799 [2024-11-02 11:40:29.011629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:28.799 [2024-11-02 11:40:29.011635] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bfa10 is same with the state(6) to be set
00:28:28.799 [2024-11-02 11:40:29.011645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.799 [2024-11-02 11:40:29.011647] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bfa10 is same with the state(6) to be set
00:28:28.799 [2024-11-02 11:40:29.011659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:28.799 [2024-11-02 11:40:29.011660] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bfa10 is same with the state(6) to be set
00:28:28.799 [2024-11-02 11:40:29.011674] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bfa10 is same with the state(6) to be set
00:28:28.799 [2024-11-02 11:40:29.011676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.799 [2024-11-02 11:40:29.011686] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bfa10 is same with the state(6) to be set
00:28:28.799 [2024-11-02 11:40:29.011690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:28.799 [2024-11-02 11:40:29.011699] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bfa10 is same with the state(6) to be set
00:28:28.799 [2024-11-02 11:40:29.011705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.799 [2024-11-02 11:40:29.011712] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bfa10 is same with the state(6) to be set
00:28:28.799 [2024-11-02 11:40:29.011719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:28.799 [2024-11-02 11:40:29.011725] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bfa10 is same with the state(6) to be set
00:28:28.799 [2024-11-02 11:40:29.011735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.799 [2024-11-02 11:40:29.011737] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bfa10 is same with the state(6) to be set
00:28:28.799 [2024-11-02 11:40:29.011749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:28.799 [2024-11-02 11:40:29.011750] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bfa10 is same with the state(6) to be set
00:28:28.799 [2024-11-02 11:40:29.011764] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bfa10 is same with the state(6) to be set
00:28:28.799 [2024-11-02 11:40:29.011766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.799 [2024-11-02 11:40:29.011779] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bfa10 is same with the state(6) to be set
00:28:28.799 [2024-11-02 11:40:29.011782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:28.799 [2024-11-02 11:40:29.011793] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bfa10 is same with the state(6) to be set
00:28:28.799 [2024-11-02 11:40:29.011798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.799 [2024-11-02 11:40:29.011805] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bfa10 is same with the state(6) to be set
00:28:28.799 [2024-11-02 11:40:29.011812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:28.799 [2024-11-02 11:40:29.011818] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bfa10 is same with the state(6) to be set
00:28:28.799 [2024-11-02 11:40:29.011828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.799 [2024-11-02 11:40:29.011831] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bfa10 is same with the state(6) to be set
00:28:28.799 [2024-11-02 11:40:29.011841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:28.799 [2024-11-02 11:40:29.011844] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bfa10 is same with the state(6) to be set
00:28:28.799 [2024-11-02 11:40:29.011856] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bfa10 is same with the state(6) to be set
00:28:28.799 [2024-11-02 11:40:29.011857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.799 [2024-11-02 11:40:29.011870] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bfa10 is same with the state(6) to be set
00:28:28.799 [2024-11-02 11:40:29.011872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:28.800 [2024-11-02 11:40:29.011885] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bfa10 is same with the state(6) to be set
00:28:28.800 [2024-11-02 11:40:29.011888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.800 [2024-11-02 11:40:29.011897] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bfa10 is same with the state(6) to be set
00:28:28.800 [2024-11-02 11:40:29.011902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:28.800 [2024-11-02 11:40:29.011910] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bfa10 is same with the state(6) to be set
00:28:28.800 [2024-11-02 11:40:29.011922] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bfa10 is same with the state(6) to be set
00:28:28.800 [2024-11-02 11:40:29.011934] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bfa10 is same with the state(6) to be set
00:28:28.800 [2024-11-02 11:40:29.011946] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bfa10 is same with the state(6) to be set
00:28:28.800 [2024-11-02 11:40:29.011947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:28.800 [2024-11-02 11:40:29.011958] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bfa10 is same with the state(6) to be set
00:28:28.800 [2024-11-02 11:40:29.011978] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bfa10 is same with the state(6) to be set
00:28:28.800 [2024-11-02 11:40:29.011991] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bfa10 is same with the state(6) to be set
00:28:28.800 [2024-11-02 11:40:29.012003] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bfa10 is same with the state(6) to be set
00:28:28.800 [2024-11-02 11:40:29.012015] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bfa10 is same with the state(6) to be set
00:28:28.800 [2024-11-02 11:40:29.012027] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bfa10 is same with the state(6) to be set
00:28:28.800 [2024-11-02 11:40:29.012039] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bfa10 is same with the state(6) to be set
00:28:28.800 [2024-11-02 11:40:29.012051] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bfa10 is same with the state(6) to be set
00:28:28.800 [2024-11-02 11:40:29.012063] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bfa10 is same
with the state(6) to be set 00:28:28.800 [2024-11-02 11:40:29.012074] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bfa10 is same with the state(6) to be set 00:28:28.800 [2024-11-02 11:40:29.012086] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bfa10 is same with the state(6) to be set 00:28:28.800 [2024-11-02 11:40:29.012120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.800 [2024-11-02 11:40:29.012143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.800 [2024-11-02 11:40:29.012159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.800 [2024-11-02 11:40:29.012172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.800 [2024-11-02 11:40:29.012186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.800 [2024-11-02 11:40:29.012198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.800 [2024-11-02 11:40:29.012212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.800 [2024-11-02 11:40:29.012225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.800 [2024-11-02 11:40:29.012248] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f3b50 is same with the state(6) to be set 00:28:28.800 [2024-11-02 11:40:29.012304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.800 [2024-11-02 11:40:29.012323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.800 [2024-11-02 11:40:29.012338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.800 [2024-11-02 11:40:29.012351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.800 [2024-11-02 11:40:29.012365] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.800 [2024-11-02 11:40:29.012378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.800 [2024-11-02 11:40:29.012391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.800 [2024-11-02 11:40:29.012403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.800 [2024-11-02 11:40:29.012421] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231c230 is same with the state(6) to be set 00:28:28.800 [2024-11-02 11:40:29.012515] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.800 [2024-11-02 11:40:29.012536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.800 [2024-11-02 11:40:29.012551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.800 [2024-11-02 11:40:29.012564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.800 [2024-11-02 11:40:29.012578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.800 [2024-11-02 11:40:29.012591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.800 [2024-11-02 11:40:29.012604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.800 [2024-11-02 11:40:29.012617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.800 [2024-11-02 11:40:29.012630] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea21c0 is same with the state(6) to be set 00:28:28.800 [2024-11-02 11:40:29.012675] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.800 [2024-11-02 11:40:29.012695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.800 [2024-11-02 11:40:29.012710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.800 [2024-11-02 11:40:29.012723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.800 [2024-11-02 11:40:29.012739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.800 [2024-11-02 11:40:29.012752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.800 [2024-11-02 11:40:29.012767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.800 [2024-11-02 11:40:29.012780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.800 [2024-11-02 11:40:29.012793] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea2620 is same with the state(6) to be set 00:28:28.800 [2024-11-02 11:40:29.013589] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bff00 is same with the state(6) to be set 00:28:28.800 [2024-11-02 11:40:29.013625] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bff00 is same with the state(6) to be set 00:28:28.800 [2024-11-02 11:40:29.013641] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bff00 is same with the state(6) to be set 
00:28:28.800 [2024-11-02 11:40:29.013654] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bff00 is same with the state(6) to be set 00:28:28.800 [2024-11-02 11:40:29.013666] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bff00 is same with the state(6) to be set 00:28:28.800 [2024-11-02 11:40:29.013678] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bff00 is same with the state(6) to be set 00:28:28.800 [2024-11-02 11:40:29.013696] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bff00 is same with the state(6) to be set 00:28:28.800 [2024-11-02 11:40:29.013709] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bff00 is same with the state(6) to be set 00:28:28.800 [2024-11-02 11:40:29.013721] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bff00 is same with the state(6) to be set 00:28:28.800 [2024-11-02 11:40:29.013733] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bff00 is same with the state(6) to be set 00:28:28.800 [2024-11-02 11:40:29.013746] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bff00 is same with the state(6) to be set 00:28:28.800 [2024-11-02 11:40:29.013758] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bff00 is same with the state(6) to be set 00:28:28.800 [2024-11-02 11:40:29.013770] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bff00 is same with the state(6) to be set 00:28:28.800 [2024-11-02 11:40:29.013782] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bff00 is same with the state(6) to be set 00:28:28.800 [2024-11-02 11:40:29.013794] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bff00 is same with the state(6) to be set 00:28:28.800 [2024-11-02 11:40:29.013805] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bff00 is same with the state(6) to be set 00:28:28.800 [2024-11-02 11:40:29.013817] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bff00 is same with the state(6) to be set 00:28:28.800 [2024-11-02 11:40:29.013829] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bff00 is same with the state(6) to be set 00:28:28.800 [2024-11-02 11:40:29.013841] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bff00 is same with the state(6) to be set 00:28:28.800 [2024-11-02 11:40:29.013853] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bff00 is same with the state(6) to be set 00:28:28.800 [2024-11-02 11:40:29.013865] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bff00 is same with the state(6) to be set 00:28:28.800 [2024-11-02 11:40:29.013877] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bff00 is same with the state(6) to be set 00:28:28.800 [2024-11-02 11:40:29.013889] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bff00 is same with the state(6) to be set 00:28:28.800 [2024-11-02 11:40:29.013900] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bff00 is same with the state(6) to be set 00:28:28.800 [2024-11-02 11:40:29.013912] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x11bff00 is same with the state(6) to be set 00:28:28.801 [2024-11-02 11:40:29.013924] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bff00 is same with the state(6) to be set 00:28:28.801 [2024-11-02 11:40:29.013936] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bff00 is same with the state(6) to be set 00:28:28.801 [2024-11-02 11:40:29.013948] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bff00 is same with the state(6) to be set 00:28:28.801 [2024-11-02 11:40:29.013960] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bff00 is same with the state(6) to be set 00:28:28.801 [2024-11-02 11:40:29.013972] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bff00 is same with the state(6) to be set 00:28:28.801 [2024-11-02 11:40:29.013984] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bff00 is same with the state(6) to be set 00:28:28.801 [2024-11-02 11:40:29.013996] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bff00 is same with the state(6) to be set 00:28:28.801 [2024-11-02 11:40:29.014008] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bff00 is same with the state(6) to be set 00:28:28.801 [2024-11-02 11:40:29.014019] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bff00 is same with the state(6) to be set 00:28:28.801 [2024-11-02 11:40:29.014035] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bff00 is same with the state(6) to be set 00:28:28.801 [2024-11-02 11:40:29.014047] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bff00 is same with the state(6) to be set 00:28:28.801 [2024-11-02 11:40:29.014059] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bff00 is same with the state(6) to be set 00:28:28.801 [2024-11-02 11:40:29.014071] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bff00 is same with the state(6) to be set 00:28:28.801 [2024-11-02 11:40:29.014083] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bff00 is same with the state(6) to be set 00:28:28.801 [2024-11-02 11:40:29.014095] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bff00 is same with the state(6) to be set 00:28:28.801 [2024-11-02 11:40:29.014107] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bff00 is same with the state(6) to be set 00:28:28.801 [2024-11-02 11:40:29.014119] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bff00 is same with the state(6) to be set 00:28:28.801 [2024-11-02 11:40:29.014131] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bff00 is same with the state(6) to be set 00:28:28.801 [2024-11-02 11:40:29.014143] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bff00 is same with the state(6) to be set 00:28:28.801 [2024-11-02 11:40:29.014154] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bff00 is same with the state(6) to be set 00:28:28.801 [2024-11-02 11:40:29.014167] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bff00 is same with the state(6) to be set 00:28:28.801 [2024-11-02 11:40:29.014178] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bff00 is same with the state(6) to be set 00:28:28.801 [2024-11-02 11:40:29.014190] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bff00 is same with the state(6) to be set 00:28:28.801 [2024-11-02 11:40:29.014202] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bff00 is same with the state(6) to be set 00:28:28.801 [2024-11-02 11:40:29.014214] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bff00 is same with the state(6) to be set 00:28:28.801 [2024-11-02 11:40:29.014225] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bff00 is same with the state(6) to be set 00:28:28.801 [2024-11-02 11:40:29.014246] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bff00 is same with the state(6) to be set 00:28:28.801 [2024-11-02 11:40:29.014266] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bff00 is same with the state(6) to be set 00:28:28.801 [2024-11-02 11:40:29.014280] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bff00 is same with the state(6) to be set 00:28:28.801 [2024-11-02 11:40:29.014292] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bff00 is same with the state(6) to be set 00:28:28.801 [2024-11-02 11:40:29.014304] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bff00 is same with the state(6) to be set 00:28:28.801 [2024-11-02 11:40:29.014315] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bff00 is same with the state(6) to be set 00:28:28.801 [2024-11-02 11:40:29.014327] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bff00 is same with the state(6) to be set 00:28:28.801 [2024-11-02 11:40:29.014345] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bff00 is same with the state(6) to be set 00:28:28.801 [2024-11-02 11:40:29.014357] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bff00 is same with the state(6) to be set 00:28:28.801 [2024-11-02 11:40:29.014370] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bff00 is same with the state(6) to be set 00:28:28.801 [2024-11-02 11:40:29.014385] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bff00 is same with the state(6) to be set 00:28:28.801 [2024-11-02 11:40:29.014398] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bff00 is same with the state(6) to be set 00:28:28.801 [2024-11-02 11:40:29.015781] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:28:28.801 [2024-11-02 11:40:29.015819] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f3b50 (9): Bad file descriptor 00:28:28.801 [2024-11-02 11:40:29.015908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.801 [2024-11-02 11:40:29.015928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.801 [2024-11-02 11:40:29.015948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.801 [2024-11-02 11:40:29.015964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.801 [2024-11-02 11:40:29.015980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.801 [2024-11-02 11:40:29.015993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.801 [2024-11-02 11:40:29.016009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.801 [2024-11-02 11:40:29.016023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.801 [2024-11-02 11:40:29.016038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.801 [2024-11-02 11:40:29.016052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.801 [2024-11-02 11:40:29.016068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.801 [2024-11-02 11:40:29.016081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.801 [2024-11-02 11:40:29.016097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.801 [2024-11-02 11:40:29.016110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.801 [2024-11-02 11:40:29.016125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.801 [2024-11-02 11:40:29.016138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.801 [2024-11-02 11:40:29.016153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.801 [2024-11-02 11:40:29.016167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.801 [2024-11-02 11:40:29.016182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.801 [2024-11-02 11:40:29.016196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.801 [2024-11-02 11:40:29.016212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.801 [2024-11-02 11:40:29.016235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.801 [2024-11-02 11:40:29.016254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0
00:28:28.801 [2024-11-02 11:40:29.016278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:28.801 [2024-11-02 11:40:29.016294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.801 [2024-11-02 11:40:29.016308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:28.801 [2024-11-02 11:40:29.016324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.801 [2024-11-02 11:40:29.016338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:28.801 [2024-11-02 11:40:29.016333] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c08c0 is same with the state(6) to be set
00:28:28.802 [2024-11-02 11:40:29.016355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.802 [2024-11-02 11:40:29.016359] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c08c0 is same with the state(6) to be set
00:28:28.802 [2024-11-02 11:40:29.016369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:28.802 [2024-11-02 11:40:29.016372] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c08c0 is same with the state(6) to be set
00:28:28.802 [2024-11-02 11:40:29.016384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.802 [2024-11-02 11:40:29.016385] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c08c0 is same with the state(6) to be set
00:28:28.802 [2024-11-02 11:40:29.016400] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c08c0 is same with the state(6) to be set
00:28:28.802 [2024-11-02 11:40:29.016400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:28.802 [2024-11-02 11:40:29.016414] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c08c0 is same with the state(6) to be set
00:28:28.802 [2024-11-02 11:40:29.016418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.802 [2024-11-02 11:40:29.016427] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c08c0 is same with the state(6) to be set
00:28:28.802 [2024-11-02 11:40:29.016433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:28.802 [2024-11-02 11:40:29.016439] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c08c0 is same with the state(6) to be set
00:28:28.802 [2024-11-02 11:40:29.016449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.802 [2024-11-02 11:40:29.016452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c08c0 is same with the state(6) to be set
00:28:28.802 [2024-11-02 11:40:29.016463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:28.802 [2024-11-02 11:40:29.016464] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c08c0 is same with the state(6) to be set
00:28:28.802 [2024-11-02 11:40:29.016478] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c08c0 is same with the state(6) to be set
00:28:28.802 [2024-11-02 11:40:29.016478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.802 [2024-11-02 11:40:29.016497] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c08c0 is same with the state(6) to be set
00:28:28.802 [2024-11-02 11:40:29.016498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:28.802 [2024-11-02 11:40:29.016511] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c08c0 is same with the state(6) to be set
00:28:28.802 [2024-11-02 11:40:29.016516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.802 [2024-11-02 11:40:29.016524] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c08c0 is same with the state(6) to be set
00:28:28.802 [2024-11-02 11:40:29.016530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:28.802 [2024-11-02 11:40:29.016536] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c08c0 is same with the state(6) to be set
00:28:28.802 [2024-11-02 11:40:29.016551] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c08c0 is same with the state(6) to be set
00:28:28.802 [2024-11-02 11:40:29.016555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.802 [2024-11-02 11:40:29.016563] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c08c0 is same with the state(6) to be set
00:28:28.802 [2024-11-02 11:40:29.016569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:28.802 [2024-11-02 11:40:29.016575] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c08c0 is same with the state(6) to be set
00:28:28.802 [2024-11-02 11:40:29.016585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.802 [2024-11-02 11:40:29.016588] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c08c0 is same with the state(6) to be set
00:28:28.802 [2024-11-02 11:40:29.016598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:28.802 [2024-11-02 11:40:29.016600] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c08c0 is same with the state(6) to be set
00:28:28.802 [2024-11-02 11:40:29.016613] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c08c0 is same with the state(6) to be set
00:28:28.802 [2024-11-02 11:40:29.016616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.802 [2024-11-02 11:40:29.016625] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c08c0 is same with the state(6) to be set
00:28:28.802 [2024-11-02 11:40:29.016630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:28.802 [2024-11-02 11:40:29.016638] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c08c0 is same with the state(6) to be set
00:28:28.802 [2024-11-02 11:40:29.016646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.802 [2024-11-02 11:40:29.016651] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c08c0 is same with the state(6) to be set
00:28:28.802 [2024-11-02 11:40:29.016660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:28.802 [2024-11-02 11:40:29.016663] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c08c0 is same with the state(6) to be set
00:28:28.802 [2024-11-02 11:40:29.016676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.802 [2024-11-02 11:40:29.016679] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c08c0 is same with the state(6) to be set
00:28:28.802 [2024-11-02 11:40:29.016690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:28.802 [2024-11-02 11:40:29.016692] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c08c0 is same with the state(6) to be set
00:28:28.802 [2024-11-02 11:40:29.016705] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c08c0 is same with the state(6) to be set
00:28:28.802 [2024-11-02 11:40:29.016705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.802 [2024-11-02 11:40:29.016719] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c08c0 is same with the state(6) to be set
00:28:28.802 [2024-11-02 11:40:29.016720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:28.802 [2024-11-02 11:40:29.016733] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c08c0 is same with the state(6) to be set
00:28:28.802 [2024-11-02 11:40:29.016737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.802 [2024-11-02 11:40:29.016745] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c08c0 is same with the state(6) to be set
00:28:28.802 [2024-11-02 11:40:29.016750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:28.802 [2024-11-02 11:40:29.016757] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x11c08c0 is same with the state(6) to be set 00:28:28.802 [2024-11-02 11:40:29.016766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.802 [2024-11-02 11:40:29.016770] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c08c0 is same with the state(6) to be set 00:28:28.802 [2024-11-02 11:40:29.016779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.802 [2024-11-02 11:40:29.016790] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c08c0 is same with the state(6) to be set 00:28:28.802 [2024-11-02 11:40:29.016795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.802 [2024-11-02 11:40:29.016803] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c08c0 is same with the state(6) to be set 00:28:28.802 [2024-11-02 11:40:29.016808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.802 [2024-11-02 11:40:29.016816] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c08c0 is same with the state(6) to be set 00:28:28.802 [2024-11-02 11:40:29.016824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.802 [2024-11-02 11:40:29.016829] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c08c0 is same with the state(6) to be set 00:28:28.802 [2024-11-02 11:40:29.016837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.802 [2024-11-02 11:40:29.016841] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c08c0 is same with the state(6) to be set 00:28:28.802 [2024-11-02 11:40:29.016853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:12[2024-11-02 11:40:29.016854] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c08c0 is same with 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.802 the state(6) to be set 00:28:28.802 [2024-11-02 11:40:29.016871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-11-02 11:40:29.016872] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c08c0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.802 the state(6) to be set 00:28:28.802 [2024-11-02 11:40:29.016887] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c08c0 is same with the state(6) to be set 00:28:28.802 [2024-11-02 11:40:29.016888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.802 [2024-11-02 11:40:29.016899] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c08c0 is same with the state(6) to be set 00:28:28.802 [2024-11-02 11:40:29.016902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.802 [2024-11-02 11:40:29.016911] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c08c0 is same with the state(6) to be set 00:28:28.802 [2024-11-02 11:40:29.016917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.802 [2024-11-02 11:40:29.016924] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c08c0 is same with the state(6) to be set 00:28:28.802 [2024-11-02 11:40:29.016931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.802 [2024-11-02 11:40:29.016936] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c08c0 is same with the state(6) to be set 00:28:28.803 [2024-11-02 11:40:29.016947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.803 [2024-11-02 11:40:29.016949] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c08c0 is same with the state(6) to be set 00:28:28.803 [2024-11-02 11:40:29.016960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.803 [2024-11-02 11:40:29.016962] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c08c0 is same with the state(6) to be set 00:28:28.803 [2024-11-02 11:40:29.016976] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c08c0 is same with [2024-11-02 11:40:29.016976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:12the state(6) to be set 00:28:28.803 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.803 [2024-11-02 11:40:29.016990] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c08c0 is same with [2024-11-02 11:40:29.016991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(6) to be set 00:28:28.803 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.803 [2024-11-02 11:40:29.017004] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c08c0 is same with the state(6) to be set 00:28:28.803 [2024-11-02 11:40:29.017008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.803 [2024-11-02 11:40:29.017017] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c08c0 is same with the state(6) to be set 00:28:28.803 [2024-11-02 11:40:29.017022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.803 [2024-11-02 11:40:29.017029] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c08c0 is same with the state(6) to be set 00:28:28.803 [2024-11-02 11:40:29.017038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.803 [2024-11-02 11:40:29.017041] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c08c0 is same with the state(6) to be set 00:28:28.803 [2024-11-02 11:40:29.017055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-11-02 11:40:29.017057] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c08c0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.803 the state(6) to be set 00:28:28.803 [2024-11-02 11:40:29.017071] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c08c0 is same with the state(6) to be set 00:28:28.803 [2024-11-02 11:40:29.017073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.803 [2024-11-02 11:40:29.017083] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c08c0 is same with the state(6) to be set 00:28:28.803 [2024-11-02 11:40:29.017087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.803 [2024-11-02 11:40:29.017096] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c08c0 is same with the state(6) to be set 00:28:28.803 [2024-11-02 11:40:29.017102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.803 [2024-11-02 11:40:29.017108] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c08c0 is same with the state(6) to be set 00:28:28.803 [2024-11-02 11:40:29.017116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.803 [2024-11-02 11:40:29.017121] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c08c0 is same with the state(6) to be set 00:28:28.803 [2024-11-02 11:40:29.017131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:12[2024-11-02 11:40:29.017133] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c08c0 is same with 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.803 the state(6) to be set 00:28:28.803 [2024-11-02 11:40:29.017146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-11-02 11:40:29.017147] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c08c0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.803 the state(6) to be set 00:28:28.803 [2024-11-02 11:40:29.017161] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c08c0 is same with the state(6) to be set 00:28:28.803 [2024-11-02 11:40:29.017164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.803 [2024-11-02 11:40:29.017173] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c08c0 is same with the state(6) to be set 00:28:28.803 [2024-11-02 11:40:29.017177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.803 [2024-11-02 11:40:29.017192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.803 [2024-11-02 11:40:29.017205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.803 [2024-11-02 11:40:29.017220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.803 [2024-11-02 11:40:29.017233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.803 [2024-11-02 11:40:29.017253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.803 [2024-11-02 11:40:29.017275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.803 [2024-11-02 11:40:29.017294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.803 [2024-11-02 11:40:29.017308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.803 [2024-11-02 11:40:29.017322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.803 [2024-11-02 11:40:29.017336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.803 [2024-11-02 11:40:29.017350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.803 [2024-11-02 11:40:29.017363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.803 [2024-11-02 11:40:29.017377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.803 [2024-11-02 11:40:29.017390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.803 [2024-11-02 11:40:29.017405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.803 [2024-11-02 11:40:29.017418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.803 [2024-11-02 11:40:29.017433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.803 [2024-11-02 11:40:29.017445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.803 [2024-11-02 11:40:29.017460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.803 [2024-11-02 11:40:29.017473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.803 [2024-11-02 11:40:29.017488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.803 [2024-11-02 11:40:29.017501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.803 [2024-11-02 11:40:29.017515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:28.803 [2024-11-02 11:40:29.017528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.803 [2024-11-02 11:40:29.017551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.803 [2024-11-02 11:40:29.017564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.803 [2024-11-02 11:40:29.017578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.803 [2024-11-02 11:40:29.017591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.803 [2024-11-02 11:40:29.017605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.803 [2024-11-02 11:40:29.017624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.803 [2024-11-02 11:40:29.017639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.803 [2024-11-02 11:40:29.017657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.803 [2024-11-02 11:40:29.017673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.803 [2024-11-02 11:40:29.017686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.803 [2024-11-02 11:40:29.017701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.803 [2024-11-02 11:40:29.017714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.803 [2024-11-02 11:40:29.017729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.803 [2024-11-02 11:40:29.017742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.803 [2024-11-02 11:40:29.017756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.803 [2024-11-02 11:40:29.017769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.803 [2024-11-02 11:40:29.017784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.803 [2024-11-02 11:40:29.017798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.803 [2024-11-02 11:40:29.017812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:28.803 [2024-11-02 11:40:29.017825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.803 [2024-11-02 11:40:29.017839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.803 [2024-11-02 11:40:29.017852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.804 [2024-11-02 11:40:29.018163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.804 [2024-11-02 11:40:29.018186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.804 [2024-11-02 11:40:29.018206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.804 [2024-11-02 11:40:29.018220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.804 [2024-11-02 11:40:29.018235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.804 [2024-11-02 11:40:29.018248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.804 [2024-11-02 11:40:29.018272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.804 [2024-11-02 11:40:29.018287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.804 [2024-11-02 11:40:29.018302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.804 [2024-11-02 11:40:29.018316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.804 [2024-11-02 11:40:29.018336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.804 [2024-11-02 11:40:29.018349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.804 [2024-11-02 11:40:29.018364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.804 [2024-11-02 11:40:29.018377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.804 [2024-11-02 11:40:29.018392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.804 [2024-11-02 11:40:29.018405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.804 [2024-11-02 11:40:29.018419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.804 
[2024-11-02 11:40:29.018433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.804 [2024-11-02 11:40:29.018448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.804 [2024-11-02 11:40:29.018461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.804 [2024-11-02 11:40:29.018475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.804 [2024-11-02 11:40:29.018488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.804 [2024-11-02 11:40:29.018503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.804 [2024-11-02 11:40:29.018516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.804 [2024-11-02 11:40:29.018531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.804 [2024-11-02 11:40:29.018544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.804 [2024-11-02 11:40:29.018566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.804 [2024-11-02 11:40:29.018580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.804 [2024-11-02 11:40:29.018596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.804 [2024-11-02 11:40:29.018609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.804 [2024-11-02 11:40:29.018623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.804 [2024-11-02 11:40:29.018637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.804 [2024-11-02 11:40:29.018651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.804 [2024-11-02 11:40:29.018665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.804 [2024-11-02 11:40:29.018666] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c0d90 is same with the state(6) to be set 00:28:28.804 [2024-11-02 11:40:29.018679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.804 [2024-11-02 11:40:29.018698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.804 [2024-11-02 11:40:29.018699] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c0d90 is same with the state(6) to be set 00:28:28.804
[interleaved output, 2024-11-02 11:40:29.018712 to 11:40:29.019497: nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 through cid:46 nsid:1 lba:19200 through 22272 (step 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each followed by nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0, interspersed with repeated tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c0d90 is same with the state(6) to be set]
[2024-11-02 11:40:29.019513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.805 [2024-11-02 11:40:29.019526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.805 [2024-11-02 11:40:29.019541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.805 [2024-11-02 11:40:29.019555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.805 [2024-11-02 11:40:29.019570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.806 [2024-11-02 11:40:29.019585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.806 [2024-11-02 11:40:29.019600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.806 [2024-11-02 11:40:29.019613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.806 [2024-11-02 11:40:29.019629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.806 [2024-11-02 11:40:29.019642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.806 [2024-11-02 11:40:29.019657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.806 [2024-11-02 11:40:29.019671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.806 [2024-11-02 11:40:29.019686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.806 [2024-11-02
11:40:29.019699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.806 [2024-11-02 11:40:29.019714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.806 [2024-11-02 11:40:29.019727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.806 [2024-11-02 11:40:29.019746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.806 [2024-11-02 11:40:29.019760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.806 [2024-11-02 11:40:29.019775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.806 [2024-11-02 11:40:29.019789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.806 [2024-11-02 11:40:29.019803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.806 [2024-11-02 11:40:29.019817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.806 [2024-11-02 11:40:29.019831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.806 [2024-11-02 11:40:29.019845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.806 [2024-11-02 11:40:29.019860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.806 [2024-11-02 11:40:29.019873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.806 [2024-11-02 11:40:29.019888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.806 [2024-11-02 11:40:29.019901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.806 [2024-11-02 11:40:29.019916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.806 [2024-11-02 11:40:29.019929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.806 [2024-11-02 11:40:29.019944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.806 [2024-11-02 11:40:29.019958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.806 [2024-11-02 11:40:29.019973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.806 [2024-11-02 
11:40:29.019986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.806 [2024-11-02 11:40:29.020001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.806 [2024-11-02 11:40:29.020013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.806 [2024-11-02 11:40:29.020028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.806 [2024-11-02 11:40:29.020041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.806 [2024-11-02 11:40:29.020056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.806 [2024-11-02 11:40:29.020069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.806 [2024-11-02 11:40:29.020084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.806 [2024-11-02 11:40:29.020100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.806 [2024-11-02 11:40:29.020601] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c1260 is same with the state(6) to be set 00:28:28.806 [2024-11-02 11:40:29.020626] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c1260 is same with the state(6) to be set 00:28:28.806 [2024-11-02 11:40:29.020639] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c1260 is same with the state(6) to be set 00:28:28.806 [2024-11-02 11:40:29.020651] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c1260 is same with the state(6) to be set 00:28:28.806 [2024-11-02 11:40:29.020662] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c1260 is same with the state(6) to be set 00:28:28.806 [2024-11-02 11:40:29.020675] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c1260 is same with the state(6) to be set 00:28:28.806 [2024-11-02 11:40:29.020687] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c1260 is same with the state(6) to be set 00:28:28.806 [2024-11-02 11:40:29.020699] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c1260 is same with the state(6) to be set 00:28:28.806 [2024-11-02 11:40:29.020710] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c1260 is same with the state(6) to be set 00:28:28.806 [2024-11-02 11:40:29.020722] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c1260 is same with the state(6) to be set 00:28:28.806 [2024-11-02 11:40:29.020734] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c1260 is same with the state(6) to be set 00:28:28.806 [2024-11-02 11:40:29.020746] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c1260 is same with the state(6) to be set 00:28:28.806 [2024-11-02 
11:40:29.020757] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c1260 is same with the state(6) to be set 00:28:28.806 [... same tcp.c:1773 *ERROR* repeated for tqpair=0x11c1260 through 2024-11-02 11:40:29.021380 ...] 00:28:28.807 [2024-11-02 11:40:29.022116] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c1730 is same with the state(6) to be set 00:28:28.807 [... same tcp.c:1773 *ERROR* repeated for tqpair=0x11c1730 through 2024-11-02 11:40:29.022861 ...] 00:28:28.807 [2024-11-02
11:40:29.022873] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c1730 is same with the state(6) to be set 00:28:28.807 [2024-11-02 11:40:29.022885] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c1730 is same with the state(6) to be set 00:28:28.807 [2024-11-02 11:40:29.022904] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c1730 is same with the state(6) to be set 00:28:28.807 [2024-11-02 11:40:29.022927] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c1730 is same with the state(6) to be set 00:28:28.807 [2024-11-02 11:40:29.022940] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c1730 is same with the state(6) to be set 00:28:28.807 [2024-11-02 11:40:29.022957] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c1730 is same with the state(6) to be set 00:28:28.807 [2024-11-02 11:40:29.023320] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:28:28.807 [2024-11-02 11:40:29.023352] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:28:28.807 [2024-11-02 11:40:29.023404] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e9f9b0 (9): Bad file descriptor 00:28:28.807 [2024-11-02 11:40:29.023430] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ea2620 (9): Bad file descriptor 00:28:28.807 [2024-11-02 11:40:29.023600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.808 [2024-11-02 11:40:29.023628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f3b50 with addr=10.0.0.2, port=4420 00:28:28.808 [2024-11-02 11:40:29.023643] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f3b50 is same with the state(6) to be set 00:28:28.808 [2024-11-02 11:40:29.023685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.808 [2024-11-02 11:40:29.023705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.808 [2024-11-02 11:40:29.023726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.808 [2024-11-02 11:40:29.023739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.808 [2024-11-02 11:40:29.023753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.808 [2024-11-02 11:40:29.023766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.808 [2024-11-02 11:40:29.023779] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.808 [2024-11-02 11:40:29.023792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.808 [2024-11-02 11:40:29.023804] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2312e10 is same 
with the state(6) to be set 00:28:28.808 [2024-11-02 11:40:29.023833] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x231c230 (9): Bad file descriptor 00:28:28.808 [2024-11-02 11:40:29.023883] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.808 [2024-11-02 11:40:29.023903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.808 [2024-11-02 11:40:29.023917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.808 [2024-11-02 11:40:29.023931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.808 [2024-11-02 11:40:29.023944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.808 [2024-11-02 11:40:29.023957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.808 [2024-11-02 11:40:29.023970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.808 [2024-11-02 11:40:29.023982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.808 [2024-11-02 11:40:29.023995] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dad610 is same with the state(6) to be set 00:28:28.808 [2024-11-02 11:40:29.024039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.808 [2024-11-02 11:40:29.024059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.808 [2024-11-02 11:40:29.024073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.808 [2024-11-02 11:40:29.024086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.808 [2024-11-02 11:40:29.024099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.808 [2024-11-02 11:40:29.024112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.808 [2024-11-02 11:40:29.024125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.808 [2024-11-02 11:40:29.024137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.808 [2024-11-02 11:40:29.024149] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2312870 is same with the state(6) to be set 00:28:28.808 [2024-11-02 11:40:29.024201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.808 [2024-11-02 11:40:29.024221] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.808 [2024-11-02 11:40:29.024236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.808 [2024-11-02 11:40:29.024251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.808 [2024-11-02 11:40:29.024276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.808 [2024-11-02 11:40:29.024290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.808 [2024-11-02 11:40:29.024303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.808 [2024-11-02 11:40:29.024316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.808 [2024-11-02 11:40:29.024328] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c4420 is same with the state(6) to be set 00:28:28.808 [2024-11-02 11:40:29.024372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.808 [2024-11-02 11:40:29.024391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.808 [2024-11-02 11:40:29.024405] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.808 [2024-11-02 11:40:29.024418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.808 [2024-11-02 11:40:29.024432] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.808 [2024-11-02 11:40:29.024445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.808 [2024-11-02 11:40:29.024458] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.808 [2024-11-02 11:40:29.024470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.808 [2024-11-02 11:40:29.024482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cdcf0 is same with the state(6) to be set 00:28:28.808 [2024-11-02 11:40:29.024510] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ea21c0 (9): Bad file descriptor 00:28:28.808 [2024-11-02 11:40:29.025040] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f3b50 (9): Bad file descriptor 00:28:28.808 [2024-11-02 11:40:29.025395] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:28.808 [2024-11-02 11:40:29.025465] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:28.808 [2024-11-02 11:40:29.026166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:28:28.808 [2024-11-02 11:40:29.026194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea2620 with addr=10.0.0.2, port=4420 00:28:28.808 [2024-11-02 11:40:29.026210] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea2620 is same with the state(6) to be set 00:28:28.808 [2024-11-02 11:40:29.026338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.808 [2024-11-02 11:40:29.026364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9f9b0 with addr=10.0.0.2, port=4420 00:28:28.808 [2024-11-02 11:40:29.026384] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e9f9b0 is same with the state(6) to be set 00:28:28.808 [2024-11-02 11:40:29.026400] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:28:28.808 [2024-11-02 11:40:29.026413] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:28:28.808 [2024-11-02 11:40:29.026428] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:28:28.808 [2024-11-02 11:40:29.026526] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:28.808 [2024-11-02 11:40:29.026605] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:28.808 [2024-11-02 11:40:29.026671] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:28.808 [2024-11-02 11:40:29.026810] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:28:28.808 [2024-11-02 11:40:29.026838] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ea2620 (9): Bad file descriptor 00:28:28.808 [2024-11-02 11:40:29.026857] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e9f9b0 (9): Bad file descriptor 00:28:28.808 [2024-11-02 11:40:29.026980] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:28.808 [2024-11-02 11:40:29.027067] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:28:28.808 [2024-11-02 11:40:29.027088] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:28:28.808 [2024-11-02 11:40:29.027102] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:28:28.808 [2024-11-02 11:40:29.027122] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:28:28.808 [2024-11-02 11:40:29.027136] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:28:28.808 [2024-11-02 11:40:29.027148] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:28:28.808 [2024-11-02 11:40:29.027269] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:28.808 [2024-11-02 11:40:29.027298] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:28:28.808 [2024-11-02 11:40:29.027316] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:28:28.808 [2024-11-02 11:40:29.030596] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:28:28.808 [2024-11-02 11:40:29.030892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.808 [2024-11-02 11:40:29.030920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f3b50 with addr=10.0.0.2, port=4420 00:28:28.808 [2024-11-02 11:40:29.030937] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f3b50 is same with the state(6) to be set 00:28:28.808 [2024-11-02 11:40:29.030996] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f3b50 (9): Bad file descriptor 00:28:28.808 [2024-11-02 11:40:29.031054] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:28:28.808 [2024-11-02 11:40:29.031070] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:28:28.808 [2024-11-02 11:40:29.031085] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:28:28.808 [2024-11-02 11:40:29.031144] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:28:28.808 [2024-11-02 11:40:29.033377] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2312e10 (9): Bad file descriptor 00:28:28.809 [2024-11-02 11:40:29.033438] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dad610 (9): Bad file descriptor 00:28:28.809 [2024-11-02 11:40:29.033471] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2312870 (9): Bad file descriptor 00:28:28.809 [2024-11-02 11:40:29.033504] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c4420 (9): Bad file descriptor 00:28:28.809 [2024-11-02 11:40:29.033535] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cdcf0 (9): Bad file descriptor 00:28:28.809 [2024-11-02 11:40:29.033694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.809 [2024-11-02 11:40:29.033719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.809 [2024-11-02 11:40:29.033748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.809 [2024-11-02 11:40:29.033763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.809 [2024-11-02 11:40:29.033779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.809 [2024-11-02 11:40:29.033803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.809 [2024-11-02 11:40:29.033818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.809 [2024-11-02 11:40:29.033833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.809 [... analogous READ (sqid:1 cid:4-62 nsid:1 lba:16896-24320 in steps of 128, len:128) and ABORTED - SQ DELETION completion notices repeated through 2024-11-02 11:40:29.035655 ...] 00:28:28.810 [2024-11-02 11:40:29.035671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.810 [2024-11-02 11:40:29.035685] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.810 [2024-11-02 11:40:29.035699] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e7500 is same with the state(6) to be set 00:28:28.810 [2024-11-02 11:40:29.036983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.810 [2024-11-02 11:40:29.037006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.810 [2024-11-02 11:40:29.037026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.810 [2024-11-02 11:40:29.037041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.810 [2024-11-02 11:40:29.037056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.810 [2024-11-02 11:40:29.037070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.810 [2024-11-02 11:40:29.037085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.810 [2024-11-02 11:40:29.037099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.810 [2024-11-02 11:40:29.037115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.810 [2024-11-02 11:40:29.037128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.810 [2024-11-02 11:40:29.037144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.810 [2024-11-02 11:40:29.037157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.810 [2024-11-02 11:40:29.037173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.810 [2024-11-02 11:40:29.037186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.810 [2024-11-02 11:40:29.037202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.810 [2024-11-02 11:40:29.037215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.810 [2024-11-02 11:40:29.037230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.810 [2024-11-02 11:40:29.037250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.811 [2024-11-02 11:40:29.037278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.811 [2024-11-02 11:40:29.037294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.811 [... analogous READ (sqid:1 cid:10-38 nsid:1 lba:17664-21248 in steps of 128, len:128) and ABORTED - SQ DELETION completion notices repeated through 2024-11-02 11:40:29.038146 ...] 00:28:28.811 [2024-11-02 11:40:29.038161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK
TRANSPORT 0x0 00:28:28.811 [2024-11-02 11:40:29.038175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.811 [2024-11-02 11:40:29.038189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.811 [2024-11-02 11:40:29.038203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.811 [2024-11-02 11:40:29.038220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.811 [2024-11-02 11:40:29.038234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.811 [2024-11-02 11:40:29.038249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.811 [2024-11-02 11:40:29.038277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.811 [2024-11-02 11:40:29.038294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.811 [2024-11-02 11:40:29.038308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.811 [2024-11-02 11:40:29.038323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.811 [2024-11-02 11:40:29.038337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.811 [2024-11-02 11:40:29.038353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.811 [2024-11-02 11:40:29.038366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.811 [2024-11-02 11:40:29.038382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.811 [2024-11-02 11:40:29.038399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.811 [2024-11-02 11:40:29.038415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.811 [2024-11-02 11:40:29.038429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.811 [2024-11-02 11:40:29.038444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.811 [2024-11-02 11:40:29.038458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.812 [2024-11-02 11:40:29.038473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:28.812 [2024-11-02 11:40:29.038486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.812 [2024-11-02 11:40:29.038502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.812 [2024-11-02 11:40:29.038516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.812 [2024-11-02 11:40:29.038531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.812 [2024-11-02 11:40:29.038545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.812 [2024-11-02 11:40:29.038560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.812 [2024-11-02 11:40:29.038573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.812 [2024-11-02 11:40:29.038589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.812 [2024-11-02 11:40:29.038602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.812 [2024-11-02 11:40:29.038617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.812 [2024-11-02 11:40:29.038630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.812 [2024-11-02 11:40:29.038646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.812 [2024-11-02 11:40:29.038659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.812 [2024-11-02 11:40:29.038675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.812 [2024-11-02 11:40:29.038689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.812 [2024-11-02 11:40:29.038704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.812 [2024-11-02 11:40:29.038717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.812 [2024-11-02 11:40:29.038733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.812 [2024-11-02 11:40:29.038746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.812 [2024-11-02 11:40:29.038765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.812 [2024-11-02 
11:40:29.038778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.812 [2024-11-02 11:40:29.038794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.812 [2024-11-02 11:40:29.038807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.812 [2024-11-02 11:40:29.038823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.812 [2024-11-02 11:40:29.038836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.812 [2024-11-02 11:40:29.038851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.812 [2024-11-02 11:40:29.038865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.812 [2024-11-02 11:40:29.038880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.812 [2024-11-02 11:40:29.038893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.812 [2024-11-02 11:40:29.038908] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22947b0 is same with the state(6) to be set 00:28:28.812 [2024-11-02 11:40:29.040160] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:28:28.812 [2024-11-02 11:40:29.040191] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:28:28.812 [2024-11-02 11:40:29.040599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.812 [2024-11-02 11:40:29.040630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea21c0 with addr=10.0.0.2, port=4420 00:28:28.812 [2024-11-02 11:40:29.040647] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea21c0 is same with the state(6) to be set 00:28:28.812 [2024-11-02 11:40:29.040761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.812 [2024-11-02 11:40:29.040786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x231c230 with addr=10.0.0.2, port=4420 00:28:28.812 [2024-11-02 11:40:29.040802] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231c230 is same with the state(6) to be set 00:28:28.812 [2024-11-02 11:40:29.041390] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:28:28.812 [2024-11-02 11:40:29.041415] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:28:28.812 [2024-11-02 11:40:29.041456] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ea21c0 (9): Bad file descriptor 00:28:28.812 [2024-11-02 11:40:29.041479] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x231c230 (9): Bad file descriptor 00:28:28.812 
[2024-11-02 11:40:29.041672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.812 [2024-11-02 11:40:29.041699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9f9b0 with addr=10.0.0.2, port=4420 00:28:28.812 [2024-11-02 11:40:29.041715] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e9f9b0 is same with the state(6) to be set 00:28:28.812 [2024-11-02 11:40:29.041834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.812 [2024-11-02 11:40:29.041860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea2620 with addr=10.0.0.2, port=4420 00:28:28.812 [2024-11-02 11:40:29.041881] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea2620 is same with the state(6) to be set 00:28:28.812 [2024-11-02 11:40:29.041897] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:28:28.812 [2024-11-02 11:40:29.041910] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:28:28.812 [2024-11-02 11:40:29.041925] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:28:28.812 [2024-11-02 11:40:29.041946] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:28:28.812 [2024-11-02 11:40:29.041960] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:28:28.812 [2024-11-02 11:40:29.041973] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:28:28.812 [2024-11-02 11:40:29.042042] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:28:28.812 [2024-11-02 11:40:29.042067] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:28:28.812 [2024-11-02 11:40:29.042084] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:28:28.812 [2024-11-02 11:40:29.042112] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e9f9b0 (9): Bad file descriptor 00:28:28.812 [2024-11-02 11:40:29.042134] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ea2620 (9): Bad file descriptor 00:28:28.812 [2024-11-02 11:40:29.042285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.812 [2024-11-02 11:40:29.042312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f3b50 with addr=10.0.0.2, port=4420 00:28:28.812 [2024-11-02 11:40:29.042328] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f3b50 is same with the state(6) to be set 00:28:28.812 [2024-11-02 11:40:29.042342] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:28:28.812 [2024-11-02 11:40:29.042355] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:28:28.812 [2024-11-02 11:40:29.042368] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 
00:28:28.812 [2024-11-02 11:40:29.042387] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:28:28.812 [2024-11-02 11:40:29.042401] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:28:28.812 [2024-11-02 11:40:29.042413] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:28:28.812 [2024-11-02 11:40:29.042463] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:28:28.812 [2024-11-02 11:40:29.042483] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:28:28.812 [2024-11-02 11:40:29.042499] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f3b50 (9): Bad file descriptor 00:28:28.812 [2024-11-02 11:40:29.042547] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:28:28.812 [2024-11-02 11:40:29.042564] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:28:28.812 [2024-11-02 11:40:29.042578] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:28:28.812 [2024-11-02 11:40:29.042628] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:28:28.812 [2024-11-02 11:40:29.043524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.812 [2024-11-02 11:40:29.043553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.812 [2024-11-02 11:40:29.043579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.812 [2024-11-02 11:40:29.043594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.812 [2024-11-02 11:40:29.043610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.812 [2024-11-02 11:40:29.043625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.812 [2024-11-02 11:40:29.043640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.812 [2024-11-02 11:40:29.043654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.812 [2024-11-02 11:40:29.043669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.813 [2024-11-02 11:40:29.043683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.813 [2024-11-02 11:40:29.043698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.813 [2024-11-02 11:40:29.043712] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.813 [2024-11-02 11:40:29.043727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.813 [2024-11-02 11:40:29.043741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.813 [2024-11-02 11:40:29.043756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.813 [2024-11-02 11:40:29.043769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.813 [2024-11-02 11:40:29.043785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.813 [2024-11-02 11:40:29.043798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.813 [2024-11-02 11:40:29.043814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.813 [2024-11-02 11:40:29.043827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.813 [2024-11-02 11:40:29.043843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.813 [2024-11-02 11:40:29.043856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.813 [2024-11-02 11:40:29.043871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.813 [2024-11-02 11:40:29.043885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.813 [2024-11-02 11:40:29.043900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.813 [2024-11-02 11:40:29.043913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.813 [2024-11-02 11:40:29.043933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.813 [2024-11-02 11:40:29.043948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.813 [2024-11-02 11:40:29.043963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.813 [2024-11-02 11:40:29.043977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.813 [2024-11-02 11:40:29.043992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.813 [2024-11-02 11:40:29.044005] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.813 [2024-11-02 11:40:29.044020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.813 [2024-11-02 11:40:29.044033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.813 [2024-11-02 11:40:29.044049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.813 [2024-11-02 11:40:29.044062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.813 [2024-11-02 11:40:29.044077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.813 [2024-11-02 11:40:29.044090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.813 [2024-11-02 11:40:29.044105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.813 [2024-11-02 11:40:29.044118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.813 [2024-11-02 11:40:29.044134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.813 [2024-11-02 11:40:29.044147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.813 [2024-11-02 11:40:29.044162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.813 [2024-11-02 11:40:29.044175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.813 [2024-11-02 11:40:29.044191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.813 [2024-11-02 11:40:29.044204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.813 [2024-11-02 11:40:29.044220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.813 [2024-11-02 11:40:29.044233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.813 [2024-11-02 11:40:29.044249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.813 [2024-11-02 11:40:29.044270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.813 [2024-11-02 11:40:29.044287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.813 [2024-11-02 11:40:29.044304] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.813 [2024-11-02 11:40:29.044321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.813 [2024-11-02 11:40:29.044334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.813 [2024-11-02 11:40:29.044350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.813 [2024-11-02 11:40:29.044363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.813 [2024-11-02 11:40:29.044379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.813 [2024-11-02 11:40:29.044392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.813 [2024-11-02 11:40:29.044408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.813 [2024-11-02 11:40:29.044421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.813 [2024-11-02 11:40:29.044436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.813 [2024-11-02 11:40:29.044450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.813 [2024-11-02 11:40:29.044465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.813 [2024-11-02 11:40:29.044479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.813 [2024-11-02 11:40:29.044494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.813 [2024-11-02 11:40:29.044507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.813 [2024-11-02 11:40:29.044523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.813 [2024-11-02 11:40:29.044537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.813 [2024-11-02 11:40:29.044553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.813 [2024-11-02 11:40:29.044566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.813 [2024-11-02 11:40:29.044582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.813 [2024-11-02 11:40:29.044596] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.813 [2024-11-02 11:40:29.044611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.813 [2024-11-02 11:40:29.044625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.813 [2024-11-02 11:40:29.044640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.813 [2024-11-02 11:40:29.044654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.813 [2024-11-02 11:40:29.044673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.813 [2024-11-02 11:40:29.044687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.813 [2024-11-02 11:40:29.044702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.813 [2024-11-02 11:40:29.044716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.813 [2024-11-02 11:40:29.044732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.813 [2024-11-02 11:40:29.044745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.813 [2024-11-02 11:40:29.044760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.813 [2024-11-02 11:40:29.044774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.813 [2024-11-02 11:40:29.044789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.813 [2024-11-02 11:40:29.044803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.813 [2024-11-02 11:40:29.044819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.813 [2024-11-02 11:40:29.044832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.813 [2024-11-02 11:40:29.044847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.814 [2024-11-02 11:40:29.044862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.814 [2024-11-02 11:40:29.044877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.814 [2024-11-02 11:40:29.044891] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.814 [2024-11-02 11:40:29.044906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.814 [2024-11-02 11:40:29.044920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.814 [2024-11-02 11:40:29.044935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.814 [2024-11-02 11:40:29.044948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.814 [2024-11-02 11:40:29.044964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.814 [2024-11-02 11:40:29.044977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.814 [2024-11-02 11:40:29.044992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.814 [2024-11-02 11:40:29.045006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.814 [2024-11-02 11:40:29.045021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.814 [2024-11-02 11:40:29.045039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.814 [2024-11-02 11:40:29.045054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.814 [2024-11-02 11:40:29.045068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.814 [2024-11-02 11:40:29.045084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.814 [2024-11-02 11:40:29.045097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.814 [2024-11-02 11:40:29.045112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.814 [2024-11-02 11:40:29.045126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.814 [2024-11-02 11:40:29.045141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.814 [2024-11-02 11:40:29.045154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.814 [2024-11-02 11:40:29.045170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.814 [2024-11-02 11:40:29.045183] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.814 [2024-11-02 11:40:29.045200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.814 [2024-11-02 11:40:29.045213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.814 [2024-11-02 11:40:29.045229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.814 [2024-11-02 11:40:29.045242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.814 [2024-11-02 11:40:29.045263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.814 [2024-11-02 11:40:29.045279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.814 [2024-11-02 11:40:29.045294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.814 [2024-11-02 11:40:29.045308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.814 [2024-11-02 11:40:29.045323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.814 [2024-11-02 11:40:29.045337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.814 [2024-11-02 11:40:29.045353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.814 [2024-11-02 11:40:29.045366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.814 [2024-11-02 11:40:29.045382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.814 [2024-11-02 11:40:29.045395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.814 [2024-11-02 11:40:29.045419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.814 [2024-11-02 11:40:29.045433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.814 [2024-11-02 11:40:29.045448] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a1ad0 is same with the state(6) to be set 00:28:28.814 [2024-11-02 11:40:29.046719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.814 [2024-11-02 11:40:29.046742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.814 [2024-11-02 11:40:29.046761] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.814 [2024-11-02 11:40:29.046776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.814 [2024-11-02 11:40:29.046792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.814 [2024-11-02 11:40:29.046806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.814 [2024-11-02 11:40:29.046822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.814 [2024-11-02 11:40:29.046835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.814 [2024-11-02 11:40:29.046851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.814 [2024-11-02 11:40:29.046865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.814 [2024-11-02 11:40:29.046880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.814 [2024-11-02 11:40:29.046893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.814 [2024-11-02 11:40:29.046909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.814 [2024-11-02 11:40:29.046922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.814 [2024-11-02 11:40:29.046938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.814 [2024-11-02 11:40:29.046951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.814 [2024-11-02 11:40:29.046967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.814 [2024-11-02 11:40:29.046980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.814 [2024-11-02 11:40:29.046996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.814 [2024-11-02 11:40:29.047010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.814 [2024-11-02 11:40:29.047026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.814 [2024-11-02 11:40:29.047040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.814 [2024-11-02 11:40:29.047060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.814 [2024-11-02 11:40:29.047075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.814 [2024-11-02 11:40:29.047091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.814 [2024-11-02 11:40:29.047104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.814 [2024-11-02 11:40:29.047119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.814 [2024-11-02 11:40:29.047132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.814 [2024-11-02 11:40:29.047147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.814 [2024-11-02 11:40:29.047160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.814 [2024-11-02 11:40:29.047176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.815 [2024-11-02 11:40:29.047189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.815 [2024-11-02 11:40:29.047204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.815 [2024-11-02 11:40:29.047218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.815 [2024-11-02 11:40:29.047233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.815 [2024-11-02 11:40:29.047247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.815 [2024-11-02 11:40:29.047271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.815 [2024-11-02 11:40:29.047286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.815 [2024-11-02 11:40:29.047301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.815 [2024-11-02 11:40:29.047314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.815 [2024-11-02 11:40:29.047330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.815 [2024-11-02 11:40:29.047344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.815 [2024-11-02 11:40:29.047360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.815 [2024-11-02 11:40:29.047373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:28.815 [2024-11-02 11:40:29.047388 - 11:40:29.048615] nvme_qpair.c: *NOTICE* (42 repeated command/completion pairs, condensed): READ sqid:1 cid:22-63 nsid:1 lba:19200-24448 (lba step 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:28.816 [2024-11-02 11:40:29.048629] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a4520 is same with the state(6) to be set
00:28:28.816 [2024-11-02 11:40:29.049876 - 11:40:29.051756] nvme_qpair.c: *NOTICE* (64 repeated command/completion pairs, condensed): READ sqid:1 cid:0-63 nsid:1 lba:16384-24448 (lba step 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:28.817 [2024-11-02 11:40:29.051770] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a5aa0 is same with the state(6) to be set
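Every completion in the condensed runs above carries the same status: status code type 00h (generic) with status code 08h, which the driver prints as ABORTED - SQ DELETION. These are reads that were still outstanding on qid:1 when the submission queue was deleted during TCP qpair teardown, so the aborts are an expected side effect of the teardown rather than media or transport data errors. As a minimal sketch only (not code from this test; read_ctx and on_read_complete are hypothetical names), an SPDK application's completion callback could separate this expected teardown status from real failures using the public spdk/nvme.h status fields:

/*
 * Hedged sketch: classify the "ABORTED - SQ DELETION (00/08)" completions seen
 * in the log above inside an SPDK NVMe I/O completion callback. Only the types
 * and constants from spdk/nvme.h are SPDK's; the surrounding names are made up.
 */
#include <inttypes.h>
#include <stdbool.h>
#include <stdio.h>
#include "spdk/nvme.h"

struct read_ctx {            /* hypothetical per-I/O bookkeeping */
    uint64_t lba;
    uint32_t lba_count;
    bool     resubmit;       /* retry on a fresh qpair after teardown */
};

/* Matches the spdk_nvme_cmd_cb signature used by spdk_nvme_ns_cmd_read(). */
static void
on_read_complete(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
    struct read_ctx *ctx = cb_arg;

    if (!spdk_nvme_cpl_is_error(cpl)) {
        return;                               /* normal completion */
    }

    /* Generic status type 00h, status code 08h is exactly the
     * "ABORTED - SQ DELETION (00/08)" printed above: the read was
     * failed back because its submission queue was deleted. */
    if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
        cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
        ctx->resubmit = true;
        return;
    }

    fprintf(stderr, "read lba=%" PRIu64 " len=%u failed: sct=0x%x sc=0x%x\n",
            ctx->lba, ctx->lba_count,
            (unsigned)cpl->status.sct, (unsigned)cpl->status.sc);
}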
00:28:28.817 [2024-11-02 11:40:29.053007 - 11:40:29.054966] nvme_qpair.c: *NOTICE* (64 repeated command/completion pairs, condensed): READ sqid:1 cid:0-63 nsid:1 lba:16384-24448 (lba step 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:28.819 [2024-11-02 11:40:29.054981] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a7020 is same with the state(6) to be set
00:28:28.819 [2024-11-02 11:40:29.056212 - 11:40:29.057340] nvme_qpair.c: *NOTICE* (38 repeated command/completion pairs, condensed): READ sqid:1 cid:0-37 nsid:1 lba:8192-12928 (lba step 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:28.820 [2024-11-02 11:40:29.057355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.820 [2024-11-02 11:40:29.057368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.820 [2024-11-02 11:40:29.057384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.820 [2024-11-02 11:40:29.057397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.820 [2024-11-02 11:40:29.057412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.820 [2024-11-02 11:40:29.057425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.820 [2024-11-02 11:40:29.057441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.820 [2024-11-02 11:40:29.057454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.820 [2024-11-02 11:40:29.057469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.820 [2024-11-02 11:40:29.057483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.820 [2024-11-02 11:40:29.057498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.820 [2024-11-02 11:40:29.057515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.820 [2024-11-02 11:40:29.057531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.820 [2024-11-02 11:40:29.057545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.820 [2024-11-02 11:40:29.057560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.820 [2024-11-02 11:40:29.057574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.820 [2024-11-02 11:40:29.057589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.820 [2024-11-02 11:40:29.057602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.820 [2024-11-02 11:40:29.057617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.820 [2024-11-02 11:40:29.057630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.820 [2024-11-02 11:40:29.057646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.820 [2024-11-02 11:40:29.057659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:28.820 [2024-11-02 11:40:29.057674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.820 [2024-11-02 11:40:29.057687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.820 [2024-11-02 11:40:29.057702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.820 [2024-11-02 11:40:29.057715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.820 [2024-11-02 11:40:29.057731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.820 [2024-11-02 11:40:29.057744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.820 [2024-11-02 11:40:29.057760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.820 [2024-11-02 11:40:29.057773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.820 [2024-11-02 11:40:29.057788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.820 [2024-11-02 11:40:29.057801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.820 [2024-11-02 11:40:29.057817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.820 [2024-11-02 11:40:29.057830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.820 [2024-11-02 11:40:29.057845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.820 [2024-11-02 11:40:29.057858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.820 [2024-11-02 11:40:29.057877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.821 [2024-11-02 11:40:29.057891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.821 [2024-11-02 11:40:29.057905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.821 [2024-11-02 11:40:29.057919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.821 [2024-11-02 11:40:29.057934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.821 [2024-11-02 11:40:29.057948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:28.821 [2024-11-02 11:40:29.057963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.821 [2024-11-02 11:40:29.057976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.821 [2024-11-02 11:40:29.057991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.821 [2024-11-02 11:40:29.058004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.821 [2024-11-02 11:40:29.058020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.821 [2024-11-02 11:40:29.058033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.821 [2024-11-02 11:40:29.058048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.821 [2024-11-02 11:40:29.058062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.821 [2024-11-02 11:40:29.058077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.821 [2024-11-02 11:40:29.058091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.821 [2024-11-02 11:40:29.058105] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a8460 is same with the state(6) to be set 00:28:28.821 [2024-11-02 11:40:29.059746] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:28:28.821 [2024-11-02 11:40:29.059778] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:28:28.821 [2024-11-02 11:40:29.059798] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:28:28.821 [2024-11-02 11:40:29.059815] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:28:28.821 [2024-11-02 11:40:29.059953] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 
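The flood of ABORTED - SQ DELETION completions above is the expected signature of this shutdown test: the target deletes the I/O submission queues while the verify workload still has READs queued, so every outstanding command completes with an abort status, and the per-job summary that follows records how each Nvme bdev job ended. As a rough illustration, a comparable verify workload could be driven with SPDK's bdevperf; the paths and the bdevperf.json file name below are assumptions for the sketch, not values taken from this run.

  # Hypothetical layout: SPDK_DIR is an SPDK build tree, bdevperf.json attaches
  # the Nvme1..Nvme10 NVMe-oF controllers (both names are illustrative).
  SPDK_DIR=/path/to/spdk
  # -m 0x1: core mask, -q 64: queue depth, -o 65536: 64 KiB I/Os,
  # -w verify: verification workload, -t 10: run time in seconds.
  "$SPDK_DIR"/build/examples/bdevperf --json bdevperf.json -m 0x1 -q 64 -o 65536 -w verify -t 10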
00:28:28.821 task offset: 17536 on job bdev=Nvme10n1 fails
00:28:28.821 
00:28:28.821 Latency(us)
00:28:28.821 [2024-11-02T10:40:29.223Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:28.821 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:28.821 Job: Nvme1n1 ended in about 0.77 seconds with error
00:28:28.821 Verification LBA range: start 0x0 length 0x400
00:28:28.821 Nvme1n1 : 0.77 166.09 10.38 83.05 0.00 253486.14 7573.05 253211.69
00:28:28.821 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:28.821 Job: Nvme2n1 ended in about 0.79 seconds with error
00:28:28.821 Verification LBA range: start 0x0 length 0x400
00:28:28.821 Nvme2n1 : 0.79 162.87 10.18 81.44 0.00 252514.10 18350.08 239230.67
00:28:28.821 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:28.821 Job: Nvme3n1 ended in about 0.79 seconds with error
00:28:28.821 Verification LBA range: start 0x0 length 0x400
00:28:28.821 Nvme3n1 : 0.79 162.22 10.14 81.11 0.00 247430.76 17864.63 254765.13
00:28:28.821 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:28.821 Job: Nvme4n1 ended in about 0.80 seconds with error
00:28:28.821 Verification LBA range: start 0x0 length 0x400
00:28:28.821 Nvme4n1 : 0.80 160.88 10.06 80.44 0.00 243533.50 18738.44 253211.69
00:28:28.821 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:28.821 Job: Nvme5n1 ended in about 0.77 seconds with error
00:28:28.821 Verification LBA range: start 0x0 length 0x400
00:28:28.821 Nvme5n1 : 0.77 165.83 10.36 82.91 0.00 229585.60 7524.50 281173.71
00:28:28.821 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:28.821 Job: Nvme6n1 ended in about 0.80 seconds with error
00:28:28.821 Verification LBA range: start 0x0 length 0x400
00:28:28.821 Nvme6n1 : 0.80 160.25 10.02 80.12 0.00 232418.16 34758.35 240784.12
00:28:28.821 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:28.821 Job: Nvme7n1 ended in about 0.80 seconds with error
00:28:28.821 Verification LBA range: start 0x0 length 0x400
00:28:28.821 Nvme7n1 : 0.80 159.62 9.98 79.81 0.00 227491.59 20486.07 251658.24
00:28:28.821 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:28.821 Job: Nvme8n1 ended in about 0.81 seconds with error
00:28:28.821 Verification LBA range: start 0x0 length 0x400
00:28:28.821 Nvme8n1 : 0.81 158.98 9.94 79.49 0.00 222632.26 18350.08 256318.58
00:28:28.821 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:28.821 Job: Nvme9n1 ended in about 0.81 seconds with error
00:28:28.821 Verification LBA range: start 0x0 length 0x400
00:28:28.821 Nvme9n1 : 0.81 79.18 4.95 79.18 0.00 327065.60 44273.21 296708.17
00:28:28.821 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:28.821 Job: Nvme10n1 ended in about 0.76 seconds with error
00:28:28.821 Verification LBA range: start 0x0 length 0x400
00:28:28.821 Nvme10n1 : 0.76 167.35 10.46 83.68 0.00 197811.83 5097.24 260978.92
00:28:28.821 [2024-11-02T10:40:29.223Z] ===================================================================================================================
00:28:28.821 [2024-11-02T10:40:29.223Z] Total : 1543.28 96.46 811.23 0.00 240511.83 5097.24 296708.17
00:28:28.821 [2024-11-02 11:40:29.086026] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:28:28.821 [2024-11-02 11:40:29.086115]
nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:28:28.821 [2024-11-02 11:40:29.086460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.821 [2024-11-02 11:40:29.086496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cdcf0 with addr=10.0.0.2, port=4420 00:28:28.821 [2024-11-02 11:40:29.086516] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cdcf0 is same with the state(6) to be set 00:28:28.821 [2024-11-02 11:40:29.086654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.821 [2024-11-02 11:40:29.086681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c4420 with addr=10.0.0.2, port=4420 00:28:28.821 [2024-11-02 11:40:29.086697] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c4420 is same with the state(6) to be set 00:28:28.821 [2024-11-02 11:40:29.086835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.821 [2024-11-02 11:40:29.086860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dad610 with addr=10.0.0.2, port=4420 00:28:28.821 [2024-11-02 11:40:29.086890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dad610 is same with the state(6) to be set 00:28:28.821 [2024-11-02 11:40:29.087014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.821 [2024-11-02 11:40:29.087040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2312e10 with addr=10.0.0.2, port=4420 00:28:28.821 [2024-11-02 11:40:29.087056] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2312e10 is same with the state(6) to be set 00:28:28.821 [2024-11-02 11:40:29.088475] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:28:28.821 [2024-11-02 11:40:29.088504] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:28:28.821 [2024-11-02 11:40:29.088523] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:28:28.821 [2024-11-02 11:40:29.088539] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:28:28.821 [2024-11-02 11:40:29.088562] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:28:28.821 [2024-11-02 11:40:29.088750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.821 [2024-11-02 11:40:29.088779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2312870 with addr=10.0.0.2, port=4420 00:28:28.821 [2024-11-02 11:40:29.088795] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2312870 is same with the state(6) to be set 00:28:28.821 [2024-11-02 11:40:29.088818] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cdcf0 (9): Bad file descriptor 00:28:28.821 [2024-11-02 11:40:29.088840] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c4420 (9): Bad file descriptor 00:28:28.821 [2024-11-02 11:40:29.088857] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dad610 (9): Bad 
file descriptor 00:28:28.821 [2024-11-02 11:40:29.088875] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2312e10 (9): Bad file descriptor 00:28:28.821 [2024-11-02 11:40:29.088934] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 00:28:28.821 [2024-11-02 11:40:29.088957] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:28:28.821 [2024-11-02 11:40:29.088976] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:28:28.821 [2024-11-02 11:40:29.088996] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:28:28.821 [2024-11-02 11:40:29.089459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.821 [2024-11-02 11:40:29.089490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x231c230 with addr=10.0.0.2, port=4420 00:28:28.821 [2024-11-02 11:40:29.089507] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231c230 is same with the state(6) to be set 00:28:28.821 [2024-11-02 11:40:29.089623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.821 [2024-11-02 11:40:29.089649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea21c0 with addr=10.0.0.2, port=4420 00:28:28.821 [2024-11-02 11:40:29.089665] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea21c0 is same with the state(6) to be set 00:28:28.821 [2024-11-02 11:40:29.089786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.821 [2024-11-02 11:40:29.089812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea2620 with addr=10.0.0.2, port=4420 00:28:28.822 [2024-11-02 11:40:29.089828] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea2620 is same with the state(6) to be set 00:28:28.822 [2024-11-02 11:40:29.089966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.822 [2024-11-02 11:40:29.089992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e9f9b0 with addr=10.0.0.2, port=4420 00:28:28.822 [2024-11-02 11:40:29.090009] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e9f9b0 is same with the state(6) to be set 00:28:28.822 [2024-11-02 11:40:29.090125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.822 [2024-11-02 11:40:29.090151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f3b50 with addr=10.0.0.2, port=4420 00:28:28.822 [2024-11-02 11:40:29.090166] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f3b50 is same with the state(6) to be set 00:28:28.822 [2024-11-02 11:40:29.090184] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2312870 (9): Bad file descriptor 00:28:28.822 [2024-11-02 11:40:29.090202] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:28:28.822 [2024-11-02 11:40:29.090215] 
nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:28:28.822 [2024-11-02 11:40:29.090232] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:28:28.822 [2024-11-02 11:40:29.090271] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:28:28.822 [2024-11-02 11:40:29.090287] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:28:28.822 [2024-11-02 11:40:29.090300] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:28:28.822 [2024-11-02 11:40:29.090317] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:28:28.822 [2024-11-02 11:40:29.090331] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:28:28.822 [2024-11-02 11:40:29.090344] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:28:28.822 [2024-11-02 11:40:29.090360] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:28:28.822 [2024-11-02 11:40:29.090373] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:28:28.822 [2024-11-02 11:40:29.090386] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:28:28.822 [2024-11-02 11:40:29.090475] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:28:28.822 [2024-11-02 11:40:29.090497] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:28:28.822 [2024-11-02 11:40:29.090511] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:28:28.822 [2024-11-02 11:40:29.090523] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 
00:28:28.822 [2024-11-02 11:40:29.090540] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x231c230 (9): Bad file descriptor 00:28:28.822 [2024-11-02 11:40:29.090558] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ea21c0 (9): Bad file descriptor 00:28:28.822 [2024-11-02 11:40:29.090576] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ea2620 (9): Bad file descriptor 00:28:28.822 [2024-11-02 11:40:29.090593] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e9f9b0 (9): Bad file descriptor 00:28:28.822 [2024-11-02 11:40:29.090622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f3b50 (9): Bad file descriptor 00:28:28.822 [2024-11-02 11:40:29.090642] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:28:28.822 [2024-11-02 11:40:29.090655] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:28:28.822 [2024-11-02 11:40:29.090669] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:28:28.822 [2024-11-02 11:40:29.090706] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:28:28.822 [2024-11-02 11:40:29.090726] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:28:28.822 [2024-11-02 11:40:29.090738] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:28:28.822 [2024-11-02 11:40:29.090750] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:28:28.822 [2024-11-02 11:40:29.090766] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:28:28.822 [2024-11-02 11:40:29.090780] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:28:28.822 [2024-11-02 11:40:29.090792] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:28:28.822 [2024-11-02 11:40:29.090807] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:28:28.822 [2024-11-02 11:40:29.090821] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:28:28.822 [2024-11-02 11:40:29.090834] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:28:28.822 [2024-11-02 11:40:29.090850] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:28:28.822 [2024-11-02 11:40:29.090864] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:28:28.822 [2024-11-02 11:40:29.090876] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 
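Each connect() failure above carries errno 111, i.e. ECONNREFUSED: the listeners were torn down as part of the shutdown, so every reconnect attempt is refused, spdk_nvme_ctrlr_reconnect_poll_async reports failure, and the controllers are left in a failed state. Outside of a shutdown test, the retry window is normally bounded when the controllers are attached; the sketch below uses rpc.py with timeout option names assumed from recent SPDK releases and illustrative address/NQN values.

  # Assumed: nvmf_tgt is serving RPCs on the default /var/tmp/spdk.sock socket.
  # The three timeout options bound how long the bdev_nvme layer keeps retrying
  # a refused connection before giving the controller up.
  scripts/rpc.py bdev_nvme_attach_controller -b Nvme1 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 \
      --reconnect-delay-sec 1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 3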
00:28:28.822 [2024-11-02 11:40:29.090892] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:28:28.822 [2024-11-02 11:40:29.090905] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:28:28.822 [2024-11-02 11:40:29.090918] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:28:28.822 [2024-11-02 11:40:29.090956] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:28:28.822 [2024-11-02 11:40:29.090975] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:28:28.822 [2024-11-02 11:40:29.090988] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:28:28.822 [2024-11-02 11:40:29.091001] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:28:28.822 [2024-11-02 11:40:29.091016] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:28:29.389 11:40:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:28:30.329 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 3907390 00:28:30.329 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@650 -- # local es=0 00:28:30.329 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3907390 00:28:30.329 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@638 -- # local arg=wait 00:28:30.329 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:30.329 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # type -t wait 00:28:30.329 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:30.329 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # wait 3907390 00:28:30.329 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # es=255 00:28:30.329 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:30.329 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@662 -- # es=127 00:28:30.329 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # case "$es" in 00:28:30.329 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@670 -- # es=1 00:28:30.329 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:30.329 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:28:30.329 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:30.329 
11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:30.329 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:30.329 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:30.329 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:30.329 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:28:30.329 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:30.329 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:28:30.329 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:30.329 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:30.329 rmmod nvme_tcp 00:28:30.329 rmmod nvme_fabrics 00:28:30.329 rmmod nvme_keyring 00:28:30.329 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:30.329 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:28:30.329 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:28:30.329 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 3907213 ']' 00:28:30.329 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 3907213 00:28:30.329 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' -z 3907213 ']' 00:28:30.329 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # kill -0 3907213 00:28:30.329 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3907213) - No such process 00:28:30.329 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@979 -- # echo 'Process with pid 3907213 is not found' 00:28:30.329 Process with pid 3907213 is not found 00:28:30.329 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:30.329 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:30.329 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:30.329 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:28:30.329 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:28:30.329 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:30.329 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:28:30.329 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:30.329 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:30.329 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:30.329 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:30.329 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:32.237 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:32.237 00:28:32.237 real 0m6.916s 00:28:32.237 user 0m15.661s 00:28:32.237 sys 0m1.420s 00:28:32.237 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:32.237 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:32.237 ************************************ 00:28:32.237 END TEST nvmf_shutdown_tc3 00:28:32.237 ************************************ 00:28:32.496 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:28:32.496 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:28:32.496 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:28:32.496 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:28:32.496 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:32.496 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:32.496 ************************************ 00:28:32.496 START TEST nvmf_shutdown_tc4 00:28:32.496 ************************************ 00:28:32.496 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc4 00:28:32.496 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:28:32.496 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:32.496 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:32.496 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:32.496 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:32.496 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:32.497 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:32.497 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 
00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:32.497 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:32.497 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:32.497 11:40:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:32.497 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:32.498 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:32.498 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:32.498 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:32.498 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:32.498 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:32.498 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:32.498 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:32.498 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:32.498 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:32.498 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:28:32.498 00:28:32.498 --- 10.0.0.2 ping statistics --- 00:28:32.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:32.498 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:28:32.498 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:32.498 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:32.498 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.057 ms 00:28:32.498 00:28:32.498 --- 10.0.0.1 ping statistics --- 00:28:32.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:32.498 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:28:32.498 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:32.498 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:28:32.498 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:32.498 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:32.498 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:32.498 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:32.498 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:32.498 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:32.498 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:32.498 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:32.498 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:32.498 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:32.498 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:32.498 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=3908280 00:28:32.498 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:32.498 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 3908280 00:28:32.498 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@833 -- # '[' -z 3908280 ']' 00:28:32.498 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:32.498 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:32.498 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 
-- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:32.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:32.498 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:32.498 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:32.757 [2024-11-02 11:40:32.906665] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:28:32.757 [2024-11-02 11:40:32.906765] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:32.757 [2024-11-02 11:40:32.988954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:32.757 [2024-11-02 11:40:33.038351] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:32.757 [2024-11-02 11:40:33.038419] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:32.757 [2024-11-02 11:40:33.038444] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:32.757 [2024-11-02 11:40:33.038458] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:32.757 [2024-11-02 11:40:33.038469] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:32.757 [2024-11-02 11:40:33.040178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:32.757 [2024-11-02 11:40:33.040294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:32.757 [2024-11-02 11:40:33.040357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:32.757 [2024-11-02 11:40:33.040359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:32.757 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:32.757 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@866 -- # return 0 00:28:32.757 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:32.757 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:32.757 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:33.015 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:33.015 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:33.015 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:33.015 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:33.015 [2024-11-02 11:40:33.174760] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:33.015 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:33.015 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:33.015 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:33.015 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:33.015 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:33.015 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:33.015 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:33.015 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:33.015 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:33.015 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:33.015 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:33.015 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:33.015 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:33.015 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:33.015 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:33.015 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:33.015 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:33.015 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:33.015 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:33.015 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:33.015 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:33.015 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:33.015 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:33.015 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:33.015 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:33.015 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:33.015 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:33.015 11:40:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:33.015 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:33.015 Malloc1 00:28:33.015 [2024-11-02 11:40:33.267226] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:33.015 Malloc2 00:28:33.015 Malloc3 00:28:33.015 Malloc4 00:28:33.275 Malloc5 00:28:33.275 Malloc6 00:28:33.275 Malloc7 00:28:33.275 Malloc8 00:28:33.275 Malloc9 00:28:33.534 Malloc10 00:28:33.534 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:33.534 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:33.534 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:33.534 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:33.534 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=3908353 00:28:33.534 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:28:33.534 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:28:33.534 [2024-11-02 11:40:33.794648] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
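The create_subsystems phase above only shows rpcs.txt being assembled with cat, not its contents. Judging from the Malloc1 through Malloc10 bdevs and the nqn.2016-06.io.spdk:cnodeN subsystems that appear later in this run, each of the ten iterations presumably expands to rpc.py calls along the lines of the sketch below; the malloc geometry (64 MiB, 512-byte blocks) and the serial number are illustrative assumptions, while the listener address matches the "Listening on 10.0.0.2 port 4420" notice above.

  # Hypothetical expansion of one rpcs.txt iteration (i = 1; repeat for i = 2..10).
  scripts/rpc.py bdev_malloc_create -b Malloc1 64 512
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420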
00:28:38.810 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:38.810 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 3908280 00:28:38.810 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@952 -- # '[' -z 3908280 ']' 00:28:38.810 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # kill -0 3908280 00:28:38.810 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@957 -- # uname 00:28:38.810 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:38.810 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3908280 00:28:38.810 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:38.810 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:28:38.810 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3908280' 00:28:38.810 killing process with pid 3908280 00:28:38.810 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@971 -- # kill 3908280 00:28:38.810 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@976 -- # wait 3908280 00:28:38.810 [2024-11-02 11:40:38.776468] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x191d140 is same with the state(6) to be set 00:28:38.810 [2024-11-02 11:40:38.776591] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x191d140 is same with the state(6) to be set 00:28:38.810 [2024-11-02 11:40:38.776629] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x191d140 is same with the state(6) to be set 00:28:38.810 [2024-11-02 11:40:38.776653] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x191d140 is same with the state(6) to be set 00:28:38.810 [2024-11-02 11:40:38.776680] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x191d140 is same with the state(6) to be set 00:28:38.810 [2024-11-02 11:40:38.776703] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x191d140 is same with the state(6) to be set 00:28:38.810 [2024-11-02 11:40:38.776725] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x191d140 is same with the state(6) to be set 00:28:38.810 [2024-11-02 11:40:38.776746] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x191d140 is same with the state(6) to be set 00:28:38.810 [2024-11-02 11:40:38.777694] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x191d610 is same with the state(6) to be set 00:28:38.810 [2024-11-02 11:40:38.777731] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x191d610 is same with the state(6) to be set 00:28:38.810 [2024-11-02 11:40:38.777754] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x191d610 is same with the state(6) to be set 00:28:38.810 [2024-11-02 11:40:38.777769] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x191d610 is same with the state(6) to be set 00:28:38.810 [2024-11-02 11:40:38.777782] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x191d610 is same with the state(6) to be set 00:28:38.810 [2024-11-02 11:40:38.777794] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x191d610 is same with the state(6) to be set 00:28:38.810 [2024-11-02 11:40:38.777806] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x191d610 is same with the state(6) to be set 00:28:38.810 [2024-11-02 11:40:38.777819] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x191d610 is same with the state(6) to be set 00:28:38.810 [2024-11-02 11:40:38.777831] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x191d610 is same with the state(6) to be set 00:28:38.810 [2024-11-02 11:40:38.777843] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x191d610 is same with the state(6) to be set 00:28:38.810 [2024-11-02 11:40:38.777856] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x191d610 is same with the state(6) to be set 00:28:38.810 [2024-11-02 11:40:38.777868] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x191d610 is same with the state(6) to be set 00:28:38.810 [2024-11-02 11:40:38.777881] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x191d610 is same with the state(6) to be set 00:28:38.810 [2024-11-02 11:40:38.777893] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x191d610 is same with the state(6) to be set 00:28:38.810 [2024-11-02 11:40:38.780355] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x191cc70 is same with the state(6) to be set 00:28:38.810 [2024-11-02 11:40:38.780415] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x191cc70 is same with the state(6) to be set 00:28:38.810 [2024-11-02 11:40:38.780432] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x191cc70 is same with the state(6) to be set 00:28:38.810 [2024-11-02 11:40:38.780446] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x191cc70 is same with the state(6) to be set 00:28:38.810 [2024-11-02 11:40:38.780458] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x191cc70 is same with the state(6) to be set 00:28:38.810 [2024-11-02 11:40:38.780471] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x191cc70 is same with the state(6) to be set 00:28:38.810 Write completed with error (sct=0, sc=8) 00:28:38.810 [2024-11-02 11:40:38.782354] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8b280 is same with the state(6) to be set 00:28:38.810 Write completed with error (sct=0, sc=8) 00:28:38.810 [2024-11-02 11:40:38.782390] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8b280 is same with the state(6) to be set 00:28:38.810 Write completed with error (sct=0, sc=8) 00:28:38.810 Write completed with error (sct=0, sc=8) 00:28:38.810 starting I/O failed: -6 00:28:38.810 Write completed with error (sct=0, sc=8) 00:28:38.810 Write completed with error (sct=0, sc=8) 00:28:38.810 Write 
completed with error (sct=0, sc=8) 00:28:38.810 Write completed with error (sct=0, sc=8) 00:28:38.810 starting I/O failed: -6 00:28:38.810 Write completed with error (sct=0, sc=8) 00:28:38.810 Write completed with error (sct=0, sc=8) 00:28:38.810 Write completed with error (sct=0, sc=8) 00:28:38.810 Write completed with error (sct=0, sc=8) 00:28:38.810 starting I/O failed: -6 00:28:38.810 Write completed with error (sct=0, sc=8) 00:28:38.810 Write completed with error (sct=0, sc=8) 00:28:38.810 Write completed with error (sct=0, sc=8) 00:28:38.810 Write completed with error (sct=0, sc=8) 00:28:38.810 starting I/O failed: -6 00:28:38.810 Write completed with error (sct=0, sc=8) 00:28:38.810 Write completed with error (sct=0, sc=8) 00:28:38.810 Write completed with error (sct=0, sc=8) 00:28:38.810 Write completed with error (sct=0, sc=8) 00:28:38.810 starting I/O failed: -6 00:28:38.810 Write completed with error (sct=0, sc=8) 00:28:38.810 Write completed with error (sct=0, sc=8) 00:28:38.810 Write completed with error (sct=0, sc=8) 00:28:38.810 Write completed with error (sct=0, sc=8) 00:28:38.810 starting I/O failed: -6 00:28:38.810 Write completed with error (sct=0, sc=8) 00:28:38.810 Write completed with error (sct=0, sc=8) 00:28:38.810 Write completed with error (sct=0, sc=8) 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 starting I/O failed: -6 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 starting I/O failed: -6 00:28:38.811 starting I/O failed: -6 00:28:38.811 starting I/O failed: -6 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 starting I/O failed: -6 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 starting I/O failed: -6 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 starting I/O failed: -6 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 starting I/O failed: -6 00:28:38.811 [2024-11-02 11:40:38.787205] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8d900 is same with the state(6) to be set 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 starting I/O failed: -6 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 starting I/O failed: -6 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 starting I/O failed: -6 00:28:38.811 Write completed with error (sct=0, sc=8) 
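Everything from here to the end of the test case is expected fallout from the test design: spdk_nvme_perf (pid 3908353) was started with 20 seconds of queued writes, and the target was killed five seconds in, so every command still in flight completes in error. A minimal sketch of that sequence, assuming nvmf_tgt was started by the same shell (as nvmfappstart does) so that wait can reap it:

  nvmfpid=3908280     # nvmf_tgt pid reported by waitforlisten above
  # Saturate the TCP listener with 128-deep 45056-byte random writes for 20 s.
  build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 \
      -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 &
  perfpid=$!          # 3908353 in this run
  sleep 5             # let the workload ramp up
  # Kill the target mid-run; perf keeps running and reports the aborted writes below.
  kill "$nvmfpid" && wait "$nvmfpid" 2>/dev/null || true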
00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 [2024-11-02 11:40:38.787603] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8ca90 is same with the state(6) to be set 00:28:38.811 [2024-11-02 11:40:38.787570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.811 [2024-11-02 11:40:38.787643] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8ca90 is same with the state(6) to be set 00:28:38.811 [2024-11-02 11:40:38.787659] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8ca90 is same with the state(6) to be set 00:28:38.811 [2024-11-02 11:40:38.787672] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8ca90 is same with the state(6) to be set 00:28:38.811 [2024-11-02 11:40:38.787685] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8ca90 is same with the state(6) to be set 00:28:38.811 [2024-11-02 11:40:38.787698] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8ca90 is same with the state(6) to be set 00:28:38.811 [2024-11-02 11:40:38.787710] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8ca90 is same with the state(6) to be set 00:28:38.811 [2024-11-02 11:40:38.787723] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8ca90 is same with the state(6) to be set 00:28:38.811 [2024-11-02 11:40:38.787735] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8ca90 is same with the state(6) to be set 00:28:38.811 [2024-11-02 11:40:38.787746] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8ca90 is same with the state(6) to be set 00:28:38.811 starting I/O failed: -6 00:28:38.811 starting I/O failed: -6 00:28:38.811 starting I/O failed: -6 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 starting I/O failed: -6 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 starting I/O failed: -6 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 starting I/O failed: -6 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 starting I/O failed: -6 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 starting I/O failed: -6 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 starting I/O failed: -6 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 starting I/O failed: -6 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 starting I/O failed: -6 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 starting I/O failed: -6 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 starting I/O failed: -6 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 Write completed with error 
(sct=0, sc=8) 00:28:38.811 starting I/O failed: -6 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 starting I/O failed: -6 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 starting I/O failed: -6 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 starting I/O failed: -6 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 starting I/O failed: -6 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 starting I/O failed: -6 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 starting I/O failed: -6 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 starting I/O failed: -6 00:28:38.811 [2024-11-02 11:40:38.789130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:38.811 starting I/O failed: -6 00:28:38.811 starting I/O failed: -6 00:28:38.811 starting I/O failed: -6 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 starting I/O failed: -6 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 starting I/O failed: -6 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 starting I/O failed: -6 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 starting I/O failed: -6 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 starting I/O failed: -6 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 starting I/O failed: -6 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 starting I/O failed: -6 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 starting I/O failed: -6 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 starting I/O failed: -6 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 starting I/O failed: -6 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 starting I/O failed: -6 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 starting I/O failed: -6 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 starting I/O failed: -6 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 starting I/O failed: -6 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 starting I/O failed: -6 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 starting I/O failed: -6 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 starting I/O failed: -6 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 starting I/O failed: -6 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 starting I/O failed: -6 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 starting I/O failed: -6 00:28:38.811 Write completed 
with error (sct=0, sc=8) 00:28:38.811 starting I/O failed: -6 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 starting I/O failed: -6 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 starting I/O failed: -6 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 starting I/O failed: -6 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 starting I/O failed: -6 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 starting I/O failed: -6 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 starting I/O failed: -6 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 starting I/O failed: -6 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 starting I/O failed: -6 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 starting I/O failed: -6 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 starting I/O failed: -6 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 starting I/O failed: -6 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 starting I/O failed: -6 00:28:38.811 [2024-11-02 11:40:38.790517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.811 starting I/O failed: -6 00:28:38.811 starting I/O failed: -6 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 starting I/O failed: -6 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 starting I/O failed: -6 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.811 starting I/O failed: -6 00:28:38.811 Write completed with error (sct=0, sc=8) 00:28:38.812 starting I/O failed: -6 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 starting I/O failed: -6 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 starting I/O failed: -6 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 starting I/O failed: -6 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 starting I/O failed: -6 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 starting I/O failed: -6 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 starting I/O failed: -6 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 starting I/O failed: -6 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 starting I/O failed: -6 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 starting I/O failed: -6 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 starting I/O failed: -6 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 starting I/O failed: -6 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 starting I/O failed: -6 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 starting I/O failed: -6 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 starting I/O failed: -6 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 starting I/O failed: -6 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 starting I/O failed: -6 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 starting I/O failed: 
-6 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 starting I/O failed: -6 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 starting I/O failed: -6 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 starting I/O failed: -6 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 starting I/O failed: -6 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 starting I/O failed: -6 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 starting I/O failed: -6 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 starting I/O failed: -6 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 starting I/O failed: -6 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 starting I/O failed: -6 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 starting I/O failed: -6 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 starting I/O failed: -6 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 starting I/O failed: -6 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 starting I/O failed: -6 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 starting I/O failed: -6 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 starting I/O failed: -6 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 starting I/O failed: -6 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 starting I/O failed: -6 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 starting I/O failed: -6 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 starting I/O failed: -6 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 starting I/O failed: -6 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 starting I/O failed: -6 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 starting I/O failed: -6 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 starting I/O failed: -6 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 starting I/O failed: -6 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 starting I/O failed: -6 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 starting I/O failed: -6 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 starting I/O failed: -6 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 starting I/O failed: -6 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 starting I/O failed: -6 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 starting I/O failed: -6 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 starting I/O failed: -6 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 starting I/O failed: -6 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 starting I/O failed: -6 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 starting I/O failed: -6 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 starting I/O failed: -6 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 starting I/O failed: -6 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 starting I/O failed: -6 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 starting I/O failed: -6 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 starting I/O failed: -6 
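The remainder of the storm is the initiator draining its queues, one 'Write completed with error (sct=0, sc=8)' plus 'starting I/O failed: -6' pair per aborted command, where sct and sc are the NVMe status type and code reported for each write. When triaging a capture like this it is usually enough to count the lines rather than read them; a small sketch, assuming the console output was saved to a file named nvmf_shutdown_tc4.log:

  # Tally the repeated per-command failures instead of scanning them.
  grep -c 'Write completed with error (sct=0, sc=8)' nvmf_shutdown_tc4.log
  grep -c 'starting I/O failed: -6' nvmf_shutdown_tc4.log
  # One line per subsystem/qpair whose connection was torn down.
  grep -o 'nqn\.2016-06\.io\.spdk:cnode[0-9]*.*qpair id [0-9]*' nvmf_shutdown_tc4.log | sort | uniq -c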
00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 starting I/O failed: -6 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 starting I/O failed: -6 00:28:38.812 [2024-11-02 11:40:38.792677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.812 NVMe io qpair process completion error 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 starting I/O failed: -6 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 starting I/O failed: -6 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 starting I/O failed: -6 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 starting I/O failed: -6 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 starting I/O failed: -6 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 starting I/O failed: -6 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 starting I/O failed: -6 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 starting I/O failed: -6 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 starting I/O failed: -6 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 starting I/O failed: -6 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 [2024-11-02 11:40:38.793878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.812 starting I/O failed: -6 00:28:38.812 [2024-11-02 11:40:38.794064] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bc20 is same with the state(6) to be set 00:28:38.812 starting I/O failed: -6 00:28:38.812 [2024-11-02 11:40:38.794102] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bc20 is same with the state(6) to be set 00:28:38.812 [2024-11-02 11:40:38.794119] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x1b8bc20 is same with the state(6) to be set 00:28:38.812 [2024-11-02 11:40:38.794132] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bc20 is same with the state(6) to be set 00:28:38.812 [2024-11-02 11:40:38.794145] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bc20 is same with the state(6) to be set 00:28:38.812 [2024-11-02 11:40:38.794158] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bc20 is same with the state(6) to be set 00:28:38.812 [2024-11-02 11:40:38.794170] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bc20 is same with the state(6) to be set 00:28:38.812 starting I/O failed: -6 00:28:38.812 [2024-11-02 11:40:38.794183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bc20 is same with the state(6) to be set 00:28:38.812 [2024-11-02 11:40:38.794195] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bc20 is same with the state(6) to be set 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 starting I/O failed: -6 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 starting I/O failed: -6 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 starting I/O failed: -6 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 starting I/O failed: -6 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 starting I/O failed: -6 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 starting I/O failed: -6 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 starting I/O failed: -6 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 starting I/O failed: -6 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.812 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 [2024-11-02 11:40:38.794851] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8c5c0 is same with Write completed with error (sct=0, sc=8) 00:28:38.813 the state(6) to be set 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed 
with error (sct=0, sc=8) 00:28:38.813 [2024-11-02 11:40:38.794884] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8c5c0 is same with the state(6) to be set 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 [2024-11-02 11:40:38.794917] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8c5c0 is same with the state(6) to be set 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 [2024-11-02 11:40:38.794931] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8c5c0 is same with the state(6) to be set 00:28:38.813 starting I/O failed: -6 00:28:38.813 [2024-11-02 11:40:38.794944] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8c5c0 is same with the state(6) to be set 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 [2024-11-02 11:40:38.794957] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8c5c0 is same with the state(6) to be set 00:28:38.813 [2024-11-02 11:40:38.794969] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8c5c0 is same with the state(6) to be set 00:28:38.813 [2024-11-02 11:40:38.794982] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8c5c0 is same with the state(6) to be set 00:28:38.813 [2024-11-02 11:40:38.794985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.813 starting I/O failed: -6 00:28:38.813 starting I/O failed: -6 00:28:38.813 starting I/O failed: -6 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 
starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 [2024-11-02 11:40:38.796334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 
00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.813 starting I/O failed: -6 00:28:38.813 Write completed with error (sct=0, sc=8) 00:28:38.814 starting I/O failed: -6 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 starting I/O failed: -6 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 starting I/O failed: -6 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 starting I/O failed: -6 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 starting I/O failed: -6 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 starting I/O failed: -6 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 starting I/O failed: -6 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 starting I/O failed: -6 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 starting I/O failed: -6 00:28:38.814 Write completed with error (sct=0, sc=8) 
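Because the target is taken down deliberately, cleanup relies on the trap installed when the perf run was launched rather than on normal teardown; that is what keeps a run like this from leaking the background perf process or the namespace. The idiom, copied from the trace (process_shm and nvmftestfini are autotest helpers whose definitions are not part of this log):

  # Dump shared memory, reap the background perf process and tear the nvmf test
  # environment back down, even if the test case aborts in the middle of the storm.
  trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' \
      SIGINT SIGTERM EXIT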
00:28:38.814 starting I/O failed: -6 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 starting I/O failed: -6 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 starting I/O failed: -6 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 starting I/O failed: -6 00:28:38.814 [2024-11-02 11:40:38.797978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.814 NVMe io qpair process completion error 00:28:38.814 [2024-11-02 11:40:38.798888] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1919830 is same with the state(6) to be set 00:28:38.814 [2024-11-02 11:40:38.798918] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1919830 is same with the state(6) to be set 00:28:38.814 [2024-11-02 11:40:38.798939] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1919830 is same with the state(6) to be set 00:28:38.814 [2024-11-02 11:40:38.798952] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1919830 is same with the state(6) to be set 00:28:38.814 [2024-11-02 11:40:38.798964] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1919830 is same with the state(6) to be set 00:28:38.814 [2024-11-02 11:40:38.798977] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1919830 is same with the state(6) to be set 00:28:38.814 [2024-11-02 11:40:38.798989] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1919830 is same with the state(6) to be set 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 starting I/O failed: -6 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 starting I/O failed: -6 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 starting I/O failed: -6 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 starting I/O failed: -6 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 starting I/O failed: -6 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 starting I/O failed: -6 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 starting I/O failed: -6 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 
starting I/O failed: -6 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 starting I/O failed: -6 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 [2024-11-02 11:40:38.799764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 starting I/O failed: -6 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 starting I/O failed: -6 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 starting I/O failed: -6 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 starting I/O failed: -6 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 starting I/O failed: -6 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 starting I/O failed: -6 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 starting I/O failed: -6 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 starting I/O failed: -6 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 starting I/O failed: -6 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 starting I/O failed: -6 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 starting I/O failed: -6 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 starting I/O failed: -6 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 starting I/O failed: -6 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 starting I/O failed: -6 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 starting I/O failed: -6 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 starting I/O failed: -6 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 starting I/O failed: -6 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 starting I/O failed: -6 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 starting I/O failed: -6 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 starting I/O failed: -6 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 starting I/O failed: -6 00:28:38.814 Write completed 
with error (sct=0, sc=8) 00:28:38.814 starting I/O failed: -6 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 [2024-11-02 11:40:38.801011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.814 starting I/O failed: -6 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 starting I/O failed: -6 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 starting I/O failed: -6 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 starting I/O failed: -6 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 starting I/O failed: -6 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 starting I/O failed: -6 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 starting I/O failed: -6 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 starting I/O failed: -6 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 starting I/O failed: -6 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 starting I/O failed: -6 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 starting I/O failed: -6 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 starting I/O failed: -6 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 starting I/O failed: -6 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 starting I/O failed: -6 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 starting I/O failed: -6 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 starting I/O failed: -6 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 starting I/O failed: -6 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 starting I/O failed: -6 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 starting I/O failed: -6 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 starting I/O failed: -6 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 starting I/O failed: -6 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 starting I/O failed: -6 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 starting I/O failed: -6 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 starting I/O failed: -6 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 starting I/O failed: -6 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.814 starting I/O failed: -6 00:28:38.814 Write completed with error (sct=0, sc=8) 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 starting I/O failed: -6 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 starting I/O failed: -6 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 starting I/O failed: -6 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 Write completed with error (sct=0, sc=8) 
00:28:38.815 starting I/O failed: -6 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 starting I/O failed: -6 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 starting I/O failed: -6 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 starting I/O failed: -6 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 starting I/O failed: -6 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 starting I/O failed: -6 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 starting I/O failed: -6 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 starting I/O failed: -6 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 starting I/O failed: -6 00:28:38.815 [2024-11-02 11:40:38.802165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.815 NVMe io qpair process completion error 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 starting I/O failed: -6 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 starting I/O failed: -6 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 starting I/O failed: -6 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 starting I/O failed: -6 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 starting I/O failed: -6 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 starting I/O failed: -6 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 starting I/O failed: -6 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 starting I/O failed: -6 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 starting I/O failed: -6 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 [2024-11-02 11:40:38.803301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport 
error -6 (No such device or address) on qpair id 1 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 starting I/O failed: -6 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 starting I/O failed: -6 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 starting I/O failed: -6 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 starting I/O failed: -6 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 starting I/O failed: -6 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 starting I/O failed: -6 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 starting I/O failed: -6 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 starting I/O failed: -6 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 starting I/O failed: -6 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 starting I/O failed: -6 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 starting I/O failed: -6 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 starting I/O failed: -6 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 starting I/O failed: -6 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 starting I/O failed: -6 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 starting I/O failed: -6 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 starting I/O failed: -6 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 starting I/O failed: -6 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 starting I/O failed: -6 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 starting I/O failed: -6 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 starting I/O failed: -6 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 starting I/O failed: -6 00:28:38.815 [2024-11-02 11:40:38.804295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 starting I/O failed: -6 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 starting I/O failed: -6 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 starting I/O failed: -6 00:28:38.815 Write completed with 
error (sct=0, sc=8) 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 starting I/O failed: -6 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 starting I/O failed: -6 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 starting I/O failed: -6 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 starting I/O failed: -6 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 starting I/O failed: -6 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 starting I/O failed: -6 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 starting I/O failed: -6 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 starting I/O failed: -6 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 starting I/O failed: -6 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 starting I/O failed: -6 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 starting I/O failed: -6 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 starting I/O failed: -6 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 starting I/O failed: -6 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 starting I/O failed: -6 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 starting I/O failed: -6 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.815 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O 
failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 [2024-11-02 11:40:38.805470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 
00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 [2024-11-02 11:40:38.807153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.816 NVMe io qpair process completion error 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed 
with error (sct=0, sc=8) 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 [2024-11-02 11:40:38.808555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:38.816 starting I/O failed: -6 00:28:38.816 starting I/O failed: -6 00:28:38.816 starting I/O failed: -6 00:28:38.816 starting I/O failed: -6 00:28:38.816 starting I/O failed: -6 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.816 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write 
completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 
starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 [2024-11-02 11:40:38.810685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: 
-6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.817 starting I/O failed: -6 00:28:38.817 Write completed with error (sct=0, sc=8) 00:28:38.818 starting I/O failed: -6 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 starting I/O failed: -6 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 starting I/O failed: -6 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 starting I/O failed: -6 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 starting I/O failed: -6 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 starting I/O failed: -6 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 starting I/O failed: -6 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 starting I/O failed: -6 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 starting I/O failed: -6 00:28:38.818 [2024-11-02 11:40:38.812701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.818 NVMe io qpair process completion error 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 starting I/O failed: -6 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 starting I/O failed: -6 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 Write completed with error (sct=0, sc=8) 
00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 starting I/O failed: -6 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 starting I/O failed: -6 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 starting I/O failed: -6 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 starting I/O failed: -6 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 starting I/O failed: -6 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 starting I/O failed: -6 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 starting I/O failed: -6 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 [2024-11-02 11:40:38.813855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 starting I/O failed: -6 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 starting I/O failed: -6 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 starting I/O failed: -6 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 starting I/O failed: -6 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 starting I/O failed: -6 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 starting I/O failed: -6 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 starting I/O failed: -6 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 starting I/O failed: -6 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 starting I/O failed: -6 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 starting I/O failed: -6 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 starting I/O failed: -6 00:28:38.818 Write completed with error 
(sct=0, sc=8) 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 starting I/O failed: -6 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 starting I/O failed: -6 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 starting I/O failed: -6 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 starting I/O failed: -6 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 starting I/O failed: -6 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 starting I/O failed: -6 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 starting I/O failed: -6 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 starting I/O failed: -6 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 starting I/O failed: -6 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 starting I/O failed: -6 00:28:38.818 [2024-11-02 11:40:38.814856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.818 starting I/O failed: -6 00:28:38.818 starting I/O failed: -6 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 starting I/O failed: -6 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 starting I/O failed: -6 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 starting I/O failed: -6 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 starting I/O failed: -6 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 starting I/O failed: -6 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 starting I/O failed: -6 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 starting I/O failed: -6 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 starting I/O failed: -6 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 starting I/O failed: -6 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 starting I/O failed: -6 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 starting I/O failed: -6 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 starting I/O failed: -6 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 starting I/O failed: -6 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 starting I/O failed: -6 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 starting I/O failed: -6 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 starting I/O failed: -6 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 starting I/O failed: 
-6 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 starting I/O failed: -6 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 starting I/O failed: -6 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 starting I/O failed: -6 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 starting I/O failed: -6 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 starting I/O failed: -6 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 starting I/O failed: -6 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 starting I/O failed: -6 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 starting I/O failed: -6 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 starting I/O failed: -6 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 starting I/O failed: -6 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 starting I/O failed: -6 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 starting I/O failed: -6 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 starting I/O failed: -6 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 starting I/O failed: -6 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 starting I/O failed: -6 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 starting I/O failed: -6 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 starting I/O failed: -6 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 starting I/O failed: -6 00:28:38.818 [2024-11-02 11:40:38.816034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.818 starting I/O failed: -6 00:28:38.818 Write completed with error (sct=0, sc=8) 00:28:38.818 starting I/O failed: -6 00:28:38.819 Write completed with error (sct=0, sc=8) 00:28:38.819 starting I/O failed: -6 00:28:38.819 Write completed with error (sct=0, sc=8) 00:28:38.819 starting I/O failed: -6 00:28:38.819 Write completed with error (sct=0, sc=8) 00:28:38.819 starting I/O failed: -6 00:28:38.819 Write completed with error (sct=0, sc=8) 00:28:38.819 starting I/O failed: -6 00:28:38.819 Write completed with error (sct=0, sc=8) 00:28:38.819 starting I/O failed: -6 00:28:38.819 Write completed with error (sct=0, sc=8) 00:28:38.819 starting I/O failed: -6 00:28:38.819 Write completed with error (sct=0, sc=8) 00:28:38.819 starting I/O failed: -6 00:28:38.819 Write completed with error (sct=0, sc=8) 00:28:38.819 starting I/O failed: -6 00:28:38.819 Write completed with error (sct=0, sc=8) 00:28:38.819 starting I/O failed: -6 00:28:38.819 Write completed with error (sct=0, sc=8) 00:28:38.819 starting I/O failed: -6 00:28:38.819 Write completed with error (sct=0, sc=8) 00:28:38.819 starting I/O failed: -6 00:28:38.819 Write completed with error (sct=0, sc=8) 00:28:38.819 starting I/O failed: -6 00:28:38.819 Write completed with error (sct=0, sc=8) 00:28:38.819 starting I/O failed: -6 00:28:38.819 Write completed with error (sct=0, sc=8) 00:28:38.819 starting 
I/O failed: -6 00:28:38.819 Write completed with error (sct=0, sc=8) 00:28:38.819 starting I/O failed: -6 00:28:38.819 Write completed with error (sct=0, sc=8) 00:28:38.819 starting I/O failed: -6 00:28:38.819 Write completed with error (sct=0, sc=8) 00:28:38.819 starting I/O failed: -6 00:28:38.819 Write completed with error (sct=0, sc=8) 00:28:38.819 starting I/O failed: -6 00:28:38.819 Write completed with error (sct=0, sc=8) 00:28:38.819 starting I/O failed: -6 00:28:38.819 Write completed with error (sct=0, sc=8) 00:28:38.819 starting I/O failed: -6 00:28:38.819 Write completed with error (sct=0, sc=8) 00:28:38.819 starting I/O failed: -6 00:28:38.819 Write completed with error (sct=0, sc=8) 00:28:38.819 starting I/O failed: -6 00:28:38.819 Write completed with error (sct=0, sc=8) 00:28:38.819 starting I/O failed: -6 00:28:38.819 Write completed with error (sct=0, sc=8) 00:28:38.819 starting I/O failed: -6 00:28:38.819 Write completed with error (sct=0, sc=8) 00:28:38.819 starting I/O failed: -6 00:28:38.819 Write completed with error (sct=0, sc=8) 00:28:38.819 starting I/O failed: -6 00:28:38.819 Write completed with error (sct=0, sc=8) 00:28:38.819 starting I/O failed: -6 00:28:38.819 Write completed with error (sct=0, sc=8) 00:28:38.819 starting I/O failed: -6 00:28:38.819 Write completed with error (sct=0, sc=8) 00:28:38.819 starting I/O failed: -6 00:28:38.819 Write completed with error (sct=0, sc=8) 00:28:38.819 starting I/O failed: -6 00:28:38.819 Write completed with error (sct=0, sc=8) 00:28:38.819 starting I/O failed: -6 00:28:38.819 Write completed with error (sct=0, sc=8) 00:28:38.819 starting I/O failed: -6 00:28:38.819 Write completed with error (sct=0, sc=8) 00:28:38.819 starting I/O failed: -6 00:28:38.819 Write completed with error (sct=0, sc=8) 00:28:38.819 starting I/O failed: -6 00:28:38.819 Write completed with error (sct=0, sc=8) 00:28:38.819 starting I/O failed: -6 00:28:38.819 Write completed with error (sct=0, sc=8) 00:28:38.819 starting I/O failed: -6 00:28:38.819 Write completed with error (sct=0, sc=8) 00:28:38.819 starting I/O failed: -6 00:28:38.819 Write completed with error (sct=0, sc=8) 00:28:38.819 starting I/O failed: -6 00:28:38.819 Write completed with error (sct=0, sc=8) 00:28:38.819 starting I/O failed: -6 00:28:38.819 Write completed with error (sct=0, sc=8) 00:28:38.819 starting I/O failed: -6 00:28:38.819 Write completed with error (sct=0, sc=8) 00:28:38.819 starting I/O failed: -6 00:28:38.819 Write completed with error (sct=0, sc=8) 00:28:38.819 starting I/O failed: -6 00:28:38.819 Write completed with error (sct=0, sc=8) 00:28:38.819 starting I/O failed: -6 00:28:38.819 Write completed with error (sct=0, sc=8) 00:28:38.819 starting I/O failed: -6 00:28:38.819 Write completed with error (sct=0, sc=8) 00:28:38.819 starting I/O failed: -6 00:28:38.819 Write completed with error (sct=0, sc=8) 00:28:38.819 starting I/O failed: -6 00:28:38.819 Write completed with error (sct=0, sc=8) 00:28:38.819 starting I/O failed: -6 00:28:38.819 Write completed with error (sct=0, sc=8) 00:28:38.819 starting I/O failed: -6 00:28:38.819 Write completed with error (sct=0, sc=8) 00:28:38.819 starting I/O failed: -6 00:28:38.819 Write completed with error (sct=0, sc=8) 00:28:38.819 starting I/O failed: -6 00:28:38.819 Write completed with error (sct=0, sc=8) 00:28:38.819 starting I/O failed: -6 00:28:38.819 Write completed with error (sct=0, sc=8) 00:28:38.819 starting I/O failed: -6 00:28:38.819 Write completed with error (sct=0, sc=8) 00:28:38.819 starting I/O 
failed: -6
00:28:38.819 Write completed with error (sct=0, sc=8)
00:28:38.819 starting I/O failed: -6
[dozens of repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries]
00:28:38.819 [2024-11-02 11:40:38.819359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:38.819 NVMe io qpair process completion error
[dozens of repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries]
00:28:38.820 [2024-11-02 11:40:38.822284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
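The "-6" in these entries is the negated errno returned once the TCP connection behind a qpair is gone; the log itself decodes it as "No such device or address" (ENXIO). The NVMe status on the aborted writes (sct=0, sc=8) is a generic status type; status code 0x08 there is defined in recent NVMe base specs as "Command Aborted due to SQ Deletion", which fits subsystems being deleted mid-run (worth confirming against the spec revision in use). A quick, stand-alone way to double-check the errno mapping (not part of the log output):

  # confirm that errno 6 maps to ENXIO ("No such device or address") on this host
  python3 -c 'import errno, os; print(errno.errorcode[6], "-", os.strerror(6))'
  # expected: ENXIO - No such device or address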
[dozens of repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries]
00:28:38.821 [2024-11-02 11:40:38.825503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:38.821 NVMe io qpair process completion error
[dozens of repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries]
00:28:38.821 [2024-11-02 11:40:38.826764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
[dozens of repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries]
00:28:38.821 [2024-11-02 11:40:38.827802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
[dozens of repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries]
00:28:38.821 [2024-11-02 11:40:38.828968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
[dozens of repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries]
00:28:38.822 [2024-11-02 11:40:38.830605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:38.822 NVMe io qpair process completion error
00:28:38.822 Write completed with error (sct=0, sc=8)
[dozens of repeated "Write completed with error (sct=0, sc=8)" entries, with "starting I/O failed: -6" resuming partway through]
00:28:38.823 [2024-11-02 11:40:38.833833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
[dozens of repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries]
00:28:38.823 [2024-11-02 11:40:38.834872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
[dozens of repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries]
00:28:38.824 [2024-11-02 11:40:38.836035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
[dozens of repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries]
00:28:38.825 [2024-11-02 11:40:38.838326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:28:38.825 NVMe io qpair process completion error
[dozens of repeated "Write completed with error (sct=0, sc=8)" entries]
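When triaging a flood like the one above, the useful signal is which subsystems and how many qpairs actually hit the transport error, not the individual aborted writes. One way to tally them from a saved console log (the file name here is only an example):

  # count CQ transport errors per subsystem qpair from a saved console log
  grep -o 'nqn.2016-06.io.spdk:cnode[0-9]*, 1] CQ transport error -6' console.log | sort | uniq -c | sort -rn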
00:28:38.826 Initializing NVMe Controllers
00:28:38.826 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:28:38.826 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:28:38.826 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:28:38.826 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:28:38.826 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:28:38.826 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:38.826 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:28:38.826 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:28:38.826 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:28:38.826 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
[each attach line was followed by: "Controller IO queue size 128, less than required." and "Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver."]
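The repeated "Controller IO queue size 128, less than required" warnings mean the perf run requested a deeper queue than the 128 entries each target I/O queue advertises, so the surplus requests sit queued in the host driver, which is presumably why so many submissions are still being started (and failing) when the subsystems are deleted. For a local reproduction without the warning, the same binary can be pointed at one subsystem with a queue depth of 128 or less; the values below are illustrative and the flags should be checked against spdk_nvme_perf --help for the build in use:

  # illustrative invocation only; not the exact command the test harness ran
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
      -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -q 64 -o 4096 -w write -t 10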
00:28:38.826 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:28:38.826 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:28:38.826 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:28:38.826 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:28:38.826 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:28:38.826 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:28:38.826 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:28:38.826 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:28:38.826 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:28:38.826 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:28:38.826 Initialization complete. Launching workers.
00:28:38.826 ========================================================
00:28:38.826 Latency(us)
00:28:38.826 Device Information : IOPS MiB/s Average min max
00:28:38.826 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1731.62 74.41 73942.67 795.54 123028.27
00:28:38.826 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1704.95 73.26 75405.20 872.06 145655.60
00:28:38.826 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1692.93 72.74 75660.96 1022.60 144033.11
00:28:38.826 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1715.67 73.72 74683.93 585.20 142003.24
00:28:38.826 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1777.97 76.40 72092.72 996.56 115677.80
00:28:38.826 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1793.93 77.08 71490.43 989.46 131046.17
00:28:38.826 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1747.80 75.10 73423.66 886.34 133106.77
00:28:38.826 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1787.81 76.82 71707.79 685.28 116005.90
00:28:38.826 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1715.88 73.73 74783.78 1076.04 116049.92
00:28:38.826 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1725.28 74.13 73580.27 659.86 114425.34
00:28:38.826 ========================================================
00:28:38.826 Total : 17393.85 747.39 73649.54 585.20 145655.60
00:28:38.826
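The aggregate row is the quickest health signal here: roughly 17.4K write IOPS and 747 MiB/s across the ten subsystems, with an average completion latency around 73.6 ms, before the shutdown was triggered. If the console output has been saved to a file, that row can be pulled out with a one-liner along these lines (the file name is only an example, and it assumes the aggregate line still reads "Total :"):

  # print the aggregate IOPS/throughput/latency figures from a saved perf log
  awk '/ Total +:/ {print "IOPS="$(NF-4), "MiB/s="$(NF-3), "avg_us="$(NF-2), "min_us="$(NF-1), "max_us="$NF}' console.log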
00:28:38.826 [2024-11-02 11:40:38.847017] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd56a0 is same with the state(6) to be set
00:28:38.826 [2024-11-02 11:40:38.847123] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd7470 is same with the state(6) to be set
00:28:38.826 [2024-11-02 11:40:38.847180] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd7140 is same with the state(6) to be set
00:28:38.826 [2024-11-02 11:40:38.847238] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5370 is same with the state(6) to be set
00:28:38.826 [2024-11-02 11:40:38.847302] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd77a0 is same with the state(6) to be set
00:28:38.826 [2024-11-02 11:40:38.847361] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdab30 is same with the state(6) to be set
00:28:38.826 [2024-11-02 11:40:38.847472] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd59d0 is same with the state(6) to be set
00:28:38.826 [2024-11-02 11:40:38.847528] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd4fb0 is same with the state(6) to be set
00:28:38.826 [2024-11-02 11:40:38.847584] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd6e10 is same with the state(6) to be set
00:28:38.826 [2024-11-02 11:40:38.847639] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5190 is same with the state(6) to be set
00:28:38.826 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:28:39.086 11:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:28:40.025 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 3908353
00:28:40.025 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0
00:28:40.025 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3908353
00:28:40.025 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@638 -- # local arg=wait
00:28:40.025 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:28:40.025 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait
00:28:40.025 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:28:40.025 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 3908353
00:28:40.025 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1
00:28:40.025 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:28:40.025 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:28:40.025 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:28:40.025 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:28:40.025 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:28:40.025 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:28:40.025 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:28:40.025 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:28:40.025 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup
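The NOT wait 3908353 sequence traced above is the actual assertion of this test case: the spdk_nvme_perf process (pid 3908353) is expected to have died once its controllers were deleted, so waiting on it must fail for the test to pass. A simplified sketch of that pattern is shown below; it is not the real autotest_common.sh implementation, which additionally whitelists exits caused by specific signals:

  # simplified sketch only -- not the real NOT() helper from autotest_common.sh
  NOT() {
      local es=0
      "$@" || es=$?    # run the command that is expected to fail
      (( es != 0 ))    # report success only if it really did fail
  }
  NOT wait 3908353     # passes here because the perf process already exited with an error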
11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:28:40.025 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:40.025 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:28:40.025 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:40.025 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:40.025 rmmod nvme_tcp 00:28:40.025 rmmod nvme_fabrics 00:28:40.025 rmmod nvme_keyring 00:28:40.025 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:40.025 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:28:40.025 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:28:40.025 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 3908280 ']' 00:28:40.025 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 3908280 00:28:40.025 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@952 -- # '[' -z 3908280 ']' 00:28:40.025 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # kill -0 3908280 00:28:40.025 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3908280) - No such process 00:28:40.025 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@979 -- # echo 'Process with pid 3908280 is not found' 00:28:40.025 Process with pid 3908280 is not found 00:28:40.025 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:40.025 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:40.025 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:40.025 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:28:40.025 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:28:40.025 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:40.025 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:28:40.025 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:40.025 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:40.025 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:40.025 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:40.025 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:42.561 11:40:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:42.561 00:28:42.561 real 0m9.682s 00:28:42.561 user 0m21.339s 00:28:42.561 sys 0m6.131s 00:28:42.561 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:42.561 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:42.561 ************************************ 00:28:42.561 END TEST nvmf_shutdown_tc4 00:28:42.561 ************************************ 00:28:42.561 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:28:42.561 00:28:42.561 real 0m36.642s 00:28:42.561 user 1m36.129s 00:28:42.561 sys 0m12.508s 00:28:42.561 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:42.561 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:42.561 ************************************ 00:28:42.561 END TEST nvmf_shutdown 00:28:42.561 ************************************ 00:28:42.561 11:40:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:28:42.561 00:28:42.561 real 18m6.234s 00:28:42.561 user 50m18.765s 00:28:42.561 sys 3m59.540s 00:28:42.561 11:40:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:42.561 11:40:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:42.561 ************************************ 00:28:42.561 END TEST nvmf_target_extra 00:28:42.561 ************************************ 00:28:42.561 11:40:42 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:28:42.561 11:40:42 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:28:42.561 11:40:42 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:42.561 11:40:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:42.561 ************************************ 00:28:42.561 START TEST nvmf_host 00:28:42.561 ************************************ 00:28:42.561 11:40:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:28:42.561 * Looking for test storage... 
00:28:42.561 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:28:42.561 11:40:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:42.561 11:40:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lcov --version 00:28:42.561 11:40:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:42.561 11:40:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:42.561 11:40:42 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:42.561 11:40:42 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:42.561 11:40:42 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:42.561 11:40:42 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:28:42.561 11:40:42 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:28:42.561 11:40:42 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:28:42.561 11:40:42 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:28:42.561 11:40:42 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:28:42.561 11:40:42 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:28:42.561 11:40:42 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:28:42.561 11:40:42 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:42.561 11:40:42 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:28:42.561 11:40:42 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:28:42.561 11:40:42 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:42.561 11:40:42 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:42.561 11:40:42 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:28:42.561 11:40:42 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:28:42.561 11:40:42 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:42.561 11:40:42 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:28:42.561 11:40:42 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:28:42.561 11:40:42 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:28:42.561 11:40:42 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:28:42.561 11:40:42 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:42.561 11:40:42 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:28:42.561 11:40:42 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:28:42.561 11:40:42 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:42.561 11:40:42 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:42.561 11:40:42 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:28:42.561 11:40:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:42.561 11:40:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:42.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:42.561 --rc genhtml_branch_coverage=1 00:28:42.561 --rc genhtml_function_coverage=1 00:28:42.561 --rc genhtml_legend=1 00:28:42.561 --rc geninfo_all_blocks=1 00:28:42.561 --rc geninfo_unexecuted_blocks=1 00:28:42.561 00:28:42.561 ' 00:28:42.561 11:40:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:42.561 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:42.561 --rc genhtml_branch_coverage=1 00:28:42.561 --rc genhtml_function_coverage=1 00:28:42.561 --rc genhtml_legend=1 00:28:42.561 --rc geninfo_all_blocks=1 00:28:42.561 --rc geninfo_unexecuted_blocks=1 00:28:42.561 00:28:42.561 ' 00:28:42.561 11:40:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:42.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:42.561 --rc genhtml_branch_coverage=1 00:28:42.561 --rc genhtml_function_coverage=1 00:28:42.561 --rc genhtml_legend=1 00:28:42.561 --rc geninfo_all_blocks=1 00:28:42.561 --rc geninfo_unexecuted_blocks=1 00:28:42.561 00:28:42.561 ' 00:28:42.561 11:40:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:42.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:42.561 --rc genhtml_branch_coverage=1 00:28:42.561 --rc genhtml_function_coverage=1 00:28:42.561 --rc genhtml_legend=1 00:28:42.561 --rc geninfo_all_blocks=1 00:28:42.561 --rc geninfo_unexecuted_blocks=1 00:28:42.561 00:28:42.561 ' 00:28:42.561 11:40:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:42.561 11:40:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:28:42.561 11:40:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:42.561 11:40:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:42.561 11:40:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:42.561 11:40:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:42.561 11:40:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:42.561 11:40:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:42.561 11:40:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:42.561 11:40:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:42.561 11:40:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:42.561 11:40:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:42.561 11:40:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:42.561 11:40:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:42.561 11:40:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:42.561 11:40:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:42.561 11:40:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:42.561 11:40:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:42.561 11:40:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:42.561 11:40:42 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:28:42.561 11:40:42 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:42.561 11:40:42 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:42.561 11:40:42 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:42.562 11:40:42 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:42.562 11:40:42 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:42.562 11:40:42 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:42.562 11:40:42 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:28:42.562 11:40:42 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:42.562 11:40:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:28:42.562 11:40:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:42.562 11:40:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:42.562 11:40:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:42.562 11:40:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:42.562 11:40:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:42.562 11:40:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:42.562 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:42.562 11:40:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:42.562 11:40:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:42.562 11:40:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:42.562 11:40:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:28:42.562 11:40:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:28:42.562 11:40:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:28:42.562 11:40:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:28:42.562 11:40:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:28:42.562 11:40:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:42.562 11:40:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.562 ************************************ 00:28:42.562 START TEST nvmf_multicontroller 00:28:42.562 ************************************ 00:28:42.562 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:28:42.562 * Looking for test storage... 00:28:42.562 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:42.562 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:42.562 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lcov --version 00:28:42.562 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:42.562 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:42.562 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:42.562 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:42.562 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:42.562 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:28:42.562 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:28:42.562 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:28:42.562 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:28:42.562 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:28:42.562 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:28:42.562 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:28:42.562 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:42.562 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:28:42.562 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:28:42.562 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:42.562 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:42.562 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:28:42.562 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:28:42.562 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:42.562 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:28:42.562 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:28:42.562 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:28:42.562 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:28:42.562 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:42.562 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:28:42.562 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:28:42.562 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:42.562 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:42.562 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:28:42.562 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:42.562 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:42.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:42.562 --rc genhtml_branch_coverage=1 00:28:42.562 --rc genhtml_function_coverage=1 00:28:42.562 --rc genhtml_legend=1 00:28:42.562 --rc geninfo_all_blocks=1 00:28:42.562 --rc geninfo_unexecuted_blocks=1 00:28:42.562 00:28:42.562 ' 00:28:42.562 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:42.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:42.562 --rc genhtml_branch_coverage=1 00:28:42.562 --rc genhtml_function_coverage=1 00:28:42.562 --rc genhtml_legend=1 00:28:42.562 --rc geninfo_all_blocks=1 00:28:42.562 --rc geninfo_unexecuted_blocks=1 00:28:42.562 00:28:42.562 ' 00:28:42.562 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:42.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:42.562 --rc genhtml_branch_coverage=1 00:28:42.562 --rc genhtml_function_coverage=1 00:28:42.562 --rc genhtml_legend=1 00:28:42.562 --rc geninfo_all_blocks=1 00:28:42.562 --rc geninfo_unexecuted_blocks=1 00:28:42.562 00:28:42.562 ' 00:28:42.562 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:42.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:42.562 --rc genhtml_branch_coverage=1 00:28:42.562 --rc genhtml_function_coverage=1 00:28:42.562 --rc genhtml_legend=1 00:28:42.562 --rc geninfo_all_blocks=1 00:28:42.562 --rc geninfo_unexecuted_blocks=1 00:28:42.562 00:28:42.562 ' 00:28:42.562 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:42.562 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:28:42.562 11:40:42 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:42.562 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:42.562 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:42.562 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:42.562 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:42.562 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:42.562 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:42.562 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:42.562 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:42.562 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:42.562 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:42.562 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:42.562 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:42.562 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:42.562 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:42.562 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:42.562 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:42.562 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:28:42.562 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:42.562 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:42.563 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:42.563 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:42.563 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:42.563 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:42.563 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:28:42.563 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:42.563 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:28:42.563 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:42.563 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:42.563 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:42.563 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:42.563 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:42.563 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:42.563 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:42.563 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:42.563 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:42.563 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:42.563 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:42.563 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:42.563 11:40:42 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:28:42.563 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:28:42.563 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:42.563 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:28:42.563 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:28:42.563 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:42.563 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:42.563 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:42.563 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:42.563 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:42.563 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:42.563 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:42.563 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:42.563 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:42.563 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:42.563 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:28:42.563 11:40:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:45.093 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:45.093 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:28:45.093 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:45.093 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:45.093 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:45.093 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:45.093 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:45.093 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:28:45.093 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:45.093 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:28:45.093 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:28:45.093 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:28:45.093 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:28:45.093 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:28:45.093 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:28:45.093 
11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:45.093 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:45.093 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:45.093 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:45.093 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:45.093 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:45.093 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:45.093 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:45.093 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:45.094 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:45.094 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:45.094 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:45.094 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:45.094 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:45.094 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:45.094 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:45.094 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:45.094 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:45.094 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:45.094 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:45.094 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:45.094 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:45.094 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:45.094 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:45.094 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:45.094 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:45.094 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:45.094 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:45.094 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:45.094 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:45.094 11:40:44 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:45.094 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:45.094 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:45.094 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:45.094 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:45.094 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:45.094 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:45.094 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:45.094 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:45.094 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:45.094 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:45.094 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:45.094 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:45.094 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:45.094 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:45.094 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:45.094 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:45.094 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:45.094 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:45.094 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:45.094 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:45.094 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:45.094 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:45.094 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:45.094 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:45.094 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:45.094 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:45.094 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:45.094 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:28:45.094 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:45.094 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:45.094 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
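
For reference, the NIC discovery traced above reduces to a sysfs lookup: each supported PCI function (here the two Intel E810 ports, 0x8086:0x159b) is mapped to its kernel net device by listing /sys/bus/pci/devices/<addr>/net/. A minimal sketch of that step follows; the helper name pci_to_netdevs is illustrative only, and the PCI addresses are the ones reported in this run.

    # Illustrative recap of the sysfs lookup performed by gather_supported_nvmf_pci_devs:
    # print the kernel net device(s) that sit behind a given PCI function.
    pci_to_netdevs() {
        local pci=$1 dev
        for dev in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$dev" ] && basename "$dev"
        done
    }
    pci_to_netdevs 0000:0a:00.0   # -> cvl_0_0 on this host
    pci_to_netdevs 0000:0a:00.1   # -> cvl_0_1 on this host
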
00:28:45.094 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:45.094 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:45.094 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:45.094 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:45.094 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:45.094 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:45.094 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:45.094 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:45.094 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:45.094 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:45.094 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:45.094 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:45.094 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:45.094 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:45.094 11:40:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:45.094 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:45.094 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:45.094 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:45.094 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:45.094 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:45.094 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:45.094 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:45.094 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:45.094 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:45.094 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.318 ms 00:28:45.094 00:28:45.094 --- 10.0.0.2 ping statistics --- 00:28:45.094 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:45.094 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:28:45.094 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:45.094 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:45.094 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:28:45.094 00:28:45.094 --- 10.0.0.1 ping statistics --- 00:28:45.094 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:45.094 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:28:45.094 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:45.094 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:28:45.094 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:45.094 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:45.094 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:45.094 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:45.094 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:45.094 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:45.094 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:45.094 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:28:45.094 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:45.094 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:45.094 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:45.094 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=3911146 00:28:45.094 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:45.094 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 3911146 00:28:45.094 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # '[' -z 3911146 ']' 00:28:45.094 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:45.094 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:45.094 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:45.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:45.094 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:45.094 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:45.094 [2024-11-02 11:40:45.241346] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 
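
The nvmftestinit/nvmf_tcp_init sequence traced above boils down to the commands below: the target-side port is moved into a private network namespace, both ends get addresses on 10.0.0.0/24, TCP port 4420 is opened in iptables, reachability is verified with ping in both directions, and nvmf_tgt is started inside the namespace. This is a condensed sketch, run as root; interface names, addresses and the (abbreviated) binary path are the ones reported in this run.

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator port stays in the root netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                     # root netns -> target side
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # namespace -> initiator side
    modprobe nvme-tcp
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
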
00:28:45.094 [2024-11-02 11:40:45.241434] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:45.094 [2024-11-02 11:40:45.316946] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:45.094 [2024-11-02 11:40:45.365450] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:45.095 [2024-11-02 11:40:45.365513] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:45.095 [2024-11-02 11:40:45.365528] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:45.095 [2024-11-02 11:40:45.365540] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:45.095 [2024-11-02 11:40:45.365551] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:45.095 [2024-11-02 11:40:45.367110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:45.095 [2024-11-02 11:40:45.367163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:45.095 [2024-11-02 11:40:45.367166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:45.095 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:45.095 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@866 -- # return 0 00:28:45.095 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:45.095 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:45.095 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:45.353 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:45.353 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:45.353 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.353 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:45.353 [2024-11-02 11:40:45.515729] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:45.353 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:45.353 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:45.353 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.353 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:45.353 Malloc0 00:28:45.353 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:45.353 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:45.353 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.353 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:28:45.353 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:45.353 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:45.353 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.353 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:45.353 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:45.353 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:45.353 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.353 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:45.353 [2024-11-02 11:40:45.584531] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:45.353 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:45.353 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:45.353 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.353 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:45.353 [2024-11-02 11:40:45.592388] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:45.353 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:45.353 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:45.353 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.353 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:45.353 Malloc1 00:28:45.353 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:45.353 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:28:45.353 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.353 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:45.353 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:45.353 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:28:45.353 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.353 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:45.353 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:45.353 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:45.353 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.353 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:45.353 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:45.353 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:28:45.353 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.353 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:45.353 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:45.353 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3911288 00:28:45.353 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:45.353 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3911288 /var/tmp/bdevperf.sock 00:28:45.353 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # '[' -z 3911288 ']' 00:28:45.353 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:28:45.353 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:45.353 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:45.353 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:45.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
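
The rpc_cmd calls traced above set up two subsystems, each backed by a 64 MB malloc bdev with 512-byte blocks and listening on ports 4420 and 4421, then start bdevperf idle (-z) with its own RPC socket so controllers can be attached to it afterwards. The sketch below expresses the same steps as direct scripts/rpc.py invocations; treating rpc_cmd as a thin wrapper around rpc.py is an assumption about the harness, but the method names and arguments are taken from the trace.

    RPC=./scripts/rpc.py                                   # talks to the target's default RPC socket
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    $RPC bdev_malloc_create 64 512 -b Malloc1
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421
    # bdevperf started idle with its own RPC socket for the attach tests that follow:
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &
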
00:28:45.353 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:45.354 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:45.612 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:45.612 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@866 -- # return 0 00:28:45.612 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:28:45.612 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.612 11:40:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:45.871 NVMe0n1 00:28:45.871 11:40:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:45.871 11:40:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:45.871 11:40:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:28:45.871 11:40:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.871 11:40:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:45.871 11:40:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:45.871 1 00:28:45.871 11:40:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:28:45.871 11:40:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:28:45.871 11:40:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:28:45.871 11:40:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:45.871 11:40:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:45.871 11:40:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:45.871 11:40:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:45.871 11:40:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:28:45.871 11:40:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.871 11:40:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:45.871 request: 00:28:45.871 { 00:28:45.871 "name": "NVMe0", 00:28:45.871 "trtype": "tcp", 00:28:45.871 "traddr": "10.0.0.2", 00:28:45.871 "adrfam": "ipv4", 00:28:45.871 "trsvcid": "4420", 00:28:45.871 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:28:45.871 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:28:45.871 "hostaddr": "10.0.0.1", 00:28:45.871 "prchk_reftag": false, 00:28:45.871 "prchk_guard": false, 00:28:45.871 "hdgst": false, 00:28:45.871 "ddgst": false, 00:28:45.871 "allow_unrecognized_csi": false, 00:28:45.871 "method": "bdev_nvme_attach_controller", 00:28:45.871 "req_id": 1 00:28:45.871 } 00:28:45.871 Got JSON-RPC error response 00:28:45.871 response: 00:28:45.871 { 00:28:45.871 "code": -114, 00:28:45.871 "message": "A controller named NVMe0 already exists with the specified network path" 00:28:45.871 } 00:28:45.871 11:40:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:45.871 11:40:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:28:45.871 11:40:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:45.871 11:40:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:45.871 11:40:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:45.871 11:40:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:28:45.871 11:40:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:28:45.871 11:40:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:28:45.871 11:40:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:45.871 11:40:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:45.871 11:40:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:45.871 11:40:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:45.871 11:40:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:28:45.871 11:40:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.871 11:40:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:45.871 request: 00:28:45.871 { 00:28:45.871 "name": "NVMe0", 00:28:45.871 "trtype": "tcp", 00:28:45.871 "traddr": "10.0.0.2", 00:28:45.871 "adrfam": "ipv4", 00:28:45.871 "trsvcid": "4420", 00:28:45.872 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:45.872 "hostaddr": "10.0.0.1", 00:28:45.872 "prchk_reftag": false, 00:28:45.872 "prchk_guard": false, 00:28:45.872 "hdgst": false, 00:28:45.872 "ddgst": false, 00:28:45.872 "allow_unrecognized_csi": false, 00:28:45.872 "method": "bdev_nvme_attach_controller", 00:28:45.872 "req_id": 1 00:28:45.872 } 00:28:45.872 Got JSON-RPC error response 00:28:45.872 response: 00:28:45.872 { 00:28:45.872 "code": -114, 00:28:45.872 "message": "A controller named NVMe0 already exists with the specified network path" 00:28:45.872 } 00:28:45.872 11:40:46 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:45.872 11:40:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:28:45.872 11:40:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:45.872 11:40:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:45.872 11:40:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:45.872 11:40:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:28:45.872 11:40:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:28:45.872 11:40:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:28:45.872 11:40:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:45.872 11:40:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:45.872 11:40:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:45.872 11:40:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:45.872 11:40:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:28:45.872 11:40:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.872 11:40:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:45.872 request: 00:28:45.872 { 00:28:45.872 "name": "NVMe0", 00:28:45.872 "trtype": "tcp", 00:28:45.872 "traddr": "10.0.0.2", 00:28:45.872 "adrfam": "ipv4", 00:28:45.872 "trsvcid": "4420", 00:28:45.872 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:45.872 "hostaddr": "10.0.0.1", 00:28:45.872 "prchk_reftag": false, 00:28:45.872 "prchk_guard": false, 00:28:45.872 "hdgst": false, 00:28:45.872 "ddgst": false, 00:28:45.872 "multipath": "disable", 00:28:45.872 "allow_unrecognized_csi": false, 00:28:45.872 "method": "bdev_nvme_attach_controller", 00:28:45.872 "req_id": 1 00:28:45.872 } 00:28:45.872 Got JSON-RPC error response 00:28:45.872 response: 00:28:45.872 { 00:28:45.872 "code": -114, 00:28:45.872 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:28:45.872 } 00:28:45.872 11:40:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:45.872 11:40:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:28:45.872 11:40:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:45.872 11:40:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:45.872 11:40:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:45.872 11:40:46 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:28:45.872 11:40:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:28:45.872 11:40:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:28:45.872 11:40:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:45.872 11:40:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:45.872 11:40:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:45.872 11:40:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:45.872 11:40:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:28:45.872 11:40:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.872 11:40:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:45.872 request: 00:28:45.872 { 00:28:45.872 "name": "NVMe0", 00:28:45.872 "trtype": "tcp", 00:28:45.872 "traddr": "10.0.0.2", 00:28:45.872 "adrfam": "ipv4", 00:28:45.872 "trsvcid": "4420", 00:28:45.872 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:45.872 "hostaddr": "10.0.0.1", 00:28:45.872 "prchk_reftag": false, 00:28:45.872 "prchk_guard": false, 00:28:45.872 "hdgst": false, 00:28:45.872 "ddgst": false, 00:28:45.872 "multipath": "failover", 00:28:45.872 "allow_unrecognized_csi": false, 00:28:45.872 "method": "bdev_nvme_attach_controller", 00:28:45.872 "req_id": 1 00:28:45.872 } 00:28:45.872 Got JSON-RPC error response 00:28:45.872 response: 00:28:45.872 { 00:28:45.872 "code": -114, 00:28:45.872 "message": "A controller named NVMe0 already exists with the specified network path" 00:28:45.872 } 00:28:45.872 11:40:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:45.872 11:40:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:28:45.872 11:40:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:45.872 11:40:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:45.872 11:40:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:45.872 11:40:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:45.872 11:40:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.872 11:40:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:46.130 NVMe0n1 00:28:46.130 11:40:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
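The -114 failures at @60-@74 all reuse the controller name NVMe0 with a conflicting host NQN, subsystem, or multipath mode, while the plain attach to port 4421 at @79 is accepted as a second path. A hedged reproduction of that pattern against the same bdevperf socket (names and addresses copied from the trace; the '|| echo' guard is only there to show the failure is expected):

RPC='./scripts/rpc.py -s /var/tmp/bdevperf.sock'

# First path: accepted, exposes bdev NVMe0n1
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
     -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1

# Rejected with -114: same controller name, multipath explicitly disabled
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
     -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable || echo 'expected: multipath disabled'

# Accepted: second path to the same subsystem, port 4421
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
     -n nqn.2016-06.io.spdk:cnode1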
00:28:46.130 11:40:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:46.130 11:40:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:46.130 11:40:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:46.130 11:40:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:46.130 11:40:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:28:46.130 11:40:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:46.130 11:40:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:46.388 00:28:46.388 11:40:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:46.388 11:40:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:46.388 11:40:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:28:46.388 11:40:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:46.388 11:40:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:46.388 11:40:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:46.388 11:40:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:28:46.388 11:40:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:47.323 { 00:28:47.323 "results": [ 00:28:47.323 { 00:28:47.323 "job": "NVMe0n1", 00:28:47.323 "core_mask": "0x1", 00:28:47.323 "workload": "write", 00:28:47.323 "status": "finished", 00:28:47.323 "queue_depth": 128, 00:28:47.323 "io_size": 4096, 00:28:47.323 "runtime": 1.004979, 00:28:47.323 "iops": 18708.848642608453, 00:28:47.323 "mibps": 73.08144001018927, 00:28:47.323 "io_failed": 0, 00:28:47.323 "io_timeout": 0, 00:28:47.323 "avg_latency_us": 6830.744803035137, 00:28:47.323 "min_latency_us": 2572.8948148148147, 00:28:47.323 "max_latency_us": 11990.660740740741 00:28:47.323 } 00:28:47.323 ], 00:28:47.323 "core_count": 1 00:28:47.323 } 00:28:47.323 11:40:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:28:47.323 11:40:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:47.323 11:40:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:47.323 11:40:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:47.323 11:40:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:28:47.323 11:40:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 3911288 00:28:47.323 11:40:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@952 -- # '[' -z 3911288 ']' 00:28:47.323 11:40:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # kill -0 3911288 00:28:47.323 11:40:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # uname 00:28:47.582 11:40:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:47.582 11:40:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3911288 00:28:47.582 11:40:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:47.582 11:40:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:47.582 11:40:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3911288' 00:28:47.582 killing process with pid 3911288 00:28:47.582 11:40:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@971 -- # kill 3911288 00:28:47.582 11:40:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@976 -- # wait 3911288 00:28:47.582 11:40:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:47.582 11:40:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:47.582 11:40:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:47.582 11:40:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:47.582 11:40:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:47.582 11:40:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:47.582 11:40:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:47.582 11:40:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:47.582 11:40:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:28:47.582 11:40:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:47.582 11:40:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:28:47.582 11:40:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:28:47.582 11:40:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:28:47.840 11:40:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:28:47.840 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:28:47.840 [2024-11-02 11:40:45.700677] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 
00:28:47.840 [2024-11-02 11:40:45.700774] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3911288 ] 00:28:47.840 [2024-11-02 11:40:45.769729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:47.840 [2024-11-02 11:40:45.816178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:47.840 [2024-11-02 11:40:46.564804] bdev.c:4691:bdev_name_add: *ERROR*: Bdev name 2dcf4281-d4f7-40e8-97ef-4e858e0fb794 already exists 00:28:47.840 [2024-11-02 11:40:46.564844] bdev.c:7836:bdev_register: *ERROR*: Unable to add uuid:2dcf4281-d4f7-40e8-97ef-4e858e0fb794 alias for bdev NVMe1n1 00:28:47.840 [2024-11-02 11:40:46.564868] bdev_nvme.c:4604:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:28:47.840 Running I/O for 1 seconds... 00:28:47.840 18674.00 IOPS, 72.95 MiB/s 00:28:47.840 Latency(us) 00:28:47.840 [2024-11-02T10:40:48.242Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:47.840 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:28:47.840 NVMe0n1 : 1.00 18708.85 73.08 0.00 0.00 6830.74 2572.89 11990.66 00:28:47.840 [2024-11-02T10:40:48.243Z] =================================================================================================================== 00:28:47.841 [2024-11-02T10:40:48.243Z] Total : 18708.85 73.08 0.00 0.00 6830.74 2572.89 11990.66 00:28:47.841 Received shutdown signal, test time was about 1.000000 seconds 00:28:47.841 00:28:47.841 Latency(us) 00:28:47.841 [2024-11-02T10:40:48.243Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:47.841 [2024-11-02T10:40:48.243Z] =================================================================================================================== 00:28:47.841 [2024-11-02T10:40:48.243Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:47.841 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:28:47.841 11:40:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:47.841 11:40:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:28:47.841 11:40:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:28:47.841 11:40:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:47.841 11:40:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:28:47.841 11:40:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:47.841 11:40:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:28:47.841 11:40:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:47.841 11:40:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:47.841 rmmod nvme_tcp 00:28:47.841 rmmod nvme_fabrics 00:28:47.841 rmmod nvme_keyring 00:28:47.841 11:40:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:47.841 11:40:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:28:47.841 11:40:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:28:47.841 
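The try.txt dump above ends with the bdevperf summary: 18708.85 write IOPS of 4 KiB over a 1.005 s run, reported as 73.08 MiB/s at roughly 6.83 ms average latency. The MiB/s column is simply IOPS times IO size divided by 2^20, which can be cross-checked from the shell (assuming bc is installed):

# 18708.85 IOPS x 4096-byte writes, expressed in MiB/s
echo 'scale=2; 18708.85 * 4096 / (1024 * 1024)' | bc    # prints 73.08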
11:40:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 3911146 ']' 00:28:47.841 11:40:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 3911146 00:28:47.841 11:40:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # '[' -z 3911146 ']' 00:28:47.841 11:40:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # kill -0 3911146 00:28:47.841 11:40:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # uname 00:28:47.841 11:40:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:47.841 11:40:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3911146 00:28:47.841 11:40:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:47.841 11:40:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:28:47.841 11:40:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3911146' 00:28:47.841 killing process with pid 3911146 00:28:47.841 11:40:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@971 -- # kill 3911146 00:28:47.841 11:40:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@976 -- # wait 3911146 00:28:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:28:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:28:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:28:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:48.098 11:40:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:50.000 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:50.000 00:28:50.000 real 0m7.740s 00:28:50.000 user 0m12.009s 00:28:50.000 sys 0m2.428s 00:28:50.000 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:50.000 11:40:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:50.000 ************************************ 00:28:50.000 END TEST nvmf_multicontroller 00:28:50.000 ************************************ 00:28:50.000 11:40:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:28:50.000 11:40:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:28:50.000 11:40:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:50.000 11:40:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.260 ************************************ 00:28:50.260 START TEST nvmf_aer 00:28:50.260 ************************************ 00:28:50.260 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:28:50.260 * Looking for test storage... 00:28:50.260 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:50.260 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:50.260 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lcov --version 00:28:50.260 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:50.260 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:50.260 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:50.260 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:50.260 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:50.260 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:28:50.260 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:28:50.260 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:28:50.260 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:28:50.260 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:28:50.260 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:28:50.260 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:28:50.260 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:50.260 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:28:50.260 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:28:50.260 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:50.260 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:50.260 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:28:50.260 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:28:50.260 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:50.260 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:28:50.260 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:28:50.260 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:28:50.260 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:28:50.260 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:50.260 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:28:50.260 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:28:50.260 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:50.260 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:50.260 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:28:50.260 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:50.260 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:50.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:50.260 --rc genhtml_branch_coverage=1 00:28:50.260 --rc genhtml_function_coverage=1 00:28:50.260 --rc genhtml_legend=1 00:28:50.260 --rc geninfo_all_blocks=1 00:28:50.260 --rc geninfo_unexecuted_blocks=1 00:28:50.260 00:28:50.260 ' 00:28:50.260 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:50.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:50.260 --rc genhtml_branch_coverage=1 00:28:50.260 --rc genhtml_function_coverage=1 00:28:50.261 --rc genhtml_legend=1 00:28:50.261 --rc geninfo_all_blocks=1 00:28:50.261 --rc geninfo_unexecuted_blocks=1 00:28:50.261 00:28:50.261 ' 00:28:50.261 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:50.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:50.261 --rc genhtml_branch_coverage=1 00:28:50.261 --rc genhtml_function_coverage=1 00:28:50.261 --rc genhtml_legend=1 00:28:50.261 --rc geninfo_all_blocks=1 00:28:50.261 --rc geninfo_unexecuted_blocks=1 00:28:50.261 00:28:50.261 ' 00:28:50.261 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:50.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:50.261 --rc genhtml_branch_coverage=1 00:28:50.261 --rc genhtml_function_coverage=1 00:28:50.261 --rc genhtml_legend=1 00:28:50.261 --rc geninfo_all_blocks=1 00:28:50.261 --rc geninfo_unexecuted_blocks=1 00:28:50.261 00:28:50.261 ' 00:28:50.261 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:50.261 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:28:50.261 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:50.261 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:50.261 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:28:50.261 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:50.261 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:50.261 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:50.261 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:50.261 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:50.261 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:50.261 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:50.261 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:50.261 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:50.261 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:50.261 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:50.261 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:50.261 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:50.261 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:50.261 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:28:50.261 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:50.261 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:50.261 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:50.261 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.261 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.261 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.261 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:28:50.261 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.261 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:28:50.261 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:50.261 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:50.261 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:50.261 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:50.261 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:50.261 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:50.261 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:50.261 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:50.261 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:50.261 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:50.261 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:28:50.261 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:50.261 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:50.261 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:50.261 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:50.261 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:50.261 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:50.261 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:50.261 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:50.261 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:50.261 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:28:50.261 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:28:50.261 11:40:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:52.167 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:52.168 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:52.168 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:52.168 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:52.168 11:40:52 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:52.168 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:52.168 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:52.427 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:52.427 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:52.427 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:52.427 
11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:52.427 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:52.427 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:28:52.427 00:28:52.427 --- 10.0.0.2 ping statistics --- 00:28:52.427 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:52.427 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:28:52.427 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:52.427 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:52.427 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:28:52.427 00:28:52.427 --- 10.0.0.1 ping statistics --- 00:28:52.427 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:52.427 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:28:52.427 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:52.427 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:28:52.427 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:52.427 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:52.427 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:52.427 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:52.427 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:52.427 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:52.427 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:52.427 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:28:52.427 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:52.427 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:52.427 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:52.427 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=3913503 00:28:52.427 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:52.427 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 3913503 00:28:52.427 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@833 -- # '[' -z 3913503 ']' 00:28:52.427 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:52.427 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:52.427 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:52.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:52.427 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:52.427 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:52.427 [2024-11-02 11:40:52.680451] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 
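The nvmf_tcp_init steps traced above moved the target NIC into its own namespace and verified connectivity both ways (the 0.239 ms and 0.106 ms pings) before the aer target was started. Condensed from the trace, with the host-specific cvl_0_0 / cvl_0_1 interface names kept as-is (they will differ on other machines):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target-side port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                        # root namespace -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target namespace -> initiator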
00:28:52.427 [2024-11-02 11:40:52.680522] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:52.427 [2024-11-02 11:40:52.756624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:52.427 [2024-11-02 11:40:52.809820] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:52.427 [2024-11-02 11:40:52.809883] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:52.427 [2024-11-02 11:40:52.809915] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:52.427 [2024-11-02 11:40:52.809929] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:52.427 [2024-11-02 11:40:52.809940] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:52.427 [2024-11-02 11:40:52.811660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:52.427 [2024-11-02 11:40:52.811715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:52.427 [2024-11-02 11:40:52.811841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:52.427 [2024-11-02 11:40:52.811843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:52.685 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:52.685 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@866 -- # return 0 00:28:52.685 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:52.685 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:52.685 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:52.685 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:52.685 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:52.685 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:52.685 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:52.685 [2024-11-02 11:40:52.957682] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:52.685 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:52.685 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:28:52.685 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:52.685 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:52.685 Malloc0 00:28:52.685 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:52.685 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:28:52.685 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:52.685 11:40:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:52.685 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:28:52.685 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:52.685 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:52.685 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:52.685 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:52.685 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:52.685 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:52.685 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:52.685 [2024-11-02 11:40:53.019078] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:52.685 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:52.685 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:28:52.685 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:52.685 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:52.685 [ 00:28:52.685 { 00:28:52.685 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:52.685 "subtype": "Discovery", 00:28:52.685 "listen_addresses": [], 00:28:52.685 "allow_any_host": true, 00:28:52.685 "hosts": [] 00:28:52.685 }, 00:28:52.685 { 00:28:52.685 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:52.685 "subtype": "NVMe", 00:28:52.685 "listen_addresses": [ 00:28:52.685 { 00:28:52.685 "trtype": "TCP", 00:28:52.685 "adrfam": "IPv4", 00:28:52.686 "traddr": "10.0.0.2", 00:28:52.686 "trsvcid": "4420" 00:28:52.686 } 00:28:52.686 ], 00:28:52.686 "allow_any_host": true, 00:28:52.686 "hosts": [], 00:28:52.686 "serial_number": "SPDK00000000000001", 00:28:52.686 "model_number": "SPDK bdev Controller", 00:28:52.686 "max_namespaces": 2, 00:28:52.686 "min_cntlid": 1, 00:28:52.686 "max_cntlid": 65519, 00:28:52.686 "namespaces": [ 00:28:52.686 { 00:28:52.686 "nsid": 1, 00:28:52.686 "bdev_name": "Malloc0", 00:28:52.686 "name": "Malloc0", 00:28:52.686 "nguid": "AD67489AAC474E74B0218D4333F22F1C", 00:28:52.686 "uuid": "ad67489a-ac47-4e74-b021-8d4333f22f1c" 00:28:52.686 } 00:28:52.686 ] 00:28:52.686 } 00:28:52.686 ] 00:28:52.686 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:52.686 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:28:52.686 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:28:52.686 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=3913532 00:28:52.686 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:28:52.686 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:28:52.686 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # local i=0 00:28:52.686 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:28:52.686 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 0 -lt 200 ']' 00:28:52.686 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=1 00:28:52.686 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:28:52.944 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:52.944 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 1 -lt 200 ']' 00:28:52.944 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=2 00:28:52.944 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:28:52.944 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:52.944 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 2 -lt 200 ']' 00:28:52.944 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=3 00:28:52.944 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:28:53.204 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:53.204 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1274 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:53.204 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1278 -- # return 0 00:28:53.204 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:28:53.204 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:53.204 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:53.204 Malloc1 00:28:53.204 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:53.204 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:28:53.204 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:53.204 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:53.204 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:53.204 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:28:53.204 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:53.204 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:53.204 [ 00:28:53.204 { 00:28:53.204 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:53.204 "subtype": "Discovery", 00:28:53.204 "listen_addresses": [], 00:28:53.204 "allow_any_host": true, 00:28:53.204 "hosts": [] 00:28:53.204 }, 00:28:53.204 { 00:28:53.204 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:53.204 "subtype": "NVMe", 00:28:53.204 "listen_addresses": [ 00:28:53.204 { 00:28:53.204 "trtype": "TCP", 00:28:53.204 "adrfam": "IPv4", 00:28:53.204 "traddr": "10.0.0.2", 00:28:53.204 "trsvcid": "4420" 00:28:53.204 } 00:28:53.204 ], 00:28:53.204 "allow_any_host": true, 00:28:53.204 "hosts": [], 00:28:53.204 "serial_number": "SPDK00000000000001", 00:28:53.204 "model_number": "SPDK bdev Controller", 00:28:53.204 "max_namespaces": 2, 00:28:53.204 "min_cntlid": 1, 00:28:53.204 "max_cntlid": 65519, 00:28:53.204 "namespaces": [ 00:28:53.204 
{ 00:28:53.204 "nsid": 1, 00:28:53.204 "bdev_name": "Malloc0", 00:28:53.204 "name": "Malloc0", 00:28:53.204 "nguid": "AD67489AAC474E74B0218D4333F22F1C", 00:28:53.204 "uuid": "ad67489a-ac47-4e74-b021-8d4333f22f1c" 00:28:53.204 }, 00:28:53.204 { 00:28:53.204 "nsid": 2, 00:28:53.204 "bdev_name": "Malloc1", 00:28:53.204 "name": "Malloc1", 00:28:53.204 "nguid": "C176A7BAC76B4A87B9DE555DD0F6EB3B", 00:28:53.204 "uuid": "c176a7ba-c76b-4a87-b9de-555dd0f6eb3b" 00:28:53.204 } 00:28:53.204 ] 00:28:53.204 } 00:28:53.204 ] 00:28:53.204 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:53.204 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 3913532 00:28:53.204 Asynchronous Event Request test 00:28:53.204 Attaching to 10.0.0.2 00:28:53.204 Attached to 10.0.0.2 00:28:53.204 Registering asynchronous event callbacks... 00:28:53.204 Starting namespace attribute notice tests for all controllers... 00:28:53.204 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:28:53.204 aer_cb - Changed Namespace 00:28:53.204 Cleaning up... 00:28:53.204 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:28:53.204 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:53.204 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:53.204 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:53.204 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:28:53.204 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:53.204 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:53.204 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:53.204 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:53.204 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:53.204 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:53.204 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:53.204 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:28:53.204 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:28:53.204 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:53.204 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:28:53.204 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:53.204 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:28:53.204 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:53.204 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:53.204 rmmod nvme_tcp 00:28:53.204 rmmod nvme_fabrics 00:28:53.204 rmmod nvme_keyring 00:28:53.204 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:53.204 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:28:53.204 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:28:53.204 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 3913503 ']' 
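The AER exercise traced above reduces to a short RPC sequence. The following is a minimal sketch of the same steps issued directly, assuming `rpc_cmd` in the harness forwards to scripts/rpc.py on the default RPC socket and that paths are relative to the SPDK repository root; every argument string is taken verbatim from the trace.
# Target-side setup for the nvmf_aer flow
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# The aer helper connects over TCP, registers the AER callback, then touches the
# wait file; adding a second namespace while it runs is what triggers the
# namespace-attribute-changed event seen in the "aer_cb - Changed Namespace" output.
test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file &
scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2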
00:28:53.204 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 3913503 00:28:53.204 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@952 -- # '[' -z 3913503 ']' 00:28:53.204 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # kill -0 3913503 00:28:53.204 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # uname 00:28:53.204 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:53.204 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3913503 00:28:53.204 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:53.204 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:53.204 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3913503' 00:28:53.204 killing process with pid 3913503 00:28:53.204 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@971 -- # kill 3913503 00:28:53.204 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@976 -- # wait 3913503 00:28:53.464 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:53.464 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:53.464 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:53.464 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:28:53.464 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:28:53.464 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:53.464 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:28:53.464 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:53.464 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:53.464 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:53.464 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:53.464 11:40:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:56.003 11:40:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:56.003 00:28:56.003 real 0m5.460s 00:28:56.003 user 0m4.749s 00:28:56.003 sys 0m1.846s 00:28:56.003 11:40:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:56.003 11:40:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:56.003 ************************************ 00:28:56.003 END TEST nvmf_aer 00:28:56.003 ************************************ 00:28:56.003 11:40:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:56.004 11:40:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:28:56.004 11:40:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:56.004 11:40:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.004 ************************************ 00:28:56.004 START TEST nvmf_async_init 00:28:56.004 
************************************ 00:28:56.004 11:40:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:56.004 * Looking for test storage... 00:28:56.004 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:56.004 11:40:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:56.004 11:40:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lcov --version 00:28:56.004 11:40:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:56.004 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:56.004 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:56.004 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:56.004 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:56.004 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:28:56.004 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:28:56.004 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:28:56.004 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:28:56.004 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:28:56.004 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:28:56.004 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:28:56.004 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:56.004 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:28:56.004 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:28:56.004 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:56.004 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:56.004 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:28:56.004 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:28:56.004 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:56.004 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:28:56.004 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:28:56.004 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:28:56.004 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:28:56.004 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:56.004 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:28:56.004 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:28:56.004 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:56.004 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:56.004 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:28:56.004 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:56.004 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:56.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:56.004 --rc genhtml_branch_coverage=1 00:28:56.004 --rc genhtml_function_coverage=1 00:28:56.004 --rc genhtml_legend=1 00:28:56.004 --rc geninfo_all_blocks=1 00:28:56.004 --rc geninfo_unexecuted_blocks=1 00:28:56.004 00:28:56.004 ' 00:28:56.004 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:56.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:56.004 --rc genhtml_branch_coverage=1 00:28:56.004 --rc genhtml_function_coverage=1 00:28:56.004 --rc genhtml_legend=1 00:28:56.004 --rc geninfo_all_blocks=1 00:28:56.004 --rc geninfo_unexecuted_blocks=1 00:28:56.004 00:28:56.004 ' 00:28:56.004 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:56.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:56.004 --rc genhtml_branch_coverage=1 00:28:56.004 --rc genhtml_function_coverage=1 00:28:56.004 --rc genhtml_legend=1 00:28:56.004 --rc geninfo_all_blocks=1 00:28:56.004 --rc geninfo_unexecuted_blocks=1 00:28:56.004 00:28:56.004 ' 00:28:56.004 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:56.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:56.004 --rc genhtml_branch_coverage=1 00:28:56.004 --rc genhtml_function_coverage=1 00:28:56.004 --rc genhtml_legend=1 00:28:56.004 --rc geninfo_all_blocks=1 00:28:56.004 --rc geninfo_unexecuted_blocks=1 00:28:56.004 00:28:56.004 ' 00:28:56.004 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:56.004 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:28:56.004 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:56.004 11:40:56 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:56.004 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:56.004 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:56.004 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:56.004 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:56.004 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:56.004 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:56.004 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:56.004 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:56.004 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:56.004 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:56.004 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:56.004 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:56.004 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:56.004 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:56.004 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:56.004 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:28:56.004 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:56.004 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:56.004 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:56.004 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:56.004 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:56.004 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:56.004 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:28:56.004 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:56.004 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:28:56.004 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:56.004 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:56.004 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:56.004 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:56.004 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:56.004 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:56.004 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:56.004 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:56.004 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:56.004 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:56.005 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:28:56.005 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:28:56.005 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:28:56.005 11:40:56 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:28:56.005 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:28:56.005 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:28:56.005 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=8edf3b8a18ad4957a86a79ffc4c612c0 00:28:56.005 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:28:56.005 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:56.005 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:56.005 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:56.005 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:56.005 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:56.005 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:56.005 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:56.005 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:56.005 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:56.005 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:56.005 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:28:56.005 11:40:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:57.905 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:57.905 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:57.905 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:57.905 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:57.905 11:40:58 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:57.905 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:57.905 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.334 ms 00:28:57.905 00:28:57.905 --- 10.0.0.2 ping statistics --- 00:28:57.905 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:57.905 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:57.905 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:57.905 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:28:57.905 00:28:57.905 --- 10.0.0.1 ping statistics --- 00:28:57.905 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:57.905 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:28:57.905 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:57.906 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:28:57.906 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:57.906 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:57.906 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:57.906 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:57.906 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:57.906 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:57.906 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:57.906 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:28:57.906 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:57.906 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:57.906 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:57.906 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=3915590 00:28:57.906 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:28:57.906 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 3915590 00:28:57.906 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@833 -- # '[' -z 3915590 ']' 00:28:57.906 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:57.906 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:57.906 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:57.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:57.906 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:57.906 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:58.164 [2024-11-02 11:40:58.325353] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 
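Before the target app comes up, nvmftestinit splits the two E810 ports between a target network namespace and the host, which is why the pings above go out over the physical link. A condensed sketch of that phy-mode setup, using the interface names and addresses from this run (cvl_0_0/cvl_0_1, 10.0.0.2/10.0.0.1), follows; the iptables comment added by the harness is omitted for brevity.
# One NIC port becomes the target inside a namespace, the other stays on the host as initiator
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side (host)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side (namespace)
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                                                   # host -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target namespace -> host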
00:28:58.164 [2024-11-02 11:40:58.325428] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:58.164 [2024-11-02 11:40:58.397600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:58.164 [2024-11-02 11:40:58.441503] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:58.164 [2024-11-02 11:40:58.441576] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:58.164 [2024-11-02 11:40:58.441599] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:58.164 [2024-11-02 11:40:58.441611] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:58.164 [2024-11-02 11:40:58.441621] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:58.164 [2024-11-02 11:40:58.442210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:58.164 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:58.164 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@866 -- # return 0 00:28:58.164 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:58.164 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:58.164 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:58.423 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:58.423 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:58.423 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:58.423 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:58.423 [2024-11-02 11:40:58.579037] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:58.423 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:58.423 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:28:58.423 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:58.423 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:58.423 null0 00:28:58.423 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:58.423 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:28:58.423 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:58.423 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:58.423 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:58.423 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:28:58.423 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:28:58.423 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:58.423 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:58.423 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 8edf3b8a18ad4957a86a79ffc4c612c0 00:28:58.423 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:58.423 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:58.423 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:58.423 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:58.423 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:58.423 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:58.423 [2024-11-02 11:40:58.619340] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:58.423 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:58.423 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:28:58.423 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:58.423 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:58.681 nvme0n1 00:28:58.681 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:58.681 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:58.681 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:58.681 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:58.681 [ 00:28:58.681 { 00:28:58.681 "name": "nvme0n1", 00:28:58.681 "aliases": [ 00:28:58.681 "8edf3b8a-18ad-4957-a86a-79ffc4c612c0" 00:28:58.681 ], 00:28:58.681 "product_name": "NVMe disk", 00:28:58.681 "block_size": 512, 00:28:58.681 "num_blocks": 2097152, 00:28:58.681 "uuid": "8edf3b8a-18ad-4957-a86a-79ffc4c612c0", 00:28:58.681 "numa_id": 0, 00:28:58.681 "assigned_rate_limits": { 00:28:58.681 "rw_ios_per_sec": 0, 00:28:58.681 "rw_mbytes_per_sec": 0, 00:28:58.681 "r_mbytes_per_sec": 0, 00:28:58.681 "w_mbytes_per_sec": 0 00:28:58.681 }, 00:28:58.681 "claimed": false, 00:28:58.681 "zoned": false, 00:28:58.681 "supported_io_types": { 00:28:58.681 "read": true, 00:28:58.681 "write": true, 00:28:58.681 "unmap": false, 00:28:58.681 "flush": true, 00:28:58.681 "reset": true, 00:28:58.681 "nvme_admin": true, 00:28:58.681 "nvme_io": true, 00:28:58.681 "nvme_io_md": false, 00:28:58.681 "write_zeroes": true, 00:28:58.681 "zcopy": false, 00:28:58.681 "get_zone_info": false, 00:28:58.681 "zone_management": false, 00:28:58.681 "zone_append": false, 00:28:58.681 "compare": true, 00:28:58.681 "compare_and_write": true, 00:28:58.682 "abort": true, 00:28:58.682 "seek_hole": false, 00:28:58.682 "seek_data": false, 00:28:58.682 "copy": true, 00:28:58.682 "nvme_iov_md": false 00:28:58.682 }, 00:28:58.682 
"memory_domains": [ 00:28:58.682 { 00:28:58.682 "dma_device_id": "system", 00:28:58.682 "dma_device_type": 1 00:28:58.682 } 00:28:58.682 ], 00:28:58.682 "driver_specific": { 00:28:58.682 "nvme": [ 00:28:58.682 { 00:28:58.682 "trid": { 00:28:58.682 "trtype": "TCP", 00:28:58.682 "adrfam": "IPv4", 00:28:58.682 "traddr": "10.0.0.2", 00:28:58.682 "trsvcid": "4420", 00:28:58.682 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:58.682 }, 00:28:58.682 "ctrlr_data": { 00:28:58.682 "cntlid": 1, 00:28:58.682 "vendor_id": "0x8086", 00:28:58.682 "model_number": "SPDK bdev Controller", 00:28:58.682 "serial_number": "00000000000000000000", 00:28:58.682 "firmware_revision": "25.01", 00:28:58.682 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:58.682 "oacs": { 00:28:58.682 "security": 0, 00:28:58.682 "format": 0, 00:28:58.682 "firmware": 0, 00:28:58.682 "ns_manage": 0 00:28:58.682 }, 00:28:58.682 "multi_ctrlr": true, 00:28:58.682 "ana_reporting": false 00:28:58.682 }, 00:28:58.682 "vs": { 00:28:58.682 "nvme_version": "1.3" 00:28:58.682 }, 00:28:58.682 "ns_data": { 00:28:58.682 "id": 1, 00:28:58.682 "can_share": true 00:28:58.682 } 00:28:58.682 } 00:28:58.682 ], 00:28:58.682 "mp_policy": "active_passive" 00:28:58.682 } 00:28:58.682 } 00:28:58.682 ] 00:28:58.682 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:58.682 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:28:58.682 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:58.682 11:40:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:58.682 [2024-11-02 11:40:58.871912] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:58.682 [2024-11-02 11:40:58.872011] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x238a4a0 (9): Bad file descriptor 00:28:58.682 [2024-11-02 11:40:59.014440] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:28:58.682 11:40:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:58.682 11:40:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:58.682 11:40:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:58.682 11:40:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:58.682 [ 00:28:58.682 { 00:28:58.682 "name": "nvme0n1", 00:28:58.682 "aliases": [ 00:28:58.682 "8edf3b8a-18ad-4957-a86a-79ffc4c612c0" 00:28:58.682 ], 00:28:58.682 "product_name": "NVMe disk", 00:28:58.682 "block_size": 512, 00:28:58.682 "num_blocks": 2097152, 00:28:58.682 "uuid": "8edf3b8a-18ad-4957-a86a-79ffc4c612c0", 00:28:58.682 "numa_id": 0, 00:28:58.682 "assigned_rate_limits": { 00:28:58.682 "rw_ios_per_sec": 0, 00:28:58.682 "rw_mbytes_per_sec": 0, 00:28:58.682 "r_mbytes_per_sec": 0, 00:28:58.682 "w_mbytes_per_sec": 0 00:28:58.682 }, 00:28:58.682 "claimed": false, 00:28:58.682 "zoned": false, 00:28:58.682 "supported_io_types": { 00:28:58.682 "read": true, 00:28:58.682 "write": true, 00:28:58.682 "unmap": false, 00:28:58.682 "flush": true, 00:28:58.682 "reset": true, 00:28:58.682 "nvme_admin": true, 00:28:58.682 "nvme_io": true, 00:28:58.682 "nvme_io_md": false, 00:28:58.682 "write_zeroes": true, 00:28:58.682 "zcopy": false, 00:28:58.682 "get_zone_info": false, 00:28:58.682 "zone_management": false, 00:28:58.682 "zone_append": false, 00:28:58.682 "compare": true, 00:28:58.682 "compare_and_write": true, 00:28:58.682 "abort": true, 00:28:58.682 "seek_hole": false, 00:28:58.682 "seek_data": false, 00:28:58.682 "copy": true, 00:28:58.682 "nvme_iov_md": false 00:28:58.682 }, 00:28:58.682 "memory_domains": [ 00:28:58.682 { 00:28:58.682 "dma_device_id": "system", 00:28:58.682 "dma_device_type": 1 00:28:58.682 } 00:28:58.682 ], 00:28:58.682 "driver_specific": { 00:28:58.682 "nvme": [ 00:28:58.682 { 00:28:58.682 "trid": { 00:28:58.682 "trtype": "TCP", 00:28:58.682 "adrfam": "IPv4", 00:28:58.682 "traddr": "10.0.0.2", 00:28:58.682 "trsvcid": "4420", 00:28:58.682 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:58.682 }, 00:28:58.682 "ctrlr_data": { 00:28:58.682 "cntlid": 2, 00:28:58.682 "vendor_id": "0x8086", 00:28:58.682 "model_number": "SPDK bdev Controller", 00:28:58.682 "serial_number": "00000000000000000000", 00:28:58.682 "firmware_revision": "25.01", 00:28:58.682 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:58.682 "oacs": { 00:28:58.682 "security": 0, 00:28:58.682 "format": 0, 00:28:58.682 "firmware": 0, 00:28:58.682 "ns_manage": 0 00:28:58.682 }, 00:28:58.682 "multi_ctrlr": true, 00:28:58.682 "ana_reporting": false 00:28:58.682 }, 00:28:58.682 "vs": { 00:28:58.682 "nvme_version": "1.3" 00:28:58.682 }, 00:28:58.682 "ns_data": { 00:28:58.682 "id": 1, 00:28:58.682 "can_share": true 00:28:58.682 } 00:28:58.682 } 00:28:58.682 ], 00:28:58.682 "mp_policy": "active_passive" 00:28:58.682 } 00:28:58.682 } 00:28:58.682 ] 00:28:58.682 11:40:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:58.682 11:40:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:58.682 11:40:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:58.682 11:40:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:58.682 11:40:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
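The plain-TCP leg of async_init that the two bdev_get_bdevs dumps above verify can be summarized as the sketch below: export a null bdev with a fixed NGUID, attach to it through the same SPDK application's NVMe host stack, reset, and confirm that the reconnect allocated a new controller ID (cntlid 1 before the reset, 2 after). As before, scripts/rpc.py standing in for rpc_cmd and the already-created tcp transport are assumptions about the harness.
scripts/rpc.py bdev_null_create null0 1024 512
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 8edf3b8a18ad4957a86a79ffc4c612c0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
scripts/rpc.py bdev_get_bdevs -b nvme0n1          # reports "cntlid": 1
scripts/rpc.py bdev_nvme_reset_controller nvme0
scripts/rpc.py bdev_get_bdevs -b nvme0n1          # reports "cntlid": 2 after the reconnect
scripts/rpc.py bdev_nvme_detach_controller nvme0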
00:28:58.682 11:40:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:28:58.682 11:40:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.an5CbPsqte 00:28:58.682 11:40:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:28:58.682 11:40:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.an5CbPsqte 00:28:58.682 11:40:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.an5CbPsqte 00:28:58.682 11:40:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:58.682 11:40:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:58.682 11:40:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:58.682 11:40:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:28:58.682 11:40:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:58.682 11:40:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:58.682 11:40:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:58.682 11:40:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:28:58.682 11:40:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:58.682 11:40:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:58.682 [2024-11-02 11:40:59.076602] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:58.682 [2024-11-02 11:40:59.076758] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:58.682 11:40:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:58.682 11:40:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:28:58.682 11:40:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:58.682 11:40:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:58.941 11:40:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:58.941 11:40:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:28:58.941 11:40:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:58.941 11:40:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:58.942 [2024-11-02 11:40:59.092649] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:58.942 nvme0n1 00:28:58.942 11:40:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:58.942 11:40:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:28:58.942 11:40:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:58.942 11:40:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:58.942 [ 00:28:58.942 { 00:28:58.942 "name": "nvme0n1", 00:28:58.942 "aliases": [ 00:28:58.942 "8edf3b8a-18ad-4957-a86a-79ffc4c612c0" 00:28:58.942 ], 00:28:58.942 "product_name": "NVMe disk", 00:28:58.942 "block_size": 512, 00:28:58.942 "num_blocks": 2097152, 00:28:58.942 "uuid": "8edf3b8a-18ad-4957-a86a-79ffc4c612c0", 00:28:58.942 "numa_id": 0, 00:28:58.942 "assigned_rate_limits": { 00:28:58.942 "rw_ios_per_sec": 0, 00:28:58.942 "rw_mbytes_per_sec": 0, 00:28:58.942 "r_mbytes_per_sec": 0, 00:28:58.942 "w_mbytes_per_sec": 0 00:28:58.942 }, 00:28:58.942 "claimed": false, 00:28:58.942 "zoned": false, 00:28:58.942 "supported_io_types": { 00:28:58.942 "read": true, 00:28:58.942 "write": true, 00:28:58.942 "unmap": false, 00:28:58.942 "flush": true, 00:28:58.942 "reset": true, 00:28:58.942 "nvme_admin": true, 00:28:58.942 "nvme_io": true, 00:28:58.942 "nvme_io_md": false, 00:28:58.942 "write_zeroes": true, 00:28:58.942 "zcopy": false, 00:28:58.942 "get_zone_info": false, 00:28:58.942 "zone_management": false, 00:28:58.942 "zone_append": false, 00:28:58.942 "compare": true, 00:28:58.942 "compare_and_write": true, 00:28:58.942 "abort": true, 00:28:58.942 "seek_hole": false, 00:28:58.942 "seek_data": false, 00:28:58.942 "copy": true, 00:28:58.942 "nvme_iov_md": false 00:28:58.942 }, 00:28:58.942 "memory_domains": [ 00:28:58.942 { 00:28:58.942 "dma_device_id": "system", 00:28:58.942 "dma_device_type": 1 00:28:58.942 } 00:28:58.942 ], 00:28:58.942 "driver_specific": { 00:28:58.942 "nvme": [ 00:28:58.942 { 00:28:58.942 "trid": { 00:28:58.942 "trtype": "TCP", 00:28:58.942 "adrfam": "IPv4", 00:28:58.942 "traddr": "10.0.0.2", 00:28:58.942 "trsvcid": "4421", 00:28:58.942 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:58.942 }, 00:28:58.942 "ctrlr_data": { 00:28:58.942 "cntlid": 3, 00:28:58.942 "vendor_id": "0x8086", 00:28:58.942 "model_number": "SPDK bdev Controller", 00:28:58.942 "serial_number": "00000000000000000000", 00:28:58.942 "firmware_revision": "25.01", 00:28:58.942 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:58.942 "oacs": { 00:28:58.942 "security": 0, 00:28:58.942 "format": 0, 00:28:58.942 "firmware": 0, 00:28:58.942 "ns_manage": 0 00:28:58.942 }, 00:28:58.942 "multi_ctrlr": true, 00:28:58.942 "ana_reporting": false 00:28:58.942 }, 00:28:58.942 "vs": { 00:28:58.942 "nvme_version": "1.3" 00:28:58.942 }, 00:28:58.942 "ns_data": { 00:28:58.942 "id": 1, 00:28:58.942 "can_share": true 00:28:58.942 } 00:28:58.942 } 00:28:58.942 ], 00:28:58.942 "mp_policy": "active_passive" 00:28:58.942 } 00:28:58.942 } 00:28:58.942 ] 00:28:58.942 11:40:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:58.942 11:40:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:58.942 11:40:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:58.942 11:40:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:58.942 11:40:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:58.942 11:40:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.an5CbPsqte 00:28:58.942 11:40:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
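The TLS leg just traced (cntlid 3, port 4421) follows the same pattern with a pre-shared key: register the PSK in the keyring, restrict the subsystem to a named host, open a --secure-channel listener, and reattach with the same key on the host side. Both the listener and the attach print "TLS support is considered experimental" notices, as seen above. A condensed sketch with the key material and NQNs from this run (the temp-file name was /tmp/tmp.an5CbPsqte here; mktemp is used to stand in for it):
KEY_FILE=$(mktemp)
echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$KEY_FILE"
chmod 0600 "$KEY_FILE"
scripts/rpc.py keyring_file_add_key key0 "$KEY_FILE"
scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0
scripts/rpc.py bdev_nvme_detach_controller nvme0
rm -f "$KEY_FILE"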
00:28:58.942 11:40:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:28:58.942 11:40:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:58.942 11:40:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:28:58.942 11:40:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:58.942 11:40:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:28:58.942 11:40:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:58.942 11:40:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:58.942 rmmod nvme_tcp 00:28:58.942 rmmod nvme_fabrics 00:28:58.942 rmmod nvme_keyring 00:28:58.942 11:40:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:58.942 11:40:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:28:58.942 11:40:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:28:58.942 11:40:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 3915590 ']' 00:28:58.942 11:40:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 3915590 00:28:58.942 11:40:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@952 -- # '[' -z 3915590 ']' 00:28:58.942 11:40:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # kill -0 3915590 00:28:58.942 11:40:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # uname 00:28:58.942 11:40:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:58.942 11:40:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3915590 00:28:58.942 11:40:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:58.942 11:40:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:58.942 11:40:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3915590' 00:28:58.942 killing process with pid 3915590 00:28:58.942 11:40:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@971 -- # kill 3915590 00:28:58.942 11:40:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@976 -- # wait 3915590 00:28:59.202 11:40:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:59.202 11:40:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:59.202 11:40:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:59.202 11:40:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:28:59.202 11:40:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:28:59.202 11:40:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:59.202 11:40:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:28:59.202 11:40:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:59.202 11:40:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:59.202 11:40:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:28:59.202 11:40:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:59.202 11:40:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:01.738 11:41:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:01.738 00:29:01.738 real 0m5.651s 00:29:01.738 user 0m2.147s 00:29:01.738 sys 0m1.920s 00:29:01.738 11:41:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:01.738 11:41:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:01.738 ************************************ 00:29:01.738 END TEST nvmf_async_init 00:29:01.738 ************************************ 00:29:01.738 11:41:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:01.738 11:41:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:29:01.738 11:41:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:01.738 11:41:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.738 ************************************ 00:29:01.738 START TEST dma 00:29:01.738 ************************************ 00:29:01.738 11:41:01 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:01.738 * Looking for test storage... 00:29:01.738 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:01.738 11:41:01 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:01.738 11:41:01 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lcov --version 00:29:01.738 11:41:01 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:01.738 11:41:01 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:01.738 11:41:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:01.738 11:41:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:01.738 11:41:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:01.738 11:41:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:29:01.738 11:41:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:29:01.738 11:41:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:29:01.738 11:41:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:29:01.738 11:41:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:29:01.738 11:41:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:29:01.738 11:41:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:29:01.738 11:41:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:01.738 11:41:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:29:01.738 11:41:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:29:01.738 11:41:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:01.738 11:41:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:01.738 11:41:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:29:01.738 11:41:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:29:01.738 11:41:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:01.738 11:41:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:29:01.738 11:41:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:29:01.738 11:41:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:29:01.738 11:41:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:29:01.738 11:41:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:01.738 11:41:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:29:01.738 11:41:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:29:01.738 11:41:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:01.738 11:41:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:01.738 11:41:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:29:01.738 11:41:01 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:01.738 11:41:01 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:01.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:01.738 --rc genhtml_branch_coverage=1 00:29:01.738 --rc genhtml_function_coverage=1 00:29:01.738 --rc genhtml_legend=1 00:29:01.738 --rc geninfo_all_blocks=1 00:29:01.738 --rc geninfo_unexecuted_blocks=1 00:29:01.738 00:29:01.738 ' 00:29:01.738 11:41:01 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:01.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:01.738 --rc genhtml_branch_coverage=1 00:29:01.738 --rc genhtml_function_coverage=1 00:29:01.738 --rc genhtml_legend=1 00:29:01.738 --rc geninfo_all_blocks=1 00:29:01.738 --rc geninfo_unexecuted_blocks=1 00:29:01.738 00:29:01.738 ' 00:29:01.738 11:41:01 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:01.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:01.738 --rc genhtml_branch_coverage=1 00:29:01.738 --rc genhtml_function_coverage=1 00:29:01.738 --rc genhtml_legend=1 00:29:01.738 --rc geninfo_all_blocks=1 00:29:01.738 --rc geninfo_unexecuted_blocks=1 00:29:01.738 00:29:01.738 ' 00:29:01.738 11:41:01 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:01.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:01.738 --rc genhtml_branch_coverage=1 00:29:01.738 --rc genhtml_function_coverage=1 00:29:01.738 --rc genhtml_legend=1 00:29:01.738 --rc geninfo_all_blocks=1 00:29:01.738 --rc geninfo_unexecuted_blocks=1 00:29:01.738 00:29:01.738 ' 00:29:01.738 11:41:01 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:01.738 11:41:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:29:01.738 11:41:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:01.738 11:41:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:01.738 11:41:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:01.738 11:41:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:01.738 
11:41:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:01.738 11:41:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:01.738 11:41:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:01.738 11:41:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:01.738 11:41:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:01.738 11:41:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:01.738 11:41:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:01.738 11:41:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:01.738 11:41:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:01.739 11:41:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:01.739 11:41:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:01.739 11:41:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:01.739 11:41:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:01.739 11:41:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:29:01.739 11:41:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:01.739 11:41:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:01.739 11:41:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:01.739 11:41:01 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.739 11:41:01 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.739 11:41:01 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.739 11:41:01 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:29:01.739 11:41:01 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.739 11:41:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:29:01.739 11:41:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:01.739 11:41:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:01.739 11:41:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:01.739 11:41:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:01.739 11:41:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:01.739 11:41:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:01.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:01.739 11:41:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:01.739 11:41:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:01.739 11:41:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:01.739 11:41:01 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:29:01.739 11:41:01 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:29:01.739 00:29:01.739 real 0m0.163s 00:29:01.739 user 0m0.108s 00:29:01.739 sys 0m0.064s 00:29:01.739 11:41:01 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:01.739 11:41:01 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:29:01.739 ************************************ 00:29:01.739 END TEST dma 00:29:01.739 ************************************ 00:29:01.739 11:41:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:01.739 11:41:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:29:01.739 11:41:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:01.739 11:41:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.739 ************************************ 00:29:01.739 START TEST nvmf_identify 00:29:01.739 
************************************ 00:29:01.739 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:01.739 * Looking for test storage... 00:29:01.739 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:01.739 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:01.739 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lcov --version 00:29:01.739 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:01.739 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:01.739 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:01.739 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:01.739 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:01.739 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:29:01.739 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:29:01.739 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:29:01.739 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:29:01.739 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:29:01.739 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:29:01.739 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:29:01.739 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:01.739 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:29:01.739 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:29:01.739 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:01.739 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:01.739 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:29:01.739 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:29:01.739 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:01.739 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:29:01.739 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:29:01.739 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:29:01.739 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:29:01.739 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:01.739 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:29:01.739 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:29:01.739 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:01.739 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:01.739 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:29:01.739 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:01.739 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:01.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:01.739 --rc genhtml_branch_coverage=1 00:29:01.739 --rc genhtml_function_coverage=1 00:29:01.739 --rc genhtml_legend=1 00:29:01.739 --rc geninfo_all_blocks=1 00:29:01.739 --rc geninfo_unexecuted_blocks=1 00:29:01.739 00:29:01.739 ' 00:29:01.739 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:01.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:01.739 --rc genhtml_branch_coverage=1 00:29:01.739 --rc genhtml_function_coverage=1 00:29:01.739 --rc genhtml_legend=1 00:29:01.739 --rc geninfo_all_blocks=1 00:29:01.739 --rc geninfo_unexecuted_blocks=1 00:29:01.739 00:29:01.739 ' 00:29:01.739 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:01.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:01.739 --rc genhtml_branch_coverage=1 00:29:01.739 --rc genhtml_function_coverage=1 00:29:01.739 --rc genhtml_legend=1 00:29:01.739 --rc geninfo_all_blocks=1 00:29:01.739 --rc geninfo_unexecuted_blocks=1 00:29:01.739 00:29:01.739 ' 00:29:01.739 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:01.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:01.739 --rc genhtml_branch_coverage=1 00:29:01.739 --rc genhtml_function_coverage=1 00:29:01.739 --rc genhtml_legend=1 00:29:01.739 --rc geninfo_all_blocks=1 00:29:01.739 --rc geninfo_unexecuted_blocks=1 00:29:01.739 00:29:01.739 ' 00:29:01.739 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:01.739 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:29:01.739 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:01.739 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:01.739 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:01.739 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:01.739 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:01.739 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:01.739 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:01.739 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:01.739 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:01.740 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:01.740 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:01.740 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:01.740 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:01.740 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:01.740 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:01.740 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:01.740 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:01.740 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:29:01.740 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:01.740 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:01.740 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:01.740 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.740 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.740 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.740 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:29:01.740 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.740 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:29:01.740 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:01.740 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:01.740 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:01.740 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:01.740 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:01.740 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:01.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:01.740 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:01.740 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:01.740 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:01.740 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:01.740 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:01.740 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:29:01.740 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:01.740 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:01.740 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:01.740 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:01.740 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:01.740 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:01.740 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:01.740 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:01.740 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:01.740 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:01.740 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:29:01.740 11:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:04.271 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:04.271 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:04.271 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:04.271 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:04.271 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:04.271 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.162 ms 00:29:04.271 00:29:04.271 --- 10.0.0.2 ping statistics --- 00:29:04.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:04.271 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:04.271 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:04.271 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.095 ms 00:29:04.271 00:29:04.271 --- 10.0.0.1 ping statistics --- 00:29:04.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:04.271 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:29:04.271 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:04.272 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:29:04.272 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:04.272 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:04.272 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:04.272 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:04.272 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:04.272 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:04.272 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:04.272 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:29:04.272 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:04.272 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:04.272 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3917734 00:29:04.272 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:04.272 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:04.272 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3917734 00:29:04.272 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@833 -- # '[' -z 3917734 ']' 00:29:04.272 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:04.272 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:04.272 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:04.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:04.272 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:04.272 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:04.272 [2024-11-02 11:41:04.278732] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 
00:29:04.272 [2024-11-02 11:41:04.278821] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:04.272 [2024-11-02 11:41:04.371513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:04.272 [2024-11-02 11:41:04.424840] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:04.272 [2024-11-02 11:41:04.424902] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:04.272 [2024-11-02 11:41:04.424936] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:04.272 [2024-11-02 11:41:04.424959] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:04.272 [2024-11-02 11:41:04.424988] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:04.272 [2024-11-02 11:41:04.427076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:04.272 [2024-11-02 11:41:04.427140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:04.272 [2024-11-02 11:41:04.427205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:04.272 [2024-11-02 11:41:04.427212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:04.272 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:04.272 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@866 -- # return 0 00:29:04.272 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:04.272 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:04.272 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:04.272 [2024-11-02 11:41:04.661378] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:04.272 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:04.272 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:29:04.272 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:04.272 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:04.553 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:04.553 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:04.553 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:04.553 Malloc0 00:29:04.553 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:04.553 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:04.553 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:04.553 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:04.553 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:04.553 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:29:04.553 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:04.553 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:04.553 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:04.553 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:04.553 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:04.553 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:04.553 [2024-11-02 11:41:04.747083] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:04.553 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:04.553 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:04.553 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:04.553 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:04.553 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:04.553 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:29:04.553 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:04.553 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:04.553 [ 00:29:04.553 { 00:29:04.553 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:04.553 "subtype": "Discovery", 00:29:04.553 "listen_addresses": [ 00:29:04.553 { 00:29:04.553 "trtype": "TCP", 00:29:04.553 "adrfam": "IPv4", 00:29:04.553 "traddr": "10.0.0.2", 00:29:04.553 "trsvcid": "4420" 00:29:04.553 } 00:29:04.553 ], 00:29:04.553 "allow_any_host": true, 00:29:04.553 "hosts": [] 00:29:04.553 }, 00:29:04.553 { 00:29:04.553 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:04.553 "subtype": "NVMe", 00:29:04.553 "listen_addresses": [ 00:29:04.553 { 00:29:04.553 "trtype": "TCP", 00:29:04.553 "adrfam": "IPv4", 00:29:04.553 "traddr": "10.0.0.2", 00:29:04.553 "trsvcid": "4420" 00:29:04.553 } 00:29:04.553 ], 00:29:04.553 "allow_any_host": true, 00:29:04.553 "hosts": [], 00:29:04.553 "serial_number": "SPDK00000000000001", 00:29:04.553 "model_number": "SPDK bdev Controller", 00:29:04.553 "max_namespaces": 32, 00:29:04.553 "min_cntlid": 1, 00:29:04.553 "max_cntlid": 65519, 00:29:04.553 "namespaces": [ 00:29:04.553 { 00:29:04.553 "nsid": 1, 00:29:04.553 "bdev_name": "Malloc0", 00:29:04.553 "name": "Malloc0", 00:29:04.553 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:29:04.553 "eui64": "ABCDEF0123456789", 00:29:04.553 "uuid": "b98313d0-12fd-4058-8cc6-4d474fcf5474" 00:29:04.553 } 00:29:04.553 ] 00:29:04.553 } 00:29:04.553 ] 00:29:04.553 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:04.553 11:41:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:29:04.553 [2024-11-02 11:41:04.786124] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:29:04.553 [2024-11-02 11:41:04.786160] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3917772 ] 00:29:04.553 [2024-11-02 11:41:04.837392] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:29:04.553 [2024-11-02 11:41:04.837460] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:04.553 [2024-11-02 11:41:04.837475] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:04.553 [2024-11-02 11:41:04.837492] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:04.553 [2024-11-02 11:41:04.837506] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:29:04.553 [2024-11-02 11:41:04.841718] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:29:04.553 [2024-11-02 11:41:04.841783] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x20a7d80 0 00:29:04.553 [2024-11-02 11:41:04.841953] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:29:04.553 [2024-11-02 11:41:04.841975] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:29:04.553 [2024-11-02 11:41:04.841984] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:29:04.553 [2024-11-02 11:41:04.841990] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:29:04.553 [2024-11-02 11:41:04.842025] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:04.553 [2024-11-02 11:41:04.842038] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:04.553 [2024-11-02 11:41:04.842045] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20a7d80) 00:29:04.553 [2024-11-02 11:41:04.842063] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:04.553 [2024-11-02 11:41:04.842088] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2113480, cid 0, qid 0 00:29:04.553 [2024-11-02 11:41:04.849271] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:04.553 [2024-11-02 11:41:04.849290] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:04.553 [2024-11-02 11:41:04.849306] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:04.553 [2024-11-02 11:41:04.849314] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2113480) on tqpair=0x20a7d80 00:29:04.553 [2024-11-02 11:41:04.849334] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:04.553 [2024-11-02 11:41:04.849346] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:29:04.553 [2024-11-02 11:41:04.849355] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:29:04.553 [2024-11-02 11:41:04.849376] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:04.553 [2024-11-02 11:41:04.849386] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:04.553 [2024-11-02 11:41:04.849392] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20a7d80) 00:29:04.553 [2024-11-02 11:41:04.849403] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.553 [2024-11-02 11:41:04.849429] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2113480, cid 0, qid 0 00:29:04.553 [2024-11-02 11:41:04.849568] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:04.553 [2024-11-02 11:41:04.849584] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:04.553 [2024-11-02 11:41:04.849591] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:04.553 [2024-11-02 11:41:04.849597] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2113480) on tqpair=0x20a7d80 00:29:04.553 [2024-11-02 11:41:04.849607] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:29:04.553 [2024-11-02 11:41:04.849620] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:29:04.553 [2024-11-02 11:41:04.849632] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:04.553 [2024-11-02 11:41:04.849639] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:04.553 [2024-11-02 11:41:04.849646] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20a7d80) 00:29:04.553 [2024-11-02 11:41:04.849656] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.553 [2024-11-02 11:41:04.849682] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2113480, cid 0, qid 0 00:29:04.553 [2024-11-02 11:41:04.849802] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:04.554 [2024-11-02 11:41:04.849817] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:04.554 [2024-11-02 11:41:04.849824] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:04.554 [2024-11-02 11:41:04.849830] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2113480) on tqpair=0x20a7d80 00:29:04.554 [2024-11-02 11:41:04.849839] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:29:04.554 [2024-11-02 11:41:04.849853] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:29:04.554 [2024-11-02 11:41:04.849866] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:04.554 [2024-11-02 11:41:04.849873] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:04.554 [2024-11-02 11:41:04.849880] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20a7d80) 00:29:04.554 [2024-11-02 11:41:04.849890] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.554 [2024-11-02 11:41:04.849911] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2113480, cid 0, qid 0 
00:29:04.554 [2024-11-02 11:41:04.850026] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:04.554 [2024-11-02 11:41:04.850038] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:04.554 [2024-11-02 11:41:04.850044] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:04.554 [2024-11-02 11:41:04.850051] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2113480) on tqpair=0x20a7d80 00:29:04.554 [2024-11-02 11:41:04.850060] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:04.554 [2024-11-02 11:41:04.850081] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:04.554 [2024-11-02 11:41:04.850091] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:04.554 [2024-11-02 11:41:04.850097] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20a7d80) 00:29:04.554 [2024-11-02 11:41:04.850108] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.554 [2024-11-02 11:41:04.850128] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2113480, cid 0, qid 0 00:29:04.554 [2024-11-02 11:41:04.850248] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:04.554 [2024-11-02 11:41:04.850272] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:04.554 [2024-11-02 11:41:04.850280] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:04.554 [2024-11-02 11:41:04.850287] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2113480) on tqpair=0x20a7d80 00:29:04.554 [2024-11-02 11:41:04.850301] nvme_ctrlr.c:3870:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:29:04.554 [2024-11-02 11:41:04.850310] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:29:04.554 [2024-11-02 11:41:04.850323] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:04.554 [2024-11-02 11:41:04.850433] nvme_ctrlr.c:4068:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:29:04.554 [2024-11-02 11:41:04.850441] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:29:04.554 [2024-11-02 11:41:04.850455] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:04.554 [2024-11-02 11:41:04.850467] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:04.554 [2024-11-02 11:41:04.850474] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20a7d80) 00:29:04.554 [2024-11-02 11:41:04.850484] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.554 [2024-11-02 11:41:04.850506] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2113480, cid 0, qid 0 00:29:04.554 [2024-11-02 11:41:04.850635] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:04.554 [2024-11-02 11:41:04.850647] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:04.554 [2024-11-02 11:41:04.850654] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:04.554 [2024-11-02 11:41:04.850660] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2113480) on tqpair=0x20a7d80 00:29:04.554 [2024-11-02 11:41:04.850669] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:04.554 [2024-11-02 11:41:04.850684] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:04.554 [2024-11-02 11:41:04.850693] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:04.554 [2024-11-02 11:41:04.850699] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20a7d80) 00:29:04.554 [2024-11-02 11:41:04.850710] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.554 [2024-11-02 11:41:04.850730] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2113480, cid 0, qid 0 00:29:04.554 [2024-11-02 11:41:04.850849] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:04.554 [2024-11-02 11:41:04.850864] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:04.554 [2024-11-02 11:41:04.850870] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:04.554 [2024-11-02 11:41:04.850877] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2113480) on tqpair=0x20a7d80 00:29:04.554 [2024-11-02 11:41:04.850885] nvme_ctrlr.c:3905:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:04.554 [2024-11-02 11:41:04.850894] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:29:04.554 [2024-11-02 11:41:04.850907] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:29:04.554 [2024-11-02 11:41:04.850924] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:29:04.554 [2024-11-02 11:41:04.850940] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:04.554 [2024-11-02 11:41:04.850948] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20a7d80) 00:29:04.554 [2024-11-02 11:41:04.850959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.554 [2024-11-02 11:41:04.850979] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2113480, cid 0, qid 0 00:29:04.554 [2024-11-02 11:41:04.851138] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:04.554 [2024-11-02 11:41:04.851155] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:04.554 [2024-11-02 11:41:04.851162] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:04.554 [2024-11-02 11:41:04.851169] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20a7d80): datao=0, datal=4096, cccid=0 00:29:04.554 [2024-11-02 11:41:04.851176] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x2113480) on tqpair(0x20a7d80): expected_datao=0, payload_size=4096 00:29:04.554 [2024-11-02 11:41:04.851184] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:04.554 [2024-11-02 11:41:04.851195] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:04.554 [2024-11-02 11:41:04.851207] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:04.554 [2024-11-02 11:41:04.851220] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:04.554 [2024-11-02 11:41:04.851230] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:04.554 [2024-11-02 11:41:04.851237] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:04.554 [2024-11-02 11:41:04.851243] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2113480) on tqpair=0x20a7d80 00:29:04.554 [2024-11-02 11:41:04.851263] nvme_ctrlr.c:2054:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:29:04.554 [2024-11-02 11:41:04.851274] nvme_ctrlr.c:2058:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:29:04.554 [2024-11-02 11:41:04.851281] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:29:04.554 [2024-11-02 11:41:04.851290] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:29:04.554 [2024-11-02 11:41:04.851305] nvme_ctrlr.c:2100:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:29:04.554 [2024-11-02 11:41:04.851313] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:29:04.554 [2024-11-02 11:41:04.851327] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:29:04.554 [2024-11-02 11:41:04.851340] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:04.554 [2024-11-02 11:41:04.851347] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:04.554 [2024-11-02 11:41:04.851354] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20a7d80) 00:29:04.554 [2024-11-02 11:41:04.851365] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:04.554 [2024-11-02 11:41:04.851387] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2113480, cid 0, qid 0 00:29:04.554 [2024-11-02 11:41:04.851512] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:04.554 [2024-11-02 11:41:04.851524] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:04.554 [2024-11-02 11:41:04.851530] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:04.554 [2024-11-02 11:41:04.851537] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2113480) on tqpair=0x20a7d80 00:29:04.554 [2024-11-02 11:41:04.851561] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:04.554 [2024-11-02 11:41:04.851569] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:04.554 [2024-11-02 11:41:04.851576] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20a7d80) 00:29:04.555 
[2024-11-02 11:41:04.851586] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:04.555 [2024-11-02 11:41:04.851596] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:04.555 [2024-11-02 11:41:04.851602] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:04.555 [2024-11-02 11:41:04.851608] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x20a7d80) 00:29:04.555 [2024-11-02 11:41:04.851617] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:04.555 [2024-11-02 11:41:04.851627] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:04.555 [2024-11-02 11:41:04.851634] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:04.555 [2024-11-02 11:41:04.851640] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x20a7d80) 00:29:04.555 [2024-11-02 11:41:04.851648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:04.555 [2024-11-02 11:41:04.851661] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:04.555 [2024-11-02 11:41:04.851669] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:04.555 [2024-11-02 11:41:04.851675] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20a7d80) 00:29:04.555 [2024-11-02 11:41:04.851684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:04.555 [2024-11-02 11:41:04.851693] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:29:04.555 [2024-11-02 11:41:04.851707] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:29:04.555 [2024-11-02 11:41:04.851719] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:04.555 [2024-11-02 11:41:04.851726] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20a7d80) 00:29:04.555 [2024-11-02 11:41:04.851736] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.555 [2024-11-02 11:41:04.851758] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2113480, cid 0, qid 0 00:29:04.555 [2024-11-02 11:41:04.851784] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2113600, cid 1, qid 0 00:29:04.555 [2024-11-02 11:41:04.851792] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2113780, cid 2, qid 0 00:29:04.555 [2024-11-02 11:41:04.851800] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2113900, cid 3, qid 0 00:29:04.555 [2024-11-02 11:41:04.851807] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2113a80, cid 4, qid 0 00:29:04.555 [2024-11-02 11:41:04.851970] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:04.555 [2024-11-02 11:41:04.851983] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:04.555 [2024-11-02 11:41:04.851989] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:29:04.555 [2024-11-02 11:41:04.851996] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2113a80) on tqpair=0x20a7d80 00:29:04.555 [2024-11-02 11:41:04.852009] nvme_ctrlr.c:3023:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:29:04.555 [2024-11-02 11:41:04.852020] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:29:04.555 [2024-11-02 11:41:04.852037] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:04.555 [2024-11-02 11:41:04.852046] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20a7d80) 00:29:04.555 [2024-11-02 11:41:04.852056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.555 [2024-11-02 11:41:04.852077] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2113a80, cid 4, qid 0 00:29:04.555 [2024-11-02 11:41:04.852214] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:04.555 [2024-11-02 11:41:04.852229] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:04.555 [2024-11-02 11:41:04.852236] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:04.555 [2024-11-02 11:41:04.852242] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20a7d80): datao=0, datal=4096, cccid=4 00:29:04.555 [2024-11-02 11:41:04.852250] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2113a80) on tqpair(0x20a7d80): expected_datao=0, payload_size=4096 00:29:04.555 [2024-11-02 11:41:04.852265] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:04.555 [2024-11-02 11:41:04.852283] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:04.555 [2024-11-02 11:41:04.852291] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:04.555 [2024-11-02 11:41:04.895272] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:04.555 [2024-11-02 11:41:04.895297] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:04.555 [2024-11-02 11:41:04.895305] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:04.555 [2024-11-02 11:41:04.895312] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2113a80) on tqpair=0x20a7d80 00:29:04.555 [2024-11-02 11:41:04.895332] nvme_ctrlr.c:4166:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:29:04.555 [2024-11-02 11:41:04.895372] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:04.555 [2024-11-02 11:41:04.895383] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20a7d80) 00:29:04.555 [2024-11-02 11:41:04.895394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.555 [2024-11-02 11:41:04.895406] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:04.555 [2024-11-02 11:41:04.895414] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:04.555 [2024-11-02 11:41:04.895420] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x20a7d80) 00:29:04.555 [2024-11-02 11:41:04.895429] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:04.555 [2024-11-02 11:41:04.895458] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2113a80, cid 4, qid 0 00:29:04.555 [2024-11-02 11:41:04.895470] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2113c00, cid 5, qid 0 00:29:04.555 [2024-11-02 11:41:04.895637] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:04.555 [2024-11-02 11:41:04.895652] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:04.555 [2024-11-02 11:41:04.895659] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:04.555 [2024-11-02 11:41:04.895665] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20a7d80): datao=0, datal=1024, cccid=4 00:29:04.555 [2024-11-02 11:41:04.895673] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2113a80) on tqpair(0x20a7d80): expected_datao=0, payload_size=1024 00:29:04.555 [2024-11-02 11:41:04.895680] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:04.555 [2024-11-02 11:41:04.895689] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:04.555 [2024-11-02 11:41:04.895696] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:04.555 [2024-11-02 11:41:04.895705] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:04.555 [2024-11-02 11:41:04.895714] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:04.555 [2024-11-02 11:41:04.895720] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:04.555 [2024-11-02 11:41:04.895727] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2113c00) on tqpair=0x20a7d80 00:29:04.555 [2024-11-02 11:41:04.936376] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:04.555 [2024-11-02 11:41:04.936398] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:04.555 [2024-11-02 11:41:04.936406] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:04.555 [2024-11-02 11:41:04.936413] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2113a80) on tqpair=0x20a7d80 00:29:04.555 [2024-11-02 11:41:04.936431] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:04.555 [2024-11-02 11:41:04.936440] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20a7d80) 00:29:04.555 [2024-11-02 11:41:04.936451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.555 [2024-11-02 11:41:04.936482] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2113a80, cid 4, qid 0 00:29:04.555 [2024-11-02 11:41:04.936637] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:04.555 [2024-11-02 11:41:04.936654] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:04.555 [2024-11-02 11:41:04.936661] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:04.555 [2024-11-02 11:41:04.936667] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20a7d80): datao=0, datal=3072, cccid=4 00:29:04.555 [2024-11-02 11:41:04.936680] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2113a80) on tqpair(0x20a7d80): expected_datao=0, payload_size=3072 00:29:04.555 [2024-11-02 11:41:04.936689] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:04.555 [2024-11-02 11:41:04.936699] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:04.555 [2024-11-02 11:41:04.936706] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:04.555 [2024-11-02 11:41:04.936720] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:04.555 [2024-11-02 11:41:04.936729] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:04.555 [2024-11-02 11:41:04.936736] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:04.555 [2024-11-02 11:41:04.936743] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2113a80) on tqpair=0x20a7d80 00:29:04.555 [2024-11-02 11:41:04.936758] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:04.555 [2024-11-02 11:41:04.936766] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20a7d80) 00:29:04.555 [2024-11-02 11:41:04.936777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.555 [2024-11-02 11:41:04.936806] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2113a80, cid 4, qid 0 00:29:04.555 [2024-11-02 11:41:04.936946] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:04.555 [2024-11-02 11:41:04.936958] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:04.555 [2024-11-02 11:41:04.936965] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:04.555 [2024-11-02 11:41:04.936971] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20a7d80): datao=0, datal=8, cccid=4 00:29:04.555 [2024-11-02 11:41:04.936979] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2113a80) on tqpair(0x20a7d80): expected_datao=0, payload_size=8 00:29:04.555 [2024-11-02 11:41:04.936986] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:04.555 [2024-11-02 11:41:04.936996] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:04.555 [2024-11-02 11:41:04.937002] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:04.836 [2024-11-02 11:41:04.977361] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:04.836 [2024-11-02 11:41:04.977382] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:04.836 [2024-11-02 11:41:04.977390] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:04.836 [2024-11-02 11:41:04.977397] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2113a80) on tqpair=0x20a7d80 00:29:04.836 ===================================================== 00:29:04.836 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:29:04.836 ===================================================== 00:29:04.836 Controller Capabilities/Features 00:29:04.836 ================================ 00:29:04.836 Vendor ID: 0000 00:29:04.836 Subsystem Vendor ID: 0000 00:29:04.836 Serial Number: .................... 00:29:04.836 Model Number: ........................................ 
00:29:04.836 Firmware Version: 25.01 00:29:04.836 Recommended Arb Burst: 0 00:29:04.836 IEEE OUI Identifier: 00 00 00 00:29:04.836 Multi-path I/O 00:29:04.836 May have multiple subsystem ports: No 00:29:04.836 May have multiple controllers: No 00:29:04.836 Associated with SR-IOV VF: No 00:29:04.836 Max Data Transfer Size: 131072 00:29:04.836 Max Number of Namespaces: 0 00:29:04.836 Max Number of I/O Queues: 1024 00:29:04.836 NVMe Specification Version (VS): 1.3 00:29:04.836 NVMe Specification Version (Identify): 1.3 00:29:04.836 Maximum Queue Entries: 128 00:29:04.836 Contiguous Queues Required: Yes 00:29:04.836 Arbitration Mechanisms Supported 00:29:04.836 Weighted Round Robin: Not Supported 00:29:04.836 Vendor Specific: Not Supported 00:29:04.836 Reset Timeout: 15000 ms 00:29:04.836 Doorbell Stride: 4 bytes 00:29:04.836 NVM Subsystem Reset: Not Supported 00:29:04.836 Command Sets Supported 00:29:04.836 NVM Command Set: Supported 00:29:04.836 Boot Partition: Not Supported 00:29:04.836 Memory Page Size Minimum: 4096 bytes 00:29:04.836 Memory Page Size Maximum: 4096 bytes 00:29:04.836 Persistent Memory Region: Not Supported 00:29:04.836 Optional Asynchronous Events Supported 00:29:04.836 Namespace Attribute Notices: Not Supported 00:29:04.836 Firmware Activation Notices: Not Supported 00:29:04.836 ANA Change Notices: Not Supported 00:29:04.836 PLE Aggregate Log Change Notices: Not Supported 00:29:04.836 LBA Status Info Alert Notices: Not Supported 00:29:04.836 EGE Aggregate Log Change Notices: Not Supported 00:29:04.836 Normal NVM Subsystem Shutdown event: Not Supported 00:29:04.836 Zone Descriptor Change Notices: Not Supported 00:29:04.836 Discovery Log Change Notices: Supported 00:29:04.836 Controller Attributes 00:29:04.836 128-bit Host Identifier: Not Supported 00:29:04.836 Non-Operational Permissive Mode: Not Supported 00:29:04.836 NVM Sets: Not Supported 00:29:04.836 Read Recovery Levels: Not Supported 00:29:04.836 Endurance Groups: Not Supported 00:29:04.836 Predictable Latency Mode: Not Supported 00:29:04.836 Traffic Based Keep ALive: Not Supported 00:29:04.836 Namespace Granularity: Not Supported 00:29:04.836 SQ Associations: Not Supported 00:29:04.836 UUID List: Not Supported 00:29:04.836 Multi-Domain Subsystem: Not Supported 00:29:04.836 Fixed Capacity Management: Not Supported 00:29:04.836 Variable Capacity Management: Not Supported 00:29:04.836 Delete Endurance Group: Not Supported 00:29:04.836 Delete NVM Set: Not Supported 00:29:04.836 Extended LBA Formats Supported: Not Supported 00:29:04.836 Flexible Data Placement Supported: Not Supported 00:29:04.836 00:29:04.836 Controller Memory Buffer Support 00:29:04.836 ================================ 00:29:04.836 Supported: No 00:29:04.836 00:29:04.836 Persistent Memory Region Support 00:29:04.836 ================================ 00:29:04.836 Supported: No 00:29:04.836 00:29:04.836 Admin Command Set Attributes 00:29:04.836 ============================ 00:29:04.836 Security Send/Receive: Not Supported 00:29:04.836 Format NVM: Not Supported 00:29:04.836 Firmware Activate/Download: Not Supported 00:29:04.836 Namespace Management: Not Supported 00:29:04.836 Device Self-Test: Not Supported 00:29:04.836 Directives: Not Supported 00:29:04.836 NVMe-MI: Not Supported 00:29:04.836 Virtualization Management: Not Supported 00:29:04.836 Doorbell Buffer Config: Not Supported 00:29:04.836 Get LBA Status Capability: Not Supported 00:29:04.836 Command & Feature Lockdown Capability: Not Supported 00:29:04.836 Abort Command Limit: 1 00:29:04.836 Async 
Event Request Limit: 4 00:29:04.836 Number of Firmware Slots: N/A 00:29:04.836 Firmware Slot 1 Read-Only: N/A 00:29:04.836 Firmware Activation Without Reset: N/A 00:29:04.836 Multiple Update Detection Support: N/A 00:29:04.836 Firmware Update Granularity: No Information Provided 00:29:04.836 Per-Namespace SMART Log: No 00:29:04.836 Asymmetric Namespace Access Log Page: Not Supported 00:29:04.836 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:29:04.836 Command Effects Log Page: Not Supported 00:29:04.836 Get Log Page Extended Data: Supported 00:29:04.836 Telemetry Log Pages: Not Supported 00:29:04.836 Persistent Event Log Pages: Not Supported 00:29:04.836 Supported Log Pages Log Page: May Support 00:29:04.836 Commands Supported & Effects Log Page: Not Supported 00:29:04.836 Feature Identifiers & Effects Log Page:May Support 00:29:04.836 NVMe-MI Commands & Effects Log Page: May Support 00:29:04.836 Data Area 4 for Telemetry Log: Not Supported 00:29:04.836 Error Log Page Entries Supported: 128 00:29:04.836 Keep Alive: Not Supported 00:29:04.836 00:29:04.836 NVM Command Set Attributes 00:29:04.836 ========================== 00:29:04.836 Submission Queue Entry Size 00:29:04.836 Max: 1 00:29:04.836 Min: 1 00:29:04.836 Completion Queue Entry Size 00:29:04.836 Max: 1 00:29:04.836 Min: 1 00:29:04.836 Number of Namespaces: 0 00:29:04.836 Compare Command: Not Supported 00:29:04.836 Write Uncorrectable Command: Not Supported 00:29:04.836 Dataset Management Command: Not Supported 00:29:04.836 Write Zeroes Command: Not Supported 00:29:04.836 Set Features Save Field: Not Supported 00:29:04.836 Reservations: Not Supported 00:29:04.836 Timestamp: Not Supported 00:29:04.836 Copy: Not Supported 00:29:04.836 Volatile Write Cache: Not Present 00:29:04.836 Atomic Write Unit (Normal): 1 00:29:04.836 Atomic Write Unit (PFail): 1 00:29:04.836 Atomic Compare & Write Unit: 1 00:29:04.836 Fused Compare & Write: Supported 00:29:04.836 Scatter-Gather List 00:29:04.836 SGL Command Set: Supported 00:29:04.836 SGL Keyed: Supported 00:29:04.836 SGL Bit Bucket Descriptor: Not Supported 00:29:04.836 SGL Metadata Pointer: Not Supported 00:29:04.836 Oversized SGL: Not Supported 00:29:04.836 SGL Metadata Address: Not Supported 00:29:04.836 SGL Offset: Supported 00:29:04.836 Transport SGL Data Block: Not Supported 00:29:04.836 Replay Protected Memory Block: Not Supported 00:29:04.836 00:29:04.836 Firmware Slot Information 00:29:04.836 ========================= 00:29:04.836 Active slot: 0 00:29:04.836 00:29:04.836 00:29:04.836 Error Log 00:29:04.836 ========= 00:29:04.836 00:29:04.836 Active Namespaces 00:29:04.836 ================= 00:29:04.836 Discovery Log Page 00:29:04.836 ================== 00:29:04.837 Generation Counter: 2 00:29:04.837 Number of Records: 2 00:29:04.837 Record Format: 0 00:29:04.837 00:29:04.837 Discovery Log Entry 0 00:29:04.837 ---------------------- 00:29:04.837 Transport Type: 3 (TCP) 00:29:04.837 Address Family: 1 (IPv4) 00:29:04.837 Subsystem Type: 3 (Current Discovery Subsystem) 00:29:04.837 Entry Flags: 00:29:04.837 Duplicate Returned Information: 1 00:29:04.837 Explicit Persistent Connection Support for Discovery: 1 00:29:04.837 Transport Requirements: 00:29:04.837 Secure Channel: Not Required 00:29:04.837 Port ID: 0 (0x0000) 00:29:04.837 Controller ID: 65535 (0xffff) 00:29:04.837 Admin Max SQ Size: 128 00:29:04.837 Transport Service Identifier: 4420 00:29:04.837 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:29:04.837 Transport Address: 10.0.0.2 00:29:04.837 
Discovery Log Entry 1 00:29:04.837 ---------------------- 00:29:04.837 Transport Type: 3 (TCP) 00:29:04.837 Address Family: 1 (IPv4) 00:29:04.837 Subsystem Type: 2 (NVM Subsystem) 00:29:04.837 Entry Flags: 00:29:04.837 Duplicate Returned Information: 0 00:29:04.837 Explicit Persistent Connection Support for Discovery: 0 00:29:04.837 Transport Requirements: 00:29:04.837 Secure Channel: Not Required 00:29:04.837 Port ID: 0 (0x0000) 00:29:04.837 Controller ID: 65535 (0xffff) 00:29:04.837 Admin Max SQ Size: 128 00:29:04.837 Transport Service Identifier: 4420 00:29:04.837 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:29:04.837 Transport Address: 10.0.0.2 [2024-11-02 11:41:04.977512] nvme_ctrlr.c:4363:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:29:04.837 [2024-11-02 11:41:04.977542] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2113480) on tqpair=0x20a7d80 00:29:04.837 [2024-11-02 11:41:04.977554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.837 [2024-11-02 11:41:04.977564] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2113600) on tqpair=0x20a7d80 00:29:04.837 [2024-11-02 11:41:04.977571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.837 [2024-11-02 11:41:04.977579] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2113780) on tqpair=0x20a7d80 00:29:04.837 [2024-11-02 11:41:04.977587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.837 [2024-11-02 11:41:04.977595] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2113900) on tqpair=0x20a7d80 00:29:04.837 [2024-11-02 11:41:04.977603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.837 [2024-11-02 11:41:04.977616] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:04.837 [2024-11-02 11:41:04.977628] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:04.837 [2024-11-02 11:41:04.977635] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20a7d80) 00:29:04.837 [2024-11-02 11:41:04.977647] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.837 [2024-11-02 11:41:04.977686] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2113900, cid 3, qid 0 00:29:04.837 [2024-11-02 11:41:04.977818] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:04.837 [2024-11-02 11:41:04.977831] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:04.837 [2024-11-02 11:41:04.977838] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:04.837 [2024-11-02 11:41:04.977845] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2113900) on tqpair=0x20a7d80 00:29:04.837 [2024-11-02 11:41:04.977861] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:04.837 [2024-11-02 11:41:04.977871] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:04.837 [2024-11-02 11:41:04.977877] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20a7d80) 00:29:04.837 [2024-11-02 
11:41:04.977888] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.837 [2024-11-02 11:41:04.977914] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2113900, cid 3, qid 0 00:29:04.837 [2024-11-02 11:41:04.978042] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:04.837 [2024-11-02 11:41:04.978054] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:04.837 [2024-11-02 11:41:04.978060] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:04.837 [2024-11-02 11:41:04.978067] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2113900) on tqpair=0x20a7d80 00:29:04.837 [2024-11-02 11:41:04.978076] nvme_ctrlr.c:1124:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:29:04.837 [2024-11-02 11:41:04.978084] nvme_ctrlr.c:1127:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:29:04.837 [2024-11-02 11:41:04.978099] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:04.837 [2024-11-02 11:41:04.978108] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:04.837 [2024-11-02 11:41:04.978115] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20a7d80) 00:29:04.837 [2024-11-02 11:41:04.978125] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.837 [2024-11-02 11:41:04.978145] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2113900, cid 3, qid 0 00:29:04.837 [2024-11-02 11:41:04.978269] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:04.837 [2024-11-02 11:41:04.978282] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:04.837 [2024-11-02 11:41:04.978289] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:04.837 [2024-11-02 11:41:04.978296] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2113900) on tqpair=0x20a7d80 00:29:04.837 [2024-11-02 11:41:04.978323] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:04.837 [2024-11-02 11:41:04.978332] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:04.837 [2024-11-02 11:41:04.978338] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20a7d80) 00:29:04.837 [2024-11-02 11:41:04.978349] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.837 [2024-11-02 11:41:04.978369] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2113900, cid 3, qid 0 00:29:04.837 [2024-11-02 11:41:04.978492] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:04.837 [2024-11-02 11:41:04.978507] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:04.837 [2024-11-02 11:41:04.978514] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:04.837 [2024-11-02 11:41:04.978525] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2113900) on tqpair=0x20a7d80 00:29:04.837 [2024-11-02 11:41:04.978542] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:04.837 [2024-11-02 11:41:04.978551] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:04.837 [2024-11-02 11:41:04.978558] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20a7d80) 00:29:04.837 [2024-11-02 11:41:04.978568] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.837 [2024-11-02 11:41:04.978588] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2113900, cid 3, qid 0 00:29:04.837 [2024-11-02 11:41:04.978712] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:04.837 [2024-11-02 11:41:04.978727] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:04.837 [2024-11-02 11:41:04.978734] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:04.837 [2024-11-02 11:41:04.978740] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2113900) on tqpair=0x20a7d80 00:29:04.837 [2024-11-02 11:41:04.978757] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:04.837 [2024-11-02 11:41:04.978766] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:04.837 [2024-11-02 11:41:04.978772] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20a7d80) 00:29:04.837 [2024-11-02 11:41:04.978783] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.837 [2024-11-02 11:41:04.978803] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2113900, cid 3, qid 0 00:29:04.837 [2024-11-02 11:41:04.978923] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:04.837 [2024-11-02 11:41:04.978938] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:04.837 [2024-11-02 11:41:04.978945] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:04.837 [2024-11-02 11:41:04.978951] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2113900) on tqpair=0x20a7d80 00:29:04.837 [2024-11-02 11:41:04.978968] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:04.837 [2024-11-02 11:41:04.978977] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:04.837 [2024-11-02 11:41:04.978984] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20a7d80) 00:29:04.837 [2024-11-02 11:41:04.978994] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.837 [2024-11-02 11:41:04.979014] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2113900, cid 3, qid 0 00:29:04.837 [2024-11-02 11:41:04.979129] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:04.837 [2024-11-02 11:41:04.979141] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:04.837 [2024-11-02 11:41:04.979147] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:04.837 [2024-11-02 11:41:04.979154] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2113900) on tqpair=0x20a7d80 00:29:04.837 [2024-11-02 11:41:04.979170] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:04.837 [2024-11-02 11:41:04.979179] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:04.837 [2024-11-02 11:41:04.979185] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20a7d80) 00:29:04.837 [2024-11-02 11:41:04.979195] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.837 [2024-11-02 11:41:04.979215] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2113900, cid 3, qid 0 00:29:04.837 [2024-11-02 11:41:04.983267] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:04.837 [2024-11-02 11:41:04.983286] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:04.837 [2024-11-02 11:41:04.983293] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:04.837 [2024-11-02 11:41:04.983300] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2113900) on tqpair=0x20a7d80 00:29:04.837 [2024-11-02 11:41:04.983333] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:04.838 [2024-11-02 11:41:04.983344] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:04.838 [2024-11-02 11:41:04.983350] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20a7d80) 00:29:04.838 [2024-11-02 11:41:04.983361] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.838 [2024-11-02 11:41:04.983384] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2113900, cid 3, qid 0 00:29:04.838 [2024-11-02 11:41:04.983503] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:04.838 [2024-11-02 11:41:04.983515] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:04.838 [2024-11-02 11:41:04.983521] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:04.838 [2024-11-02 11:41:04.983528] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2113900) on tqpair=0x20a7d80 00:29:04.838 [2024-11-02 11:41:04.983541] nvme_ctrlr.c:1246:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds 00:29:04.838 00:29:04.838 11:41:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:29:04.838 [2024-11-02 11:41:05.021508] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 
00:29:04.838 [2024-11-02 11:41:05.021568] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3917885 ] 00:29:04.838 [2024-11-02 11:41:05.071813] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:29:04.838 [2024-11-02 11:41:05.071866] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:04.838 [2024-11-02 11:41:05.071876] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:04.838 [2024-11-02 11:41:05.071892] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:04.838 [2024-11-02 11:41:05.071903] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:29:04.838 [2024-11-02 11:41:05.075517] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:29:04.838 [2024-11-02 11:41:05.075571] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xd58d80 0 00:29:04.838 [2024-11-02 11:41:05.082265] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:29:04.838 [2024-11-02 11:41:05.082286] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:29:04.838 [2024-11-02 11:41:05.082294] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:29:04.838 [2024-11-02 11:41:05.082300] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:29:04.838 [2024-11-02 11:41:05.082343] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:04.838 [2024-11-02 11:41:05.082356] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:04.838 [2024-11-02 11:41:05.082363] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd58d80) 00:29:04.838 [2024-11-02 11:41:05.082377] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:04.838 [2024-11-02 11:41:05.082405] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc4480, cid 0, qid 0 00:29:04.838 [2024-11-02 11:41:05.089271] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:04.838 [2024-11-02 11:41:05.089290] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:04.838 [2024-11-02 11:41:05.089303] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:04.838 [2024-11-02 11:41:05.089310] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc4480) on tqpair=0xd58d80 00:29:04.838 [2024-11-02 11:41:05.089324] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:04.838 [2024-11-02 11:41:05.089349] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:29:04.838 [2024-11-02 11:41:05.089359] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:29:04.838 [2024-11-02 11:41:05.089377] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:04.838 [2024-11-02 11:41:05.089385] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:04.838 [2024-11-02 11:41:05.089392] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd58d80) 00:29:04.838 [2024-11-02 11:41:05.089403] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.838 [2024-11-02 11:41:05.089429] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc4480, cid 0, qid 0 00:29:04.838 [2024-11-02 11:41:05.089578] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:04.838 [2024-11-02 11:41:05.089593] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:04.838 [2024-11-02 11:41:05.089600] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:04.838 [2024-11-02 11:41:05.089607] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc4480) on tqpair=0xd58d80 00:29:04.838 [2024-11-02 11:41:05.089615] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:29:04.838 [2024-11-02 11:41:05.089629] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:29:04.838 [2024-11-02 11:41:05.089641] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:04.838 [2024-11-02 11:41:05.089648] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:04.838 [2024-11-02 11:41:05.089654] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd58d80) 00:29:04.838 [2024-11-02 11:41:05.089665] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.838 [2024-11-02 11:41:05.089687] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc4480, cid 0, qid 0 00:29:04.838 [2024-11-02 11:41:05.089797] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:04.838 [2024-11-02 11:41:05.089809] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:04.838 [2024-11-02 11:41:05.089816] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:04.838 [2024-11-02 11:41:05.089822] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc4480) on tqpair=0xd58d80 00:29:04.838 [2024-11-02 11:41:05.089831] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:29:04.838 [2024-11-02 11:41:05.089844] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:29:04.838 [2024-11-02 11:41:05.089856] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:04.838 [2024-11-02 11:41:05.089863] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:04.838 [2024-11-02 11:41:05.089869] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd58d80) 00:29:04.838 [2024-11-02 11:41:05.089880] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.838 [2024-11-02 11:41:05.089901] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc4480, cid 0, qid 0 00:29:04.838 [2024-11-02 11:41:05.090010] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:04.838 [2024-11-02 11:41:05.090022] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:04.838 [2024-11-02 11:41:05.090032] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:04.838 [2024-11-02 11:41:05.090039] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc4480) on tqpair=0xd58d80 00:29:04.838 [2024-11-02 11:41:05.090048] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:04.838 [2024-11-02 11:41:05.090068] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:04.838 [2024-11-02 11:41:05.090079] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:04.838 [2024-11-02 11:41:05.090085] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd58d80) 00:29:04.838 [2024-11-02 11:41:05.090095] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.838 [2024-11-02 11:41:05.090116] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc4480, cid 0, qid 0 00:29:04.838 [2024-11-02 11:41:05.090221] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:04.838 [2024-11-02 11:41:05.090233] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:04.838 [2024-11-02 11:41:05.090240] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:04.838 [2024-11-02 11:41:05.090247] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc4480) on tqpair=0xd58d80 00:29:04.838 [2024-11-02 11:41:05.090254] nvme_ctrlr.c:3870:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:29:04.838 [2024-11-02 11:41:05.090273] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:29:04.838 [2024-11-02 11:41:05.090287] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:04.838 [2024-11-02 11:41:05.090396] nvme_ctrlr.c:4068:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:29:04.838 [2024-11-02 11:41:05.090405] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:29:04.838 [2024-11-02 11:41:05.090417] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:04.838 [2024-11-02 11:41:05.090424] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:04.838 [2024-11-02 11:41:05.090430] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd58d80) 00:29:04.838 [2024-11-02 11:41:05.090440] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.838 [2024-11-02 11:41:05.090462] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc4480, cid 0, qid 0 00:29:04.838 [2024-11-02 11:41:05.090601] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:04.838 [2024-11-02 11:41:05.090616] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:04.838 [2024-11-02 11:41:05.090623] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:04.838 [2024-11-02 11:41:05.090630] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc4480) on tqpair=0xd58d80 00:29:04.838 [2024-11-02 
11:41:05.090638] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:04.838 [2024-11-02 11:41:05.090654] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:04.838 [2024-11-02 11:41:05.090663] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:04.838 [2024-11-02 11:41:05.090670] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd58d80) 00:29:04.838 [2024-11-02 11:41:05.090680] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.838 [2024-11-02 11:41:05.090701] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc4480, cid 0, qid 0 00:29:04.838 [2024-11-02 11:41:05.090809] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:04.839 [2024-11-02 11:41:05.090824] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:04.839 [2024-11-02 11:41:05.090835] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:04.839 [2024-11-02 11:41:05.090842] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc4480) on tqpair=0xd58d80 00:29:04.839 [2024-11-02 11:41:05.090850] nvme_ctrlr.c:3905:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:04.839 [2024-11-02 11:41:05.090858] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:29:04.839 [2024-11-02 11:41:05.090872] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:29:04.839 [2024-11-02 11:41:05.090885] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:29:04.839 [2024-11-02 11:41:05.090899] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:04.839 [2024-11-02 11:41:05.090906] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd58d80) 00:29:04.839 [2024-11-02 11:41:05.090917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.839 [2024-11-02 11:41:05.090938] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc4480, cid 0, qid 0 00:29:04.839 [2024-11-02 11:41:05.091087] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:04.839 [2024-11-02 11:41:05.091099] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:04.839 [2024-11-02 11:41:05.091106] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:04.839 [2024-11-02 11:41:05.091112] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd58d80): datao=0, datal=4096, cccid=0 00:29:04.839 [2024-11-02 11:41:05.091119] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdc4480) on tqpair(0xd58d80): expected_datao=0, payload_size=4096 00:29:04.839 [2024-11-02 11:41:05.091127] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:04.839 [2024-11-02 11:41:05.091143] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:04.839 [2024-11-02 11:41:05.091152] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 
00:29:04.839 [2024-11-02 11:41:05.131389] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:04.839 [2024-11-02 11:41:05.131410] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:04.839 [2024-11-02 11:41:05.131418] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:04.839 [2024-11-02 11:41:05.131425] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc4480) on tqpair=0xd58d80 00:29:04.839 [2024-11-02 11:41:05.131437] nvme_ctrlr.c:2054:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:29:04.839 [2024-11-02 11:41:05.131445] nvme_ctrlr.c:2058:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:29:04.839 [2024-11-02 11:41:05.131453] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:29:04.839 [2024-11-02 11:41:05.131460] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:29:04.839 [2024-11-02 11:41:05.131467] nvme_ctrlr.c:2100:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:29:04.839 [2024-11-02 11:41:05.131475] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:29:04.839 [2024-11-02 11:41:05.131489] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:29:04.839 [2024-11-02 11:41:05.131501] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:04.839 [2024-11-02 11:41:05.131509] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:04.839 [2024-11-02 11:41:05.131515] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd58d80) 00:29:04.839 [2024-11-02 11:41:05.131534] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:04.839 [2024-11-02 11:41:05.131560] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc4480, cid 0, qid 0 00:29:04.839 [2024-11-02 11:41:05.131671] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:04.839 [2024-11-02 11:41:05.131683] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:04.839 [2024-11-02 11:41:05.131690] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:04.839 [2024-11-02 11:41:05.131696] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc4480) on tqpair=0xd58d80 00:29:04.839 [2024-11-02 11:41:05.131712] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:04.839 [2024-11-02 11:41:05.131721] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:04.839 [2024-11-02 11:41:05.131727] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd58d80) 00:29:04.839 [2024-11-02 11:41:05.131737] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:04.839 [2024-11-02 11:41:05.131747] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:04.839 [2024-11-02 11:41:05.131754] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:04.839 [2024-11-02 11:41:05.131760] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=1 on tqpair(0xd58d80) 00:29:04.839 [2024-11-02 11:41:05.131769] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:04.839 [2024-11-02 11:41:05.131778] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:04.839 [2024-11-02 11:41:05.131785] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:04.839 [2024-11-02 11:41:05.131791] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xd58d80) 00:29:04.839 [2024-11-02 11:41:05.131799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:04.839 [2024-11-02 11:41:05.131809] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:04.839 [2024-11-02 11:41:05.131815] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:04.839 [2024-11-02 11:41:05.131821] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd58d80) 00:29:04.839 [2024-11-02 11:41:05.131830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:04.839 [2024-11-02 11:41:05.131839] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:29:04.839 [2024-11-02 11:41:05.131853] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:29:04.839 [2024-11-02 11:41:05.131864] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:04.839 [2024-11-02 11:41:05.131871] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd58d80) 00:29:04.839 [2024-11-02 11:41:05.131881] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.839 [2024-11-02 11:41:05.131904] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc4480, cid 0, qid 0 00:29:04.839 [2024-11-02 11:41:05.131915] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc4600, cid 1, qid 0 00:29:04.839 [2024-11-02 11:41:05.131923] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc4780, cid 2, qid 0 00:29:04.839 [2024-11-02 11:41:05.131930] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc4900, cid 3, qid 0 00:29:04.839 [2024-11-02 11:41:05.131938] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc4a80, cid 4, qid 0 00:29:04.839 [2024-11-02 11:41:05.132104] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:04.839 [2024-11-02 11:41:05.132116] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:04.839 [2024-11-02 11:41:05.132127] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:04.839 [2024-11-02 11:41:05.132134] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc4a80) on tqpair=0xd58d80 00:29:04.839 [2024-11-02 11:41:05.132146] nvme_ctrlr.c:3023:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:29:04.839 [2024-11-02 11:41:05.132156] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 
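Note: the SET FEATURES ASYNC EVENT CONFIGURATION capsule, the four ASYNC EVENT REQUEST commands (cid 0-3), the keep-alive negotiation ("Sending keep alive every 5000000 us") and SET FEATURES NUMBER OF QUEUES traced above all follow from connect-time controller options. The hedged sketch below shows where those knobs live (struct spdk_nvme_ctrlr_opts) and how completions of the queued AER commands would be observed; the specific option values, callback and file name are illustrative assumptions, not values taken from this run.

```c
/* aer_keepalive_sketch.c - hedged sketch only; illustrates the connect-time
 * options behind the SET FEATURES / ASYNC EVENT REQUEST / KEEP ALIVE capsules
 * traced above.  Option values are examples, not the test's.
 */
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

static void
aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
{
	/* Fires when the target completes one of the queued AER commands. */
	printf("async event: cdw0=0x%x\n", cpl->cdw0);
}

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr_opts opts;
	struct spdk_nvme_ctrlr *ctrlr;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "aer_keepalive_sketch";
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	spdk_nvme_trid_populate_transport(&trid, SPDK_NVME_TRANSPORT_TCP);
	trid.adrfam = SPDK_NVMF_ADRFAM_IPV4;
	snprintf(trid.traddr, sizeof(trid.traddr), "10.0.0.2");
	snprintf(trid.trsvcid, sizeof(trid.trsvcid), "4420");
	snprintf(trid.subnqn, sizeof(trid.subnqn), "nqn.2016-06.io.spdk:cnode1");

	spdk_nvme_ctrlr_get_default_ctrlr_opts(&opts, sizeof(opts));
	/* A 10 s keep-alive timeout makes the driver send KEEP ALIVE roughly
	 * every 5 s, matching "Sending keep alive every 5000000 us" above. */
	opts.keep_alive_timeout_ms = 10000;
	/* Requested I/O queue count, negotiated through the
	 * SET FEATURES NUMBER OF QUEUES (FID 07h) capsule seen in the log. */
	opts.num_io_queues = 8;

	ctrlr = spdk_nvme_connect(&trid, &opts, sizeof(opts));
	if (ctrlr == NULL) {
		return 1;
	}

	/* Completions of the ASYNC EVENT REQUEST commands land in aer_cb(). */
	spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);

	/* Poll the admin queue; keep-alives and AER completions are processed
	 * from this call. */
	for (int i = 0; i < 1000; i++) {
		spdk_nvme_ctrlr_process_admin_completions(ctrlr);
	}

	spdk_nvme_detach(ctrlr);
	return 0;
}
```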
00:29:04.839 [2024-11-02 11:41:05.132170] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:29:04.839 [2024-11-02 11:41:05.132181] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:29:04.839 [2024-11-02 11:41:05.132191] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:04.839 [2024-11-02 11:41:05.132198] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:04.839 [2024-11-02 11:41:05.132205] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd58d80) 00:29:04.839 [2024-11-02 11:41:05.132215] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:04.839 [2024-11-02 11:41:05.132236] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc4a80, cid 4, qid 0 00:29:04.839 [2024-11-02 11:41:05.132380] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:04.839 [2024-11-02 11:41:05.132397] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:04.839 [2024-11-02 11:41:05.132404] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:04.839 [2024-11-02 11:41:05.132410] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc4a80) on tqpair=0xd58d80 00:29:04.839 [2024-11-02 11:41:05.132479] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:29:04.839 [2024-11-02 11:41:05.132499] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:29:04.839 [2024-11-02 11:41:05.132513] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:04.839 [2024-11-02 11:41:05.132520] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd58d80) 00:29:04.839 [2024-11-02 11:41:05.132531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.839 [2024-11-02 11:41:05.132554] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc4a80, cid 4, qid 0 00:29:04.839 [2024-11-02 11:41:05.132683] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:04.839 [2024-11-02 11:41:05.132695] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:04.839 [2024-11-02 11:41:05.132702] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:04.839 [2024-11-02 11:41:05.132709] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd58d80): datao=0, datal=4096, cccid=4 00:29:04.839 [2024-11-02 11:41:05.132716] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdc4a80) on tqpair(0xd58d80): expected_datao=0, payload_size=4096 00:29:04.839 [2024-11-02 11:41:05.132723] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:04.839 [2024-11-02 11:41:05.132740] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:04.839 [2024-11-02 11:41:05.132749] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:04.839 [2024-11-02 11:41:05.177272] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:04.839 [2024-11-02 11:41:05.177291] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:04.839 [2024-11-02 11:41:05.177298] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:04.840 [2024-11-02 11:41:05.177306] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc4a80) on tqpair=0xd58d80 00:29:04.840 [2024-11-02 11:41:05.177321] nvme_ctrlr.c:4699:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:29:04.840 [2024-11-02 11:41:05.177341] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:29:04.840 [2024-11-02 11:41:05.177360] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:29:04.840 [2024-11-02 11:41:05.177373] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:04.840 [2024-11-02 11:41:05.177381] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd58d80) 00:29:04.840 [2024-11-02 11:41:05.177392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.840 [2024-11-02 11:41:05.177416] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc4a80, cid 4, qid 0 00:29:04.840 [2024-11-02 11:41:05.177607] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:04.840 [2024-11-02 11:41:05.177623] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:04.840 [2024-11-02 11:41:05.177630] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:04.840 [2024-11-02 11:41:05.177636] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd58d80): datao=0, datal=4096, cccid=4 00:29:04.840 [2024-11-02 11:41:05.177644] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdc4a80) on tqpair(0xd58d80): expected_datao=0, payload_size=4096 00:29:04.840 [2024-11-02 11:41:05.177651] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:04.840 [2024-11-02 11:41:05.177661] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:04.840 [2024-11-02 11:41:05.177669] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:04.840 [2024-11-02 11:41:05.218389] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:04.840 [2024-11-02 11:41:05.218410] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:04.840 [2024-11-02 11:41:05.218418] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:04.840 [2024-11-02 11:41:05.218425] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc4a80) on tqpair=0xd58d80 00:29:04.840 [2024-11-02 11:41:05.218447] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:29:04.840 [2024-11-02 11:41:05.218467] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:29:04.840 [2024-11-02 11:41:05.218481] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:04.840 [2024-11-02 11:41:05.218489] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd58d80) 00:29:04.840 [2024-11-02 11:41:05.218500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.840 [2024-11-02 11:41:05.218524] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc4a80, cid 4, qid 0 00:29:04.840 [2024-11-02 11:41:05.218651] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:04.840 [2024-11-02 11:41:05.218663] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:04.840 [2024-11-02 11:41:05.218669] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:04.840 [2024-11-02 11:41:05.218676] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd58d80): datao=0, datal=4096, cccid=4 00:29:04.840 [2024-11-02 11:41:05.218683] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdc4a80) on tqpair(0xd58d80): expected_datao=0, payload_size=4096 00:29:04.840 [2024-11-02 11:41:05.218691] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:04.840 [2024-11-02 11:41:05.218707] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:04.840 [2024-11-02 11:41:05.218716] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:05.103 [2024-11-02 11:41:05.259385] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:05.103 [2024-11-02 11:41:05.259410] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:05.103 [2024-11-02 11:41:05.259419] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:05.103 [2024-11-02 11:41:05.259426] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc4a80) on tqpair=0xd58d80 00:29:05.103 [2024-11-02 11:41:05.259440] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:29:05.103 [2024-11-02 11:41:05.259455] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:29:05.103 [2024-11-02 11:41:05.259471] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:29:05.103 [2024-11-02 11:41:05.259483] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:29:05.103 [2024-11-02 11:41:05.259492] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:29:05.103 [2024-11-02 11:41:05.259500] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:29:05.103 [2024-11-02 11:41:05.259509] nvme_ctrlr.c:3111:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:29:05.103 [2024-11-02 11:41:05.259517] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:29:05.103 [2024-11-02 11:41:05.259525] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:29:05.103 [2024-11-02 11:41:05.259544] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.103 [2024-11-02 11:41:05.259553] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd58d80) 00:29:05.103 
[2024-11-02 11:41:05.259564] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.103 [2024-11-02 11:41:05.259575] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:05.103 [2024-11-02 11:41:05.259582] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.103 [2024-11-02 11:41:05.259588] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd58d80) 00:29:05.103 [2024-11-02 11:41:05.259597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:05.103 [2024-11-02 11:41:05.259625] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc4a80, cid 4, qid 0 00:29:05.103 [2024-11-02 11:41:05.259637] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc4c00, cid 5, qid 0 00:29:05.103 [2024-11-02 11:41:05.259759] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:05.103 [2024-11-02 11:41:05.259774] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:05.103 [2024-11-02 11:41:05.259781] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:05.103 [2024-11-02 11:41:05.259788] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc4a80) on tqpair=0xd58d80 00:29:05.103 [2024-11-02 11:41:05.259798] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:05.103 [2024-11-02 11:41:05.259808] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:05.103 [2024-11-02 11:41:05.259814] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:05.103 [2024-11-02 11:41:05.259821] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc4c00) on tqpair=0xd58d80 00:29:05.103 [2024-11-02 11:41:05.259837] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.103 [2024-11-02 11:41:05.259846] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd58d80) 00:29:05.103 [2024-11-02 11:41:05.259856] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.103 [2024-11-02 11:41:05.259882] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc4c00, cid 5, qid 0 00:29:05.103 [2024-11-02 11:41:05.260041] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:05.103 [2024-11-02 11:41:05.260053] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:05.103 [2024-11-02 11:41:05.260060] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:05.103 [2024-11-02 11:41:05.260066] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc4c00) on tqpair=0xd58d80 00:29:05.103 [2024-11-02 11:41:05.260082] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.103 [2024-11-02 11:41:05.260091] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd58d80) 00:29:05.103 [2024-11-02 11:41:05.260101] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.103 [2024-11-02 11:41:05.260122] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc4c00, cid 5, qid 0 00:29:05.103 [2024-11-02 11:41:05.260233] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:29:05.103 [2024-11-02 11:41:05.260248] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:05.103 [2024-11-02 11:41:05.260261] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:05.103 [2024-11-02 11:41:05.260269] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc4c00) on tqpair=0xd58d80 00:29:05.103 [2024-11-02 11:41:05.260286] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.103 [2024-11-02 11:41:05.260295] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd58d80) 00:29:05.103 [2024-11-02 11:41:05.260305] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.103 [2024-11-02 11:41:05.260327] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc4c00, cid 5, qid 0 00:29:05.103 [2024-11-02 11:41:05.260434] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:05.103 [2024-11-02 11:41:05.260446] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:05.103 [2024-11-02 11:41:05.260452] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:05.103 [2024-11-02 11:41:05.260459] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc4c00) on tqpair=0xd58d80 00:29:05.103 [2024-11-02 11:41:05.260482] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.103 [2024-11-02 11:41:05.260493] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd58d80) 00:29:05.103 [2024-11-02 11:41:05.260504] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.103 [2024-11-02 11:41:05.260516] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.103 [2024-11-02 11:41:05.260524] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd58d80) 00:29:05.103 [2024-11-02 11:41:05.260533] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.103 [2024-11-02 11:41:05.260544] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.103 [2024-11-02 11:41:05.260552] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xd58d80) 00:29:05.103 [2024-11-02 11:41:05.260560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.103 [2024-11-02 11:41:05.260576] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.103 [2024-11-02 11:41:05.260584] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xd58d80) 00:29:05.103 [2024-11-02 11:41:05.260593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.103 [2024-11-02 11:41:05.260616] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc4c00, cid 5, qid 0 00:29:05.104 [2024-11-02 11:41:05.260630] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc4a80, cid 4, qid 0 00:29:05.104 [2024-11-02 11:41:05.260639] 
nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc4d80, cid 6, qid 0 00:29:05.104 [2024-11-02 11:41:05.260646] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc4f00, cid 7, qid 0 00:29:05.104 [2024-11-02 11:41:05.260868] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:05.104 [2024-11-02 11:41:05.260883] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:05.104 [2024-11-02 11:41:05.260890] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:05.104 [2024-11-02 11:41:05.260896] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd58d80): datao=0, datal=8192, cccid=5 00:29:05.104 [2024-11-02 11:41:05.260903] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdc4c00) on tqpair(0xd58d80): expected_datao=0, payload_size=8192 00:29:05.104 [2024-11-02 11:41:05.260911] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:05.104 [2024-11-02 11:41:05.261001] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:05.104 [2024-11-02 11:41:05.261013] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:05.104 [2024-11-02 11:41:05.261021] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:05.104 [2024-11-02 11:41:05.261030] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:05.104 [2024-11-02 11:41:05.261036] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:05.104 [2024-11-02 11:41:05.261043] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd58d80): datao=0, datal=512, cccid=4 00:29:05.104 [2024-11-02 11:41:05.261050] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdc4a80) on tqpair(0xd58d80): expected_datao=0, payload_size=512 00:29:05.104 [2024-11-02 11:41:05.261057] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:05.104 [2024-11-02 11:41:05.261066] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:05.104 [2024-11-02 11:41:05.261073] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:05.104 [2024-11-02 11:41:05.261081] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:05.104 [2024-11-02 11:41:05.261090] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:05.104 [2024-11-02 11:41:05.261096] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:05.104 [2024-11-02 11:41:05.261102] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd58d80): datao=0, datal=512, cccid=6 00:29:05.104 [2024-11-02 11:41:05.261109] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdc4d80) on tqpair(0xd58d80): expected_datao=0, payload_size=512 00:29:05.104 [2024-11-02 11:41:05.261116] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:05.104 [2024-11-02 11:41:05.261125] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:05.104 [2024-11-02 11:41:05.261132] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:05.104 [2024-11-02 11:41:05.261140] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:05.104 [2024-11-02 11:41:05.261148] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:05.104 [2024-11-02 11:41:05.261154] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:05.104 [2024-11-02 11:41:05.261160] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0xd58d80): datao=0, datal=4096, cccid=7 00:29:05.104 [2024-11-02 11:41:05.261167] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdc4f00) on tqpair(0xd58d80): expected_datao=0, payload_size=4096 00:29:05.104 [2024-11-02 11:41:05.261174] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:05.104 [2024-11-02 11:41:05.261183] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:05.104 [2024-11-02 11:41:05.261191] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:05.104 [2024-11-02 11:41:05.261202] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:05.104 [2024-11-02 11:41:05.261211] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:05.104 [2024-11-02 11:41:05.261217] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:05.104 [2024-11-02 11:41:05.261230] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc4c00) on tqpair=0xd58d80 00:29:05.104 [2024-11-02 11:41:05.261249] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:05.104 [2024-11-02 11:41:05.265271] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:05.104 [2024-11-02 11:41:05.265281] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:05.104 [2024-11-02 11:41:05.265288] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc4a80) on tqpair=0xd58d80 00:29:05.104 [2024-11-02 11:41:05.265304] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:05.104 [2024-11-02 11:41:05.265315] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:05.104 [2024-11-02 11:41:05.265321] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:05.104 [2024-11-02 11:41:05.265327] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc4d80) on tqpair=0xd58d80 00:29:05.104 [2024-11-02 11:41:05.265337] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:05.104 [2024-11-02 11:41:05.265346] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:05.104 [2024-11-02 11:41:05.265352] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:05.104 [2024-11-02 11:41:05.265359] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc4f00) on tqpair=0xd58d80 00:29:05.104 ===================================================== 00:29:05.104 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:05.104 ===================================================== 00:29:05.104 Controller Capabilities/Features 00:29:05.104 ================================ 00:29:05.104 Vendor ID: 8086 00:29:05.104 Subsystem Vendor ID: 8086 00:29:05.104 Serial Number: SPDK00000000000001 00:29:05.104 Model Number: SPDK bdev Controller 00:29:05.104 Firmware Version: 25.01 00:29:05.104 Recommended Arb Burst: 6 00:29:05.104 IEEE OUI Identifier: e4 d2 5c 00:29:05.104 Multi-path I/O 00:29:05.104 May have multiple subsystem ports: Yes 00:29:05.104 May have multiple controllers: Yes 00:29:05.104 Associated with SR-IOV VF: No 00:29:05.104 Max Data Transfer Size: 131072 00:29:05.104 Max Number of Namespaces: 32 00:29:05.104 Max Number of I/O Queues: 127 00:29:05.104 NVMe Specification Version (VS): 1.3 00:29:05.104 NVMe Specification Version (Identify): 1.3 00:29:05.104 Maximum Queue Entries: 128 00:29:05.104 Contiguous Queues Required: Yes 00:29:05.104 Arbitration Mechanisms Supported 00:29:05.104 Weighted Round Robin: Not Supported 
00:29:05.104 Vendor Specific: Not Supported 00:29:05.104 Reset Timeout: 15000 ms 00:29:05.104 Doorbell Stride: 4 bytes 00:29:05.104 NVM Subsystem Reset: Not Supported 00:29:05.104 Command Sets Supported 00:29:05.104 NVM Command Set: Supported 00:29:05.104 Boot Partition: Not Supported 00:29:05.104 Memory Page Size Minimum: 4096 bytes 00:29:05.104 Memory Page Size Maximum: 4096 bytes 00:29:05.104 Persistent Memory Region: Not Supported 00:29:05.104 Optional Asynchronous Events Supported 00:29:05.104 Namespace Attribute Notices: Supported 00:29:05.104 Firmware Activation Notices: Not Supported 00:29:05.104 ANA Change Notices: Not Supported 00:29:05.104 PLE Aggregate Log Change Notices: Not Supported 00:29:05.104 LBA Status Info Alert Notices: Not Supported 00:29:05.104 EGE Aggregate Log Change Notices: Not Supported 00:29:05.104 Normal NVM Subsystem Shutdown event: Not Supported 00:29:05.104 Zone Descriptor Change Notices: Not Supported 00:29:05.104 Discovery Log Change Notices: Not Supported 00:29:05.104 Controller Attributes 00:29:05.104 128-bit Host Identifier: Supported 00:29:05.104 Non-Operational Permissive Mode: Not Supported 00:29:05.104 NVM Sets: Not Supported 00:29:05.104 Read Recovery Levels: Not Supported 00:29:05.104 Endurance Groups: Not Supported 00:29:05.104 Predictable Latency Mode: Not Supported 00:29:05.104 Traffic Based Keep ALive: Not Supported 00:29:05.104 Namespace Granularity: Not Supported 00:29:05.104 SQ Associations: Not Supported 00:29:05.104 UUID List: Not Supported 00:29:05.104 Multi-Domain Subsystem: Not Supported 00:29:05.104 Fixed Capacity Management: Not Supported 00:29:05.104 Variable Capacity Management: Not Supported 00:29:05.104 Delete Endurance Group: Not Supported 00:29:05.104 Delete NVM Set: Not Supported 00:29:05.104 Extended LBA Formats Supported: Not Supported 00:29:05.104 Flexible Data Placement Supported: Not Supported 00:29:05.104 00:29:05.104 Controller Memory Buffer Support 00:29:05.104 ================================ 00:29:05.104 Supported: No 00:29:05.104 00:29:05.104 Persistent Memory Region Support 00:29:05.104 ================================ 00:29:05.104 Supported: No 00:29:05.104 00:29:05.104 Admin Command Set Attributes 00:29:05.104 ============================ 00:29:05.104 Security Send/Receive: Not Supported 00:29:05.104 Format NVM: Not Supported 00:29:05.104 Firmware Activate/Download: Not Supported 00:29:05.104 Namespace Management: Not Supported 00:29:05.104 Device Self-Test: Not Supported 00:29:05.104 Directives: Not Supported 00:29:05.104 NVMe-MI: Not Supported 00:29:05.104 Virtualization Management: Not Supported 00:29:05.104 Doorbell Buffer Config: Not Supported 00:29:05.104 Get LBA Status Capability: Not Supported 00:29:05.104 Command & Feature Lockdown Capability: Not Supported 00:29:05.104 Abort Command Limit: 4 00:29:05.104 Async Event Request Limit: 4 00:29:05.104 Number of Firmware Slots: N/A 00:29:05.104 Firmware Slot 1 Read-Only: N/A 00:29:05.104 Firmware Activation Without Reset: N/A 00:29:05.104 Multiple Update Detection Support: N/A 00:29:05.104 Firmware Update Granularity: No Information Provided 00:29:05.104 Per-Namespace SMART Log: No 00:29:05.104 Asymmetric Namespace Access Log Page: Not Supported 00:29:05.104 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:29:05.104 Command Effects Log Page: Supported 00:29:05.104 Get Log Page Extended Data: Supported 00:29:05.104 Telemetry Log Pages: Not Supported 00:29:05.104 Persistent Event Log Pages: Not Supported 00:29:05.104 Supported Log Pages Log Page: May Support 
00:29:05.104 Commands Supported & Effects Log Page: Not Supported 00:29:05.104 Feature Identifiers & Effects Log Page:May Support 00:29:05.104 NVMe-MI Commands & Effects Log Page: May Support 00:29:05.104 Data Area 4 for Telemetry Log: Not Supported 00:29:05.105 Error Log Page Entries Supported: 128 00:29:05.105 Keep Alive: Supported 00:29:05.105 Keep Alive Granularity: 10000 ms 00:29:05.105 00:29:05.105 NVM Command Set Attributes 00:29:05.105 ========================== 00:29:05.105 Submission Queue Entry Size 00:29:05.105 Max: 64 00:29:05.105 Min: 64 00:29:05.105 Completion Queue Entry Size 00:29:05.105 Max: 16 00:29:05.105 Min: 16 00:29:05.105 Number of Namespaces: 32 00:29:05.105 Compare Command: Supported 00:29:05.105 Write Uncorrectable Command: Not Supported 00:29:05.105 Dataset Management Command: Supported 00:29:05.105 Write Zeroes Command: Supported 00:29:05.105 Set Features Save Field: Not Supported 00:29:05.105 Reservations: Supported 00:29:05.105 Timestamp: Not Supported 00:29:05.105 Copy: Supported 00:29:05.105 Volatile Write Cache: Present 00:29:05.105 Atomic Write Unit (Normal): 1 00:29:05.105 Atomic Write Unit (PFail): 1 00:29:05.105 Atomic Compare & Write Unit: 1 00:29:05.105 Fused Compare & Write: Supported 00:29:05.105 Scatter-Gather List 00:29:05.105 SGL Command Set: Supported 00:29:05.105 SGL Keyed: Supported 00:29:05.105 SGL Bit Bucket Descriptor: Not Supported 00:29:05.105 SGL Metadata Pointer: Not Supported 00:29:05.105 Oversized SGL: Not Supported 00:29:05.105 SGL Metadata Address: Not Supported 00:29:05.105 SGL Offset: Supported 00:29:05.105 Transport SGL Data Block: Not Supported 00:29:05.105 Replay Protected Memory Block: Not Supported 00:29:05.105 00:29:05.105 Firmware Slot Information 00:29:05.105 ========================= 00:29:05.105 Active slot: 1 00:29:05.105 Slot 1 Firmware Revision: 25.01 00:29:05.105 00:29:05.105 00:29:05.105 Commands Supported and Effects 00:29:05.105 ============================== 00:29:05.105 Admin Commands 00:29:05.105 -------------- 00:29:05.105 Get Log Page (02h): Supported 00:29:05.105 Identify (06h): Supported 00:29:05.105 Abort (08h): Supported 00:29:05.105 Set Features (09h): Supported 00:29:05.105 Get Features (0Ah): Supported 00:29:05.105 Asynchronous Event Request (0Ch): Supported 00:29:05.105 Keep Alive (18h): Supported 00:29:05.105 I/O Commands 00:29:05.105 ------------ 00:29:05.105 Flush (00h): Supported LBA-Change 00:29:05.105 Write (01h): Supported LBA-Change 00:29:05.105 Read (02h): Supported 00:29:05.105 Compare (05h): Supported 00:29:05.105 Write Zeroes (08h): Supported LBA-Change 00:29:05.105 Dataset Management (09h): Supported LBA-Change 00:29:05.105 Copy (19h): Supported LBA-Change 00:29:05.105 00:29:05.105 Error Log 00:29:05.105 ========= 00:29:05.105 00:29:05.105 Arbitration 00:29:05.105 =========== 00:29:05.105 Arbitration Burst: 1 00:29:05.105 00:29:05.105 Power Management 00:29:05.105 ================ 00:29:05.105 Number of Power States: 1 00:29:05.105 Current Power State: Power State #0 00:29:05.105 Power State #0: 00:29:05.105 Max Power: 0.00 W 00:29:05.105 Non-Operational State: Operational 00:29:05.105 Entry Latency: Not Reported 00:29:05.105 Exit Latency: Not Reported 00:29:05.105 Relative Read Throughput: 0 00:29:05.105 Relative Read Latency: 0 00:29:05.105 Relative Write Throughput: 0 00:29:05.105 Relative Write Latency: 0 00:29:05.105 Idle Power: Not Reported 00:29:05.105 Active Power: Not Reported 00:29:05.105 Non-Operational Permissive Mode: Not Supported 00:29:05.105 00:29:05.105 Health 
Information 00:29:05.105 ================== 00:29:05.105 Critical Warnings: 00:29:05.105 Available Spare Space: OK 00:29:05.105 Temperature: OK 00:29:05.105 Device Reliability: OK 00:29:05.105 Read Only: No 00:29:05.105 Volatile Memory Backup: OK 00:29:05.105 Current Temperature: 0 Kelvin (-273 Celsius) 00:29:05.105 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:29:05.105 Available Spare: 0% 00:29:05.105 Available Spare Threshold: 0% 00:29:05.105 Life Percentage Used:[2024-11-02 11:41:05.265486] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.105 [2024-11-02 11:41:05.265498] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xd58d80) 00:29:05.105 [2024-11-02 11:41:05.265510] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.105 [2024-11-02 11:41:05.265534] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc4f00, cid 7, qid 0 00:29:05.105 [2024-11-02 11:41:05.265694] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:05.105 [2024-11-02 11:41:05.265707] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:05.105 [2024-11-02 11:41:05.265714] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:05.105 [2024-11-02 11:41:05.265720] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc4f00) on tqpair=0xd58d80 00:29:05.105 [2024-11-02 11:41:05.265765] nvme_ctrlr.c:4363:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:29:05.105 [2024-11-02 11:41:05.265785] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc4480) on tqpair=0xd58d80 00:29:05.105 [2024-11-02 11:41:05.265796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.105 [2024-11-02 11:41:05.265804] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc4600) on tqpair=0xd58d80 00:29:05.105 [2024-11-02 11:41:05.265812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.105 [2024-11-02 11:41:05.265820] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc4780) on tqpair=0xd58d80 00:29:05.105 [2024-11-02 11:41:05.265827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.105 [2024-11-02 11:41:05.265835] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc4900) on tqpair=0xd58d80 00:29:05.105 [2024-11-02 11:41:05.265842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.105 [2024-11-02 11:41:05.265855] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:05.105 [2024-11-02 11:41:05.265862] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.105 [2024-11-02 11:41:05.265869] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd58d80) 00:29:05.105 [2024-11-02 11:41:05.265879] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.105 [2024-11-02 11:41:05.265902] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc4900, cid 3, qid 0 00:29:05.105 [2024-11-02 
11:41:05.266042] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:05.105 [2024-11-02 11:41:05.266057] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:05.105 [2024-11-02 11:41:05.266064] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:05.105 [2024-11-02 11:41:05.266071] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc4900) on tqpair=0xd58d80 00:29:05.105 [2024-11-02 11:41:05.266082] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:05.105 [2024-11-02 11:41:05.266089] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.105 [2024-11-02 11:41:05.266095] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd58d80) 00:29:05.105 [2024-11-02 11:41:05.266106] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.105 [2024-11-02 11:41:05.266132] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc4900, cid 3, qid 0 00:29:05.105 [2024-11-02 11:41:05.266272] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:05.105 [2024-11-02 11:41:05.266288] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:05.105 [2024-11-02 11:41:05.266294] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:05.105 [2024-11-02 11:41:05.266301] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc4900) on tqpair=0xd58d80 00:29:05.105 [2024-11-02 11:41:05.266309] nvme_ctrlr.c:1124:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:29:05.105 [2024-11-02 11:41:05.266316] nvme_ctrlr.c:1127:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:29:05.105 [2024-11-02 11:41:05.266332] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:05.105 [2024-11-02 11:41:05.266341] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.105 [2024-11-02 11:41:05.266348] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd58d80) 00:29:05.105 [2024-11-02 11:41:05.266358] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.105 [2024-11-02 11:41:05.266379] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc4900, cid 3, qid 0 00:29:05.105 [2024-11-02 11:41:05.266490] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:05.105 [2024-11-02 11:41:05.266502] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:05.105 [2024-11-02 11:41:05.266509] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:05.105 [2024-11-02 11:41:05.266516] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc4900) on tqpair=0xd58d80 00:29:05.105 [2024-11-02 11:41:05.266531] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:05.105 [2024-11-02 11:41:05.266541] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.105 [2024-11-02 11:41:05.266547] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd58d80) 00:29:05.105 [2024-11-02 11:41:05.266557] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.105 [2024-11-02 11:41:05.266578] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc4900, cid 3, qid 0 00:29:05.105 [2024-11-02 11:41:05.266688] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:05.105 [2024-11-02 11:41:05.266699] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:05.105 [2024-11-02 11:41:05.266706] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:05.105 [2024-11-02 11:41:05.266713] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc4900) on tqpair=0xd58d80 00:29:05.105 [2024-11-02 11:41:05.266728] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:05.105 [2024-11-02 11:41:05.266737] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.106 [2024-11-02 11:41:05.266743] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd58d80) 00:29:05.106 [2024-11-02 11:41:05.266754] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.106 [2024-11-02 11:41:05.266779] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc4900, cid 3, qid 0 00:29:05.106 [2024-11-02 11:41:05.266889] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:05.106 [2024-11-02 11:41:05.266901] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:05.106 [2024-11-02 11:41:05.266907] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:05.106 [2024-11-02 11:41:05.266914] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc4900) on tqpair=0xd58d80 00:29:05.106 [2024-11-02 11:41:05.266929] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:05.106 [2024-11-02 11:41:05.266938] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.106 [2024-11-02 11:41:05.266945] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd58d80) 00:29:05.106 [2024-11-02 11:41:05.266955] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.106 [2024-11-02 11:41:05.266975] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc4900, cid 3, qid 0 00:29:05.106 [2024-11-02 11:41:05.267087] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:05.106 [2024-11-02 11:41:05.267101] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:05.106 [2024-11-02 11:41:05.267108] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:05.106 [2024-11-02 11:41:05.267115] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc4900) on tqpair=0xd58d80 00:29:05.106 [2024-11-02 11:41:05.267131] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:05.106 [2024-11-02 11:41:05.267140] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.106 [2024-11-02 11:41:05.267146] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd58d80) 00:29:05.106 [2024-11-02 11:41:05.267157] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.106 [2024-11-02 11:41:05.267177] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc4900, cid 3, qid 0 00:29:05.106 [2024-11-02 11:41:05.267295] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:05.106 [2024-11-02 
11:41:05.267310] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:05.106 [2024-11-02 11:41:05.267317] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:05.106 [2024-11-02 11:41:05.267324] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc4900) on tqpair=0xd58d80 00:29:05.106 [2024-11-02 11:41:05.267341] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:05.106 [2024-11-02 11:41:05.267350] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.106 [2024-11-02 11:41:05.267356] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd58d80) 00:29:05.106 [2024-11-02 11:41:05.267367] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.106 [2024-11-02 11:41:05.267388] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc4900, cid 3, qid 0 00:29:05.106 [2024-11-02 11:41:05.267492] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:05.106 [2024-11-02 11:41:05.267504] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:05.106 [2024-11-02 11:41:05.267510] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:05.106 [2024-11-02 11:41:05.267517] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc4900) on tqpair=0xd58d80 00:29:05.106 [2024-11-02 11:41:05.267533] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:05.106 [2024-11-02 11:41:05.267542] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.106 [2024-11-02 11:41:05.267548] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd58d80) 00:29:05.106 [2024-11-02 11:41:05.267558] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.106 [2024-11-02 11:41:05.267583] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc4900, cid 3, qid 0 00:29:05.106 [2024-11-02 11:41:05.267685] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:05.106 [2024-11-02 11:41:05.267697] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:05.106 [2024-11-02 11:41:05.267703] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:05.106 [2024-11-02 11:41:05.267710] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc4900) on tqpair=0xd58d80 00:29:05.106 [2024-11-02 11:41:05.267725] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:05.106 [2024-11-02 11:41:05.267734] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.106 [2024-11-02 11:41:05.267740] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd58d80) 00:29:05.106 [2024-11-02 11:41:05.267751] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.106 [2024-11-02 11:41:05.267771] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc4900, cid 3, qid 0 00:29:05.106 [2024-11-02 11:41:05.267879] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:05.106 [2024-11-02 11:41:05.267894] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:05.106 [2024-11-02 11:41:05.267901] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:05.106 [2024-11-02 
11:41:05.267908] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc4900) on tqpair=0xd58d80 00:29:05.106 [2024-11-02 11:41:05.267924] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:05.106 [2024-11-02 11:41:05.267933] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.106 [2024-11-02 11:41:05.267940] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd58d80) 00:29:05.106 [2024-11-02 11:41:05.267950] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.106 [2024-11-02 11:41:05.267971] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc4900, cid 3, qid 0 00:29:05.106 [2024-11-02 11:41:05.268082] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:05.106 [2024-11-02 11:41:05.268097] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:05.106 [2024-11-02 11:41:05.268104] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:05.106 [2024-11-02 11:41:05.268111] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc4900) on tqpair=0xd58d80 00:29:05.106 [2024-11-02 11:41:05.268126] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:05.106 [2024-11-02 11:41:05.268136] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.106 [2024-11-02 11:41:05.268142] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd58d80) 00:29:05.106 [2024-11-02 11:41:05.268152] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.106 [2024-11-02 11:41:05.268173] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc4900, cid 3, qid 0 00:29:05.106 [2024-11-02 11:41:05.268279] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:05.106 [2024-11-02 11:41:05.268292] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:05.106 [2024-11-02 11:41:05.268299] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:05.106 [2024-11-02 11:41:05.268306] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc4900) on tqpair=0xd58d80 00:29:05.106 [2024-11-02 11:41:05.268322] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:05.106 [2024-11-02 11:41:05.268331] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.106 [2024-11-02 11:41:05.268337] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd58d80) 00:29:05.106 [2024-11-02 11:41:05.268348] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.106 [2024-11-02 11:41:05.268369] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc4900, cid 3, qid 0 00:29:05.106 [2024-11-02 11:41:05.268477] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:05.106 [2024-11-02 11:41:05.268492] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:05.106 [2024-11-02 11:41:05.268499] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:05.106 [2024-11-02 11:41:05.268506] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc4900) on tqpair=0xd58d80 00:29:05.106 [2024-11-02 11:41:05.268522] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 
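Note: from "Prepare to destruct SSD" onward the records show the shutdown path: the shutdown notification with a 10000 ms timeout, then repeated FABRIC PROPERTY GET capsules on cid:3 while the driver polls CSTS until the target reports shutdown complete. The hedged sketch below shows the host-side call that produces this pattern; it uses SPDK's asynchronous detach API so the polling is visible in the code, and the file name and error handling are assumptions, not part of this run.

```c
/* detach_sketch.c - hedged sketch; illustrates the host-side teardown that
 * produces the shutdown sequence above: write the shutdown notification,
 * then poll CSTS until the target reports "shutdown complete".
 */
#include <errno.h>
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr *ctrlr;
	struct spdk_nvme_detach_ctx *ctx = NULL;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "detach_sketch";
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	spdk_nvme_trid_populate_transport(&trid, SPDK_NVME_TRANSPORT_TCP);
	trid.adrfam = SPDK_NVMF_ADRFAM_IPV4;
	snprintf(trid.traddr, sizeof(trid.traddr), "10.0.0.2");
	snprintf(trid.trsvcid, sizeof(trid.trsvcid), "4420");
	snprintf(trid.subnqn, sizeof(trid.subnqn), "nqn.2016-06.io.spdk:cnode1");

	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	/* Queue a non-blocking detach; the driver issues the shutdown
	 * notification and starts polling CSTS, which shows up on the wire as
	 * the repeated FABRIC PROPERTY GET capsules in the records above. */
	if (spdk_nvme_detach_async(ctrlr, &ctx) != 0) {
		return 1;
	}
	while (spdk_nvme_detach_poll_async(ctx) == -EAGAIN) {
		/* spin until shutdown completes ("shutdown complete in N ms") */
	}

	printf("controller detached\n");
	return 0;
}
```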
00:29:05.106 [2024-11-02 11:41:05.268531] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.106 [2024-11-02 11:41:05.268537] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd58d80) 00:29:05.106 [2024-11-02 11:41:05.268548] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.106 [2024-11-02 11:41:05.268569] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc4900, cid 3, qid 0 00:29:05.106 [2024-11-02 11:41:05.268671] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:05.106 [2024-11-02 11:41:05.268683] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:05.106 [2024-11-02 11:41:05.268690] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:05.106 [2024-11-02 11:41:05.268696] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc4900) on tqpair=0xd58d80 00:29:05.106 [2024-11-02 11:41:05.268712] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:05.106 [2024-11-02 11:41:05.268721] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.106 [2024-11-02 11:41:05.268727] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd58d80) 00:29:05.106 [2024-11-02 11:41:05.268737] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.106 [2024-11-02 11:41:05.268757] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc4900, cid 3, qid 0 00:29:05.106 [2024-11-02 11:41:05.268863] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:05.106 [2024-11-02 11:41:05.268875] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:05.106 [2024-11-02 11:41:05.268882] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:05.106 [2024-11-02 11:41:05.268889] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc4900) on tqpair=0xd58d80 00:29:05.106 [2024-11-02 11:41:05.268904] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:05.106 [2024-11-02 11:41:05.268913] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.106 [2024-11-02 11:41:05.268920] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd58d80) 00:29:05.106 [2024-11-02 11:41:05.268930] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.106 [2024-11-02 11:41:05.268950] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc4900, cid 3, qid 0 00:29:05.106 [2024-11-02 11:41:05.269057] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:05.106 [2024-11-02 11:41:05.269071] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:05.106 [2024-11-02 11:41:05.269078] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:05.106 [2024-11-02 11:41:05.269085] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc4900) on tqpair=0xd58d80 00:29:05.106 [2024-11-02 11:41:05.269101] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:05.106 [2024-11-02 11:41:05.269111] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.107 [2024-11-02 11:41:05.269117] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=3 on tqpair(0xd58d80) 00:29:05.107 [2024-11-02 11:41:05.269128] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.107 [2024-11-02 11:41:05.269149] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc4900, cid 3, qid 0 00:29:05.107 [2024-11-02 11:41:05.273263] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:05.107 [2024-11-02 11:41:05.273285] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:05.107 [2024-11-02 11:41:05.273293] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:05.107 [2024-11-02 11:41:05.273300] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc4900) on tqpair=0xd58d80 00:29:05.107 [2024-11-02 11:41:05.273318] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:05.107 [2024-11-02 11:41:05.273328] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:05.107 [2024-11-02 11:41:05.273334] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd58d80) 00:29:05.107 [2024-11-02 11:41:05.273345] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.107 [2024-11-02 11:41:05.273367] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc4900, cid 3, qid 0 00:29:05.107 [2024-11-02 11:41:05.273508] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:05.107 [2024-11-02 11:41:05.273520] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:05.107 [2024-11-02 11:41:05.273527] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:05.107 [2024-11-02 11:41:05.273534] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc4900) on tqpair=0xd58d80 00:29:05.107 [2024-11-02 11:41:05.273546] nvme_ctrlr.c:1246:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 7 milliseconds 00:29:05.107 0% 00:29:05.107 Data Units Read: 0 00:29:05.107 Data Units Written: 0 00:29:05.107 Host Read Commands: 0 00:29:05.107 Host Write Commands: 0 00:29:05.107 Controller Busy Time: 0 minutes 00:29:05.107 Power Cycles: 0 00:29:05.107 Power On Hours: 0 hours 00:29:05.107 Unsafe Shutdowns: 0 00:29:05.107 Unrecoverable Media Errors: 0 00:29:05.107 Lifetime Error Log Entries: 0 00:29:05.107 Warning Temperature Time: 0 minutes 00:29:05.107 Critical Temperature Time: 0 minutes 00:29:05.107 00:29:05.107 Number of Queues 00:29:05.107 ================ 00:29:05.107 Number of I/O Submission Queues: 127 00:29:05.107 Number of I/O Completion Queues: 127 00:29:05.107 00:29:05.107 Active Namespaces 00:29:05.107 ================= 00:29:05.107 Namespace ID:1 00:29:05.107 Error Recovery Timeout: Unlimited 00:29:05.107 Command Set Identifier: NVM (00h) 00:29:05.107 Deallocate: Supported 00:29:05.107 Deallocated/Unwritten Error: Not Supported 00:29:05.107 Deallocated Read Value: Unknown 00:29:05.107 Deallocate in Write Zeroes: Not Supported 00:29:05.107 Deallocated Guard Field: 0xFFFF 00:29:05.107 Flush: Supported 00:29:05.107 Reservation: Supported 00:29:05.107 Namespace Sharing Capabilities: Multiple Controllers 00:29:05.107 Size (in LBAs): 131072 (0GiB) 00:29:05.107 Capacity (in LBAs): 131072 (0GiB) 00:29:05.107 Utilization (in LBAs): 131072 (0GiB) 00:29:05.107 NGUID: ABCDEF0123456789ABCDEF0123456789 00:29:05.107 EUI64: ABCDEF0123456789 00:29:05.107 UUID: 
b98313d0-12fd-4058-8cc6-4d474fcf5474 00:29:05.107 Thin Provisioning: Not Supported 00:29:05.107 Per-NS Atomic Units: Yes 00:29:05.107 Atomic Boundary Size (Normal): 0 00:29:05.107 Atomic Boundary Size (PFail): 0 00:29:05.107 Atomic Boundary Offset: 0 00:29:05.107 Maximum Single Source Range Length: 65535 00:29:05.107 Maximum Copy Length: 65535 00:29:05.107 Maximum Source Range Count: 1 00:29:05.107 NGUID/EUI64 Never Reused: No 00:29:05.107 Namespace Write Protected: No 00:29:05.107 Number of LBA Formats: 1 00:29:05.107 Current LBA Format: LBA Format #00 00:29:05.107 LBA Format #00: Data Size: 512 Metadata Size: 0 00:29:05.107 00:29:05.107 11:41:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:29:05.107 11:41:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:05.107 11:41:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.107 11:41:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:05.107 11:41:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.107 11:41:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:29:05.107 11:41:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:29:05.107 11:41:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:05.107 11:41:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:29:05.107 11:41:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:05.107 11:41:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:29:05.107 11:41:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:05.107 11:41:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:05.107 rmmod nvme_tcp 00:29:05.107 rmmod nvme_fabrics 00:29:05.107 rmmod nvme_keyring 00:29:05.107 11:41:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:05.107 11:41:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:29:05.107 11:41:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:29:05.107 11:41:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 3917734 ']' 00:29:05.107 11:41:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 3917734 00:29:05.107 11:41:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@952 -- # '[' -z 3917734 ']' 00:29:05.107 11:41:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # kill -0 3917734 00:29:05.107 11:41:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # uname 00:29:05.107 11:41:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:05.107 11:41:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3917734 00:29:05.107 11:41:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:29:05.107 11:41:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:05.107 11:41:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3917734' 00:29:05.107 killing process with pid 3917734 00:29:05.107 11:41:05 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@971 -- # kill 3917734 00:29:05.107 11:41:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@976 -- # wait 3917734 00:29:05.367 11:41:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:05.367 11:41:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:05.367 11:41:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:05.367 11:41:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:29:05.367 11:41:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:29:05.367 11:41:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:05.367 11:41:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:29:05.367 11:41:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:05.367 11:41:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:05.367 11:41:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:05.367 11:41:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:05.367 11:41:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:07.274 11:41:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:07.533 00:29:07.533 real 0m5.852s 00:29:07.533 user 0m5.641s 00:29:07.533 sys 0m2.059s 00:29:07.533 11:41:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:07.533 11:41:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:07.533 ************************************ 00:29:07.533 END TEST nvmf_identify 00:29:07.533 ************************************ 00:29:07.533 11:41:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:07.533 11:41:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:29:07.533 11:41:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:07.533 11:41:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.533 ************************************ 00:29:07.533 START TEST nvmf_perf 00:29:07.533 ************************************ 00:29:07.533 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:07.533 * Looking for test storage... 
00:29:07.533 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:07.533 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:07.533 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lcov --version 00:29:07.533 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:07.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.534 --rc genhtml_branch_coverage=1 00:29:07.534 --rc genhtml_function_coverage=1 00:29:07.534 --rc genhtml_legend=1 00:29:07.534 --rc geninfo_all_blocks=1 00:29:07.534 --rc geninfo_unexecuted_blocks=1 00:29:07.534 00:29:07.534 ' 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:07.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.534 --rc genhtml_branch_coverage=1 00:29:07.534 --rc genhtml_function_coverage=1 00:29:07.534 --rc genhtml_legend=1 00:29:07.534 --rc geninfo_all_blocks=1 00:29:07.534 --rc geninfo_unexecuted_blocks=1 00:29:07.534 00:29:07.534 ' 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:07.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.534 --rc genhtml_branch_coverage=1 00:29:07.534 --rc genhtml_function_coverage=1 00:29:07.534 --rc genhtml_legend=1 00:29:07.534 --rc geninfo_all_blocks=1 00:29:07.534 --rc geninfo_unexecuted_blocks=1 00:29:07.534 00:29:07.534 ' 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:07.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.534 --rc genhtml_branch_coverage=1 00:29:07.534 --rc genhtml_function_coverage=1 00:29:07.534 --rc genhtml_legend=1 00:29:07.534 --rc geninfo_all_blocks=1 00:29:07.534 --rc geninfo_unexecuted_blocks=1 00:29:07.534 00:29:07.534 ' 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:07.534 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:07.534 11:41:07 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:07.534 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:07.535 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:07.535 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:29:07.535 11:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:10.066 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:10.066 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:29:10.066 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:10.066 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:10.066 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:10.066 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:10.066 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:10.066 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:29:10.066 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:10.066 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:29:10.066 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:29:10.066 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:29:10.066 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:29:10.066 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:29:10.066 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:29:10.066 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:10.066 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:10.066 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:10.066 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:10.066 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:10.066 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:10.066 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:10.066 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:10.066 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:10.066 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:10.066 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:10.066 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:10.066 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:29:10.066 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:10.066 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:10.066 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:10.066 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:10.066 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:10.066 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:10.066 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:10.066 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:10.066 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:10.066 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:10.066 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:10.066 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:10.066 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:10.066 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:10.066 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:10.066 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:10.066 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:10.066 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:10.066 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:10.066 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:10.066 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:10.066 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:10.066 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:10.066 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:10.066 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:10.066 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:10.066 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:10.067 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:10.067 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:10.067 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:10.067 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:10.067 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:10.067 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:10.067 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:10.067 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:10.067 11:41:09 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:10.067 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:10.067 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:10.067 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:10.067 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:10.067 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:10.067 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:10.067 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:10.067 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:10.067 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:10.067 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:29:10.067 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:10.067 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:10.067 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:10.067 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:10.067 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:10.067 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:10.067 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:10.067 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:10.067 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:10.067 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:10.067 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:10.067 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:10.067 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:10.067 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:10.067 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:10.067 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:10.067 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:10.067 11:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:10.067 11:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:10.067 11:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:10.067 11:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:10.067 11:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:10.067 11:41:10 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:10.067 11:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:10.067 11:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:10.067 11:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:10.067 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:10.067 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:29:10.067 00:29:10.067 --- 10.0.0.2 ping statistics --- 00:29:10.067 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:10.067 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:29:10.067 11:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:10.067 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:10.067 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:29:10.067 00:29:10.067 --- 10.0.0.1 ping statistics --- 00:29:10.067 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:10.067 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:29:10.067 11:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:10.067 11:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:29:10.067 11:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:10.067 11:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:10.067 11:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:10.067 11:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:10.067 11:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:10.067 11:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:10.067 11:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:10.067 11:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:29:10.067 11:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:10.067 11:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:10.067 11:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:10.067 11:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=3919832 00:29:10.067 11:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:10.067 11:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 3919832 00:29:10.067 11:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@833 -- # '[' -z 3919832 ']' 00:29:10.067 11:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:10.067 11:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:10.067 11:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:29:10.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:10.067 11:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:10.067 11:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:10.067 [2024-11-02 11:41:10.236091] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:29:10.067 [2024-11-02 11:41:10.236180] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:10.067 [2024-11-02 11:41:10.312229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:10.067 [2024-11-02 11:41:10.360820] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:10.067 [2024-11-02 11:41:10.360873] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:10.067 [2024-11-02 11:41:10.360896] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:10.067 [2024-11-02 11:41:10.360907] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:10.067 [2024-11-02 11:41:10.360916] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:10.067 [2024-11-02 11:41:10.362519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:10.067 [2024-11-02 11:41:10.362578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:10.067 [2024-11-02 11:41:10.362645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:10.067 [2024-11-02 11:41:10.362648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:10.325 11:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:10.325 11:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@866 -- # return 0 00:29:10.325 11:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:10.325 11:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:10.325 11:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:10.325 11:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:10.325 11:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:10.325 11:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:29:13.615 11:41:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:29:13.615 11:41:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:29:13.615 11:41:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:29:13.615 11:41:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:13.873 11:41:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
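(Annotation, not part of the captured output.) For readers reproducing this step by hand: the target bring-up that perf.sh drives through rpc.py in this run, visible in the surrounding trace, amounts to roughly the sketch below. The rpc.py path, PCI address 0000:88:00.0, listen address 10.0.0.2:4420 and NQN are the values from this log; the bdev_nvme_attach_controller line is an assumed stand-in for the Nvme0 bdev that gen_nvme.sh/load_subsystem_config set up above, and a running nvmf_tgt (here launched inside the cvl_0_0_ns_spdk namespace) is assumed.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Back-end bdevs: a 64 MiB / 512 B malloc bdev plus the local NVMe drive
$rpc bdev_malloc_create 64 512                                      # -> Malloc0
$rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0   # -> Nvme0n1 (assumed equivalent of the generated config)

# NVMe-oF/TCP target: transport, subsystem, namespaces, data and discovery listeners
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# Initiator side: one of the perf invocations exercised later in this run
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
    -q 1 -o 4096 -w randrw -M 50 -t 1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'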
00:29:13.873 11:41:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:29:13.873 11:41:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:29:13.873 11:41:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:29:13.873 11:41:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:29:14.131 [2024-11-02 11:41:14.451535] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:14.131 11:41:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:14.389 11:41:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:29:14.389 11:41:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:14.647 11:41:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:29:14.647 11:41:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:29:14.906 11:41:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:15.166 [2024-11-02 11:41:15.547557] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:15.166 11:41:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:15.733 11:41:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:29:15.733 11:41:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:29:15.733 11:41:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:29:15.733 11:41:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:29:16.670 Initializing NVMe Controllers 00:29:16.670 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:29:16.670 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:29:16.670 Initialization complete. Launching workers. 
00:29:16.670 ======================================================== 00:29:16.670 Latency(us) 00:29:16.670 Device Information : IOPS MiB/s Average min max 00:29:16.670 PCIE (0000:88:00.0) NSID 1 from core 0: 86452.54 337.71 369.62 36.80 4323.19 00:29:16.670 ======================================================== 00:29:16.670 Total : 86452.54 337.71 369.62 36.80 4323.19 00:29:16.670 00:29:16.670 11:41:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:18.575 Initializing NVMe Controllers 00:29:18.575 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:18.575 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:18.575 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:18.575 Initialization complete. Launching workers. 00:29:18.575 ======================================================== 00:29:18.575 Latency(us) 00:29:18.575 Device Information : IOPS MiB/s Average min max 00:29:18.575 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 103.63 0.40 9649.30 176.53 46078.37 00:29:18.575 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 56.80 0.22 17744.76 6983.61 47905.58 00:29:18.575 ======================================================== 00:29:18.575 Total : 160.43 0.63 12515.40 176.53 47905.58 00:29:18.575 00:29:18.575 11:41:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:19.513 Initializing NVMe Controllers 00:29:19.513 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:19.513 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:19.513 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:19.513 Initialization complete. Launching workers. 00:29:19.513 ======================================================== 00:29:19.513 Latency(us) 00:29:19.513 Device Information : IOPS MiB/s Average min max 00:29:19.513 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8181.30 31.96 3911.57 445.78 9357.74 00:29:19.513 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3912.02 15.28 8203.17 5179.15 15991.37 00:29:19.513 ======================================================== 00:29:19.513 Total : 12093.32 47.24 5299.84 445.78 15991.37 00:29:19.513 00:29:19.770 11:41:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:29:19.770 11:41:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:29:19.771 11:41:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:22.309 Initializing NVMe Controllers 00:29:22.309 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:22.309 Controller IO queue size 128, less than required. 00:29:22.309 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:29:22.309 Controller IO queue size 128, less than required. 00:29:22.309 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:22.309 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:22.309 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:22.309 Initialization complete. Launching workers. 00:29:22.309 ======================================================== 00:29:22.309 Latency(us) 00:29:22.309 Device Information : IOPS MiB/s Average min max 00:29:22.309 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 997.84 249.46 133430.59 81809.33 182796.24 00:29:22.309 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 562.85 140.71 231988.25 71795.13 357970.58 00:29:22.309 ======================================================== 00:29:22.309 Total : 1560.69 390.17 168974.42 71795.13 357970.58 00:29:22.309 00:29:22.309 11:41:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:29:22.567 No valid NVMe controllers or AIO or URING devices found 00:29:22.568 Initializing NVMe Controllers 00:29:22.568 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:22.568 Controller IO queue size 128, less than required. 00:29:22.568 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:22.568 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:29:22.568 Controller IO queue size 128, less than required. 00:29:22.568 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:22.568 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:29:22.568 WARNING: Some requested NVMe devices were skipped 00:29:22.568 11:41:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:29:25.100 Initializing NVMe Controllers 00:29:25.100 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:25.100 Controller IO queue size 128, less than required. 00:29:25.100 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:25.100 Controller IO queue size 128, less than required. 00:29:25.100 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:25.100 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:25.100 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:25.100 Initialization complete. Launching workers. 
00:29:25.100 00:29:25.100 ==================== 00:29:25.100 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:29:25.100 TCP transport: 00:29:25.100 polls: 18000 00:29:25.100 idle_polls: 9033 00:29:25.100 sock_completions: 8967 00:29:25.100 nvme_completions: 4833 00:29:25.100 submitted_requests: 7250 00:29:25.100 queued_requests: 1 00:29:25.100 00:29:25.100 ==================== 00:29:25.100 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:29:25.100 TCP transport: 00:29:25.100 polls: 15181 00:29:25.100 idle_polls: 6098 00:29:25.100 sock_completions: 9083 00:29:25.100 nvme_completions: 5285 00:29:25.100 submitted_requests: 7930 00:29:25.100 queued_requests: 1 00:29:25.100 ======================================================== 00:29:25.100 Latency(us) 00:29:25.100 Device Information : IOPS MiB/s Average min max 00:29:25.100 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1207.63 301.91 108635.30 56987.97 177603.90 00:29:25.100 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1320.60 330.15 98209.04 55700.00 137865.13 00:29:25.100 ======================================================== 00:29:25.100 Total : 2528.24 632.06 103189.24 55700.00 177603.90 00:29:25.100 00:29:25.358 11:41:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:29:25.358 11:41:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:25.617 11:41:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:29:25.617 11:41:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:29:25.617 11:41:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:29:28.905 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=d2e0cf39-c033-435c-87de-704787a7a41b 00:29:28.905 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb d2e0cf39-c033-435c-87de-704787a7a41b 00:29:28.905 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local lvs_uuid=d2e0cf39-c033-435c-87de-704787a7a41b 00:29:28.905 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local lvs_info 00:29:28.905 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local fc 00:29:28.905 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local cs 00:29:28.905 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:29.163 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # lvs_info='[ 00:29:29.163 { 00:29:29.163 "uuid": "d2e0cf39-c033-435c-87de-704787a7a41b", 00:29:29.163 "name": "lvs_0", 00:29:29.163 "base_bdev": "Nvme0n1", 00:29:29.163 "total_data_clusters": 238234, 00:29:29.163 "free_clusters": 238234, 00:29:29.163 "block_size": 512, 00:29:29.163 "cluster_size": 4194304 00:29:29.163 } 00:29:29.163 ]' 00:29:29.163 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # jq '.[] | select(.uuid=="d2e0cf39-c033-435c-87de-704787a7a41b") .free_clusters' 00:29:29.163 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # fc=238234 00:29:29.163 11:41:29 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # jq '.[] | select(.uuid=="d2e0cf39-c033-435c-87de-704787a7a41b") .cluster_size' 00:29:29.163 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # cs=4194304 00:29:29.163 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1375 -- # free_mb=952936 00:29:29.163 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1376 -- # echo 952936 00:29:29.163 952936 00:29:29.163 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:29:29.163 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:29:29.163 11:41:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d2e0cf39-c033-435c-87de-704787a7a41b lbd_0 20480 00:29:29.733 11:41:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=47cab8e0-5ee3-4772-9e78-dd4545bc8ad3 00:29:29.734 11:41:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 47cab8e0-5ee3-4772-9e78-dd4545bc8ad3 lvs_n_0 00:29:30.674 11:41:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=b1ae1d6b-3bd4-4d37-8fe6-47a68e2a125d 00:29:30.674 11:41:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb b1ae1d6b-3bd4-4d37-8fe6-47a68e2a125d 00:29:30.674 11:41:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local lvs_uuid=b1ae1d6b-3bd4-4d37-8fe6-47a68e2a125d 00:29:30.674 11:41:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local lvs_info 00:29:30.674 11:41:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local fc 00:29:30.674 11:41:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local cs 00:29:30.674 11:41:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:30.932 11:41:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # lvs_info='[ 00:29:30.932 { 00:29:30.932 "uuid": "d2e0cf39-c033-435c-87de-704787a7a41b", 00:29:30.932 "name": "lvs_0", 00:29:30.932 "base_bdev": "Nvme0n1", 00:29:30.932 "total_data_clusters": 238234, 00:29:30.932 "free_clusters": 233114, 00:29:30.932 "block_size": 512, 00:29:30.932 "cluster_size": 4194304 00:29:30.932 }, 00:29:30.932 { 00:29:30.932 "uuid": "b1ae1d6b-3bd4-4d37-8fe6-47a68e2a125d", 00:29:30.932 "name": "lvs_n_0", 00:29:30.932 "base_bdev": "47cab8e0-5ee3-4772-9e78-dd4545bc8ad3", 00:29:30.932 "total_data_clusters": 5114, 00:29:30.932 "free_clusters": 5114, 00:29:30.932 "block_size": 512, 00:29:30.932 "cluster_size": 4194304 00:29:30.932 } 00:29:30.932 ]' 00:29:30.932 11:41:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # jq '.[] | select(.uuid=="b1ae1d6b-3bd4-4d37-8fe6-47a68e2a125d") .free_clusters' 00:29:30.932 11:41:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # fc=5114 00:29:30.932 11:41:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # jq '.[] | select(.uuid=="b1ae1d6b-3bd4-4d37-8fe6-47a68e2a125d") .cluster_size' 00:29:30.932 11:41:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # cs=4194304 00:29:30.932 11:41:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1375 -- # free_mb=20456 00:29:30.932 11:41:31 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@1376 -- # echo 20456 00:29:30.932 20456 00:29:30.932 11:41:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:29:30.932 11:41:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b1ae1d6b-3bd4-4d37-8fe6-47a68e2a125d lbd_nest_0 20456 00:29:31.190 11:41:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=f246520e-2f8a-4f5d-a886-3e4227b3c3c7 00:29:31.190 11:41:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:31.449 11:41:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:29:31.449 11:41:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 f246520e-2f8a-4f5d-a886-3e4227b3c3c7 00:29:31.707 11:41:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:31.967 11:41:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:29:31.967 11:41:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:29:31.967 11:41:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:31.967 11:41:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:31.967 11:41:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:44.176 Initializing NVMe Controllers 00:29:44.176 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:44.176 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:44.176 Initialization complete. Launching workers. 00:29:44.176 ======================================================== 00:29:44.176 Latency(us) 00:29:44.176 Device Information : IOPS MiB/s Average min max 00:29:44.176 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 47.08 0.02 21255.31 205.74 46105.17 00:29:44.176 ======================================================== 00:29:44.176 Total : 47.08 0.02 21255.31 205.74 46105.17 00:29:44.177 00:29:44.177 11:41:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:44.177 11:41:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:54.158 Initializing NVMe Controllers 00:29:54.158 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:54.158 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:54.158 Initialization complete. Launching workers. 
00:29:54.158 ======================================================== 00:29:54.158 Latency(us) 00:29:54.158 Device Information : IOPS MiB/s Average min max 00:29:54.158 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 74.17 9.27 13481.23 5330.94 47902.75 00:29:54.158 ======================================================== 00:29:54.158 Total : 74.17 9.27 13481.23 5330.94 47902.75 00:29:54.158 00:29:54.158 11:41:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:54.158 11:41:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:54.158 11:41:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:04.184 Initializing NVMe Controllers 00:30:04.184 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:04.184 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:04.184 Initialization complete. Launching workers. 00:30:04.184 ======================================================== 00:30:04.184 Latency(us) 00:30:04.184 Device Information : IOPS MiB/s Average min max 00:30:04.184 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7068.17 3.45 4526.69 281.87 12010.04 00:30:04.184 ======================================================== 00:30:04.184 Total : 7068.17 3.45 4526.69 281.87 12010.04 00:30:04.184 00:30:04.184 11:42:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:04.184 11:42:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:14.162 Initializing NVMe Controllers 00:30:14.162 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:14.162 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:14.162 Initialization complete. Launching workers. 00:30:14.162 ======================================================== 00:30:14.162 Latency(us) 00:30:14.162 Device Information : IOPS MiB/s Average min max 00:30:14.162 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2559.97 320.00 12504.87 823.16 52146.42 00:30:14.162 ======================================================== 00:30:14.162 Total : 2559.97 320.00 12504.87 823.16 52146.42 00:30:14.162 00:30:14.162 11:42:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:14.162 11:42:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:14.162 11:42:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:24.146 Initializing NVMe Controllers 00:30:24.146 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:24.146 Controller IO queue size 128, less than required. 00:30:24.146 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:30:24.146 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:24.146 Initialization complete. Launching workers. 00:30:24.146 ======================================================== 00:30:24.146 Latency(us) 00:30:24.146 Device Information : IOPS MiB/s Average min max 00:30:24.146 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11294.90 5.52 11337.21 2133.22 30297.45 00:30:24.147 ======================================================== 00:30:24.147 Total : 11294.90 5.52 11337.21 2133.22 30297.45 00:30:24.147 00:30:24.147 11:42:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:24.147 11:42:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:36.350 Initializing NVMe Controllers 00:30:36.350 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:36.350 Controller IO queue size 128, less than required. 00:30:36.350 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:36.350 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:36.350 Initialization complete. Launching workers. 00:30:36.350 ======================================================== 00:30:36.350 Latency(us) 00:30:36.350 Device Information : IOPS MiB/s Average min max 00:30:36.350 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1169.43 146.18 109537.42 22910.30 194920.30 00:30:36.350 ======================================================== 00:30:36.350 Total : 1169.43 146.18 109537.42 22910.30 194920.30 00:30:36.350 00:30:36.350 11:42:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:36.350 11:42:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f246520e-2f8a-4f5d-a886-3e4227b3c3c7 00:30:36.350 11:42:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:30:36.350 11:42:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 47cab8e0-5ee3-4772-9e78-dd4545bc8ad3 00:30:36.350 11:42:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:30:36.608 11:42:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:30:36.608 11:42:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:30:36.608 11:42:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:36.608 11:42:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:30:36.608 11:42:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:36.608 11:42:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:30:36.609 11:42:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:36.609 11:42:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:36.609 rmmod nvme_tcp 
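(The six latency tables above come from the nested loops at host/perf.sh@95-99 in the trace: every queue depth in qd_depth is paired with every I/O size in io_size and handed to spdk_nvme_perf against the TCP listener created earlier. A condensed sketch of that sweep, with the binary path assumed relative to the SPDK tree:)

    #!/usr/bin/env bash
    # Sketch of the qd_depth x io_size sweep traced above (host/perf.sh@95-99).
    PERF=./build/bin/spdk_nvme_perf        # path assumed relative to the SPDK checkout
    TGT='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
    qd_depth=("1" "32" "128")
    io_size=("512" "131072")
    for qd in "${qd_depth[@]}"; do
      for o in "${io_size[@]}"; do
        # 50/50 random read/write for 10 s at the given queue depth and block size
        "$PERF" -q "$qd" -o "$o" -w randrw -M 50 -t 10 -r "$TGT"
      done
    done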
00:30:36.609 rmmod nvme_fabrics 00:30:36.609 rmmod nvme_keyring 00:30:36.609 11:42:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:36.609 11:42:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:30:36.609 11:42:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:30:36.609 11:42:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 3919832 ']' 00:30:36.609 11:42:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 3919832 00:30:36.609 11:42:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@952 -- # '[' -z 3919832 ']' 00:30:36.609 11:42:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # kill -0 3919832 00:30:36.609 11:42:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # uname 00:30:36.609 11:42:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:36.609 11:42:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3919832 00:30:36.868 11:42:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:30:36.868 11:42:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:30:36.868 11:42:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3919832' 00:30:36.868 killing process with pid 3919832 00:30:36.868 11:42:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@971 -- # kill 3919832 00:30:36.868 11:42:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@976 -- # wait 3919832 00:30:38.245 11:42:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:38.245 11:42:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:38.245 11:42:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:38.245 11:42:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:30:38.245 11:42:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:30:38.245 11:42:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:30:38.245 11:42:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:38.245 11:42:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:38.245 11:42:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:38.245 11:42:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:38.245 11:42:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:38.245 11:42:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:40.781 11:42:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:40.781 00:30:40.781 real 1m32.929s 00:30:40.781 user 5m37.601s 00:30:40.781 sys 0m16.290s 00:30:40.781 11:42:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:40.781 11:42:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:40.781 ************************************ 00:30:40.781 END TEST nvmf_perf 00:30:40.781 ************************************ 00:30:40.781 11:42:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:30:40.781 11:42:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:30:40.781 11:42:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:40.781 11:42:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:40.781 ************************************ 00:30:40.781 START TEST nvmf_fio_host 00:30:40.781 ************************************ 00:30:40.781 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:30:40.781 * Looking for test storage... 00:30:40.781 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:40.781 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:40.781 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lcov --version 00:30:40.781 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:40.781 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:40.781 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:40.781 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:40.781 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:40.781 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:30:40.781 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:30:40.781 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:30:40.781 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:30:40.781 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:30:40.781 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:30:40.781 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:30:40.781 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:40.781 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:30:40.781 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:30:40.781 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:40.781 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:40.781 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:30:40.781 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:30:40.781 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:40.781 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:30:40.781 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:30:40.781 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:30:40.781 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:30:40.781 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:40.781 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:30:40.781 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:30:40.781 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:40.781 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:40.781 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:30:40.781 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:40.781 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:40.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:40.781 --rc genhtml_branch_coverage=1 00:30:40.781 --rc genhtml_function_coverage=1 00:30:40.781 --rc genhtml_legend=1 00:30:40.781 --rc geninfo_all_blocks=1 00:30:40.781 --rc geninfo_unexecuted_blocks=1 00:30:40.781 00:30:40.781 ' 00:30:40.781 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:40.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:40.781 --rc genhtml_branch_coverage=1 00:30:40.781 --rc genhtml_function_coverage=1 00:30:40.781 --rc genhtml_legend=1 00:30:40.781 --rc geninfo_all_blocks=1 00:30:40.781 --rc geninfo_unexecuted_blocks=1 00:30:40.781 00:30:40.781 ' 00:30:40.781 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:40.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:40.781 --rc genhtml_branch_coverage=1 00:30:40.781 --rc genhtml_function_coverage=1 00:30:40.781 --rc genhtml_legend=1 00:30:40.781 --rc geninfo_all_blocks=1 00:30:40.781 --rc geninfo_unexecuted_blocks=1 00:30:40.781 00:30:40.781 ' 00:30:40.781 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:40.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:40.781 --rc genhtml_branch_coverage=1 00:30:40.781 --rc genhtml_function_coverage=1 00:30:40.781 --rc genhtml_legend=1 00:30:40.781 --rc geninfo_all_blocks=1 00:30:40.781 --rc geninfo_unexecuted_blocks=1 00:30:40.781 00:30:40.781 ' 00:30:40.781 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:40.781 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:30:40.781 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:40.781 11:42:40 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:40.781 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:40.781 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.782 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.782 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.782 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:30:40.782 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.782 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:40.782 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:30:40.782 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:40.782 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:40.782 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:30:40.782 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:40.782 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:40.782 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:40.782 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:40.782 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:40.782 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:40.782 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:40.782 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:40.782 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:40.782 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:40.782 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:40.782 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:40.782 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:40.782 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:40.782 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:30:40.782 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:40.782 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:40.782 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:40.782 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.782 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.782 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.782 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:30:40.782 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.782 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:30:40.782 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:40.782 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:40.782 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:40.782 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:40.782 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:40.782 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:40.782 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:40.782 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:40.782 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:40.782 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:40.782 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:40.782 
11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:30:40.782 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:40.782 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:40.782 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:40.782 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:40.782 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:40.782 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:40.782 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:40.782 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:40.782 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:40.782 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:40.782 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:30:40.782 11:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:42.685 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:42.685 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:30:42.685 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:42.685 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:42.685 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:42.685 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:42.685 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:42.685 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:30:42.685 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:42.685 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:30:42.685 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:30:42.685 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:30:42.685 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:30:42.685 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:30:42.685 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:30:42.685 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:42.685 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:42.685 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:42.685 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:42.685 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:42.685 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:42.685 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:42.685 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:42.685 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:42.685 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:42.685 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:42.685 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:42.685 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:42.685 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:42.685 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:42.685 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:42.685 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:42.685 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:42.685 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:42.685 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:42.685 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:42.686 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:42.686 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:42.686 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:42.686 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:42.686 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:42.686 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:42.686 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:42.686 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:42.686 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:42.686 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:42.686 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:42.686 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:42.686 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:42.686 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:42.686 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:42.686 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:42.686 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:42.686 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:42.686 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:42.686 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:42.686 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:42.686 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:42.686 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:42.686 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:42.686 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:42.686 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:42.686 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:42.686 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:42.686 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:42.686 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:42.686 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:42.686 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:42.686 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:42.686 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:42.686 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:42.686 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:42.686 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:42.686 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:30:42.686 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:42.686 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:42.686 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:42.686 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:42.686 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:42.686 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:42.686 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:42.686 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:42.686 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:42.686 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:42.686 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:42.686 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:42.686 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:42.686 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:42.686 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:42.686 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:42.686 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:42.686 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:42.945 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:42.945 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:42.945 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:42.945 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:42.945 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:42.945 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:42.945 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:42.945 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:42.945 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:42.945 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:30:42.945 00:30:42.945 --- 10.0.0.2 ping statistics --- 00:30:42.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:42.945 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:30:42.945 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:42.945 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:42.945 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.088 ms 00:30:42.945 00:30:42.945 --- 10.0.0.1 ping statistics --- 00:30:42.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:42.945 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:30:42.945 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:42.945 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:30:42.945 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:42.945 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:42.945 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:42.945 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:42.945 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:42.945 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:42.945 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:42.945 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:30:42.945 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:30:42.945 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:42.945 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:42.945 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3932672 00:30:42.945 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:42.945 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:42.945 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3932672 00:30:42.945 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@833 -- # '[' -z 3932672 ']' 00:30:42.945 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:42.945 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:42.945 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:42.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:42.945 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:42.945 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:42.945 [2024-11-02 11:42:43.233752] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 
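(nvmftestinit above isolates the target side of the link in its own network namespace so initiator and target can talk over the two e810 ports of one machine. A condensed sketch of the sequence traced in nvmf/common.sh; the interface names cvl_0_0/cvl_0_1, the 10.0.0.0/24 addresses, and the nvmf_tgt flags are the ones reported in the trace, while the relative binary path is an assumption:)

    #!/usr/bin/env bash
    # Sketch of the namespace-based NVMe/TCP test bed set up in the trace above.
    set -e
    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"          # target-side port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1      # initiator side stays in the root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    # allow NVMe/TCP traffic in on the initiator port (rule seen in the trace)
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                       # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1   # target -> initiator
    modprobe nvme-tcp
    # launch the SPDK target inside the namespace (flags as in the trace)
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &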
00:30:42.945 [2024-11-02 11:42:43.233856] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:42.945 [2024-11-02 11:42:43.313340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:43.204 [2024-11-02 11:42:43.363142] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:43.204 [2024-11-02 11:42:43.363207] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:43.204 [2024-11-02 11:42:43.363235] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:43.204 [2024-11-02 11:42:43.363249] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:43.204 [2024-11-02 11:42:43.363269] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:43.204 [2024-11-02 11:42:43.364989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:43.204 [2024-11-02 11:42:43.365058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:43.204 [2024-11-02 11:42:43.365151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:43.204 [2024-11-02 11:42:43.365154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:43.204 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:43.204 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@866 -- # return 0 00:30:43.204 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:43.462 [2024-11-02 11:42:43.746066] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:43.462 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:30:43.462 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:43.462 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:43.462 11:42:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:30:43.720 Malloc1 00:30:43.720 11:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:43.977 11:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:30:44.235 11:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:44.493 [2024-11-02 11:42:44.860922] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:44.493 11:42:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:44.750 11:42:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:30:44.750 11:42:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:44.750 11:42:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:44.750 11:42:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:30:44.750 11:42:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:44.750 11:42:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:30:44.750 11:42:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:44.750 11:42:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:30:44.750 11:42:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:30:44.750 11:42:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:30:44.750 11:42:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:44.750 11:42:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:30:44.750 11:42:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:30:45.007 11:42:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:30:45.007 11:42:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:30:45.007 11:42:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:30:45.007 11:42:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:45.007 11:42:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:30:45.007 11:42:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:30:45.007 11:42:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:30:45.007 11:42:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:30:45.007 11:42:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:45.007 11:42:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:45.007 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:45.007 fio-3.35 00:30:45.007 Starting 1 thread 00:30:47.536 00:30:47.536 test: (groupid=0, jobs=1): 
err= 0: pid=3933028: Sat Nov 2 11:42:47 2024 00:30:47.536 read: IOPS=8729, BW=34.1MiB/s (35.8MB/s)(68.4MiB/2007msec) 00:30:47.536 slat (usec): min=2, max=149, avg= 2.69, stdev= 1.74 00:30:47.536 clat (usec): min=2680, max=13649, avg=8063.77, stdev=630.82 00:30:47.536 lat (usec): min=2709, max=13652, avg=8066.46, stdev=630.74 00:30:47.536 clat percentiles (usec): 00:30:47.536 | 1.00th=[ 6587], 5.00th=[ 7046], 10.00th=[ 7308], 20.00th=[ 7570], 00:30:47.536 | 30.00th=[ 7767], 40.00th=[ 7898], 50.00th=[ 8094], 60.00th=[ 8225], 00:30:47.536 | 70.00th=[ 8356], 80.00th=[ 8586], 90.00th=[ 8848], 95.00th=[ 8979], 00:30:47.536 | 99.00th=[ 9503], 99.50th=[ 9634], 99.90th=[11731], 99.95th=[12649], 00:30:47.536 | 99.99th=[13566] 00:30:47.536 bw ( KiB/s): min=33264, max=35616, per=99.98%, avg=34914.00, stdev=1115.85, samples=4 00:30:47.536 iops : min= 8316, max= 8904, avg=8728.50, stdev=278.96, samples=4 00:30:47.536 write: IOPS=8729, BW=34.1MiB/s (35.8MB/s)(68.4MiB/2007msec); 0 zone resets 00:30:47.536 slat (usec): min=2, max=117, avg= 2.76, stdev= 1.39 00:30:47.536 clat (usec): min=1217, max=11906, avg=6498.98, stdev=542.20 00:30:47.536 lat (usec): min=1224, max=11909, avg=6501.74, stdev=542.15 00:30:47.536 clat percentiles (usec): 00:30:47.536 | 1.00th=[ 5276], 5.00th=[ 5669], 10.00th=[ 5866], 20.00th=[ 6128], 00:30:47.536 | 30.00th=[ 6259], 40.00th=[ 6390], 50.00th=[ 6521], 60.00th=[ 6587], 00:30:47.536 | 70.00th=[ 6783], 80.00th=[ 6915], 90.00th=[ 7111], 95.00th=[ 7308], 00:30:47.536 | 99.00th=[ 7701], 99.50th=[ 7898], 99.90th=[10683], 99.95th=[11469], 00:30:47.536 | 99.99th=[11863] 00:30:47.536 bw ( KiB/s): min=34256, max=35376, per=99.99%, avg=34916.00, stdev=536.00, samples=4 00:30:47.536 iops : min= 8564, max= 8844, avg=8729.00, stdev=134.00, samples=4 00:30:47.536 lat (msec) : 2=0.01%, 4=0.09%, 10=99.70%, 20=0.19% 00:30:47.536 cpu : usr=59.37%, sys=35.99%, ctx=64, majf=0, minf=36 00:30:47.536 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:30:47.536 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:47.536 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:47.536 issued rwts: total=17521,17521,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:47.536 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:47.536 00:30:47.536 Run status group 0 (all jobs): 00:30:47.536 READ: bw=34.1MiB/s (35.8MB/s), 34.1MiB/s-34.1MiB/s (35.8MB/s-35.8MB/s), io=68.4MiB (71.8MB), run=2007-2007msec 00:30:47.536 WRITE: bw=34.1MiB/s (35.8MB/s), 34.1MiB/s-34.1MiB/s (35.8MB/s-35.8MB/s), io=68.4MiB (71.8MB), run=2007-2007msec 00:30:47.536 11:42:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:47.536 11:42:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:47.536 11:42:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:30:47.536 11:42:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:47.536 11:42:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local 
sanitizers 00:30:47.536 11:42:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:47.536 11:42:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:30:47.536 11:42:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:30:47.536 11:42:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:30:47.536 11:42:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:47.536 11:42:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:30:47.536 11:42:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:30:47.536 11:42:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:30:47.536 11:42:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:30:47.536 11:42:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:30:47.536 11:42:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:47.536 11:42:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:30:47.537 11:42:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:30:47.537 11:42:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:30:47.537 11:42:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:30:47.537 11:42:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:47.537 11:42:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:47.794 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:30:47.794 fio-3.35 00:30:47.794 Starting 1 thread 00:30:50.324 00:30:50.324 test: (groupid=0, jobs=1): err= 0: pid=3933485: Sat Nov 2 11:42:50 2024 00:30:50.324 read: IOPS=7432, BW=116MiB/s (122MB/s)(233MiB/2010msec) 00:30:50.324 slat (nsec): min=2912, max=93657, avg=3861.68, stdev=2029.65 00:30:50.324 clat (usec): min=3189, max=55222, avg=10281.65, stdev=4247.57 00:30:50.324 lat (usec): min=3192, max=55227, avg=10285.51, stdev=4247.59 00:30:50.324 clat percentiles (usec): 00:30:50.324 | 1.00th=[ 5145], 5.00th=[ 6259], 10.00th=[ 6980], 20.00th=[ 8029], 00:30:50.324 | 30.00th=[ 8848], 40.00th=[ 9503], 50.00th=[10028], 60.00th=[10552], 00:30:50.324 | 70.00th=[10945], 80.00th=[11863], 90.00th=[13042], 95.00th=[13960], 00:30:50.324 | 99.00th=[16581], 99.50th=[48497], 99.90th=[53216], 99.95th=[53740], 00:30:50.324 | 99.99th=[55313] 00:30:50.324 bw ( KiB/s): min=47456, max=76224, per=50.74%, avg=60344.00, stdev=13305.26, samples=4 00:30:50.324 iops : min= 2966, max= 4764, avg=3771.50, stdev=831.58, samples=4 00:30:50.324 write: IOPS=4480, BW=70.0MiB/s (73.4MB/s)(124MiB/1764msec); 0 zone resets 00:30:50.324 
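(The fio jobs above are driven through SPDK's external ioengine rather than the kernel initiator: the fio_plugin helper LD_PRELOADs build/fio/spdk_nvme and passes the whole transport address as the fio filename, which is why the output reports ioengine=spdk. A minimal sketch of that invocation using the job file, addresses, and /usr/src/fio/fio binary shown in the trace; the SPDK checkout path is a placeholder:)

    #!/usr/bin/env bash
    # Sketch of the fio-with-SPDK-plugin run traced above (host/fio.sh@41).
    SPDK=/path/to/spdk                       # placeholder for the SPDK checkout
    LD_PRELOAD="$SPDK/build/fio/spdk_nvme" /usr/src/fio/fio \
        "$SPDK/app/fio/nvme/example_config.fio" \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' \
        --bs=4096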
slat (usec): min=30, max=155, avg=34.12, stdev= 5.77 00:30:50.324 clat (usec): min=5873, max=23318, avg=12422.03, stdev=2500.18 00:30:50.324 lat (usec): min=5905, max=23351, avg=12456.15, stdev=2500.20 00:30:50.324 clat percentiles (usec): 00:30:50.324 | 1.00th=[ 7635], 5.00th=[ 8848], 10.00th=[ 9503], 20.00th=[10290], 00:30:50.324 | 30.00th=[10814], 40.00th=[11469], 50.00th=[12125], 60.00th=[12780], 00:30:50.324 | 70.00th=[13566], 80.00th=[14484], 90.00th=[15926], 95.00th=[16909], 00:30:50.324 | 99.00th=[19006], 99.50th=[19530], 99.90th=[22938], 99.95th=[23200], 00:30:50.324 | 99.99th=[23200] 00:30:50.324 bw ( KiB/s): min=51424, max=77952, per=87.55%, avg=62768.00, stdev=12490.02, samples=4 00:30:50.324 iops : min= 3214, max= 4872, avg=3923.00, stdev=780.63, samples=4 00:30:50.324 lat (msec) : 4=0.08%, 10=38.83%, 20=60.40%, 50=0.47%, 100=0.22% 00:30:50.324 cpu : usr=70.98%, sys=25.54%, ctx=39, majf=0, minf=60 00:30:50.324 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:30:50.324 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:50.324 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:50.324 issued rwts: total=14940,7904,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:50.324 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:50.324 00:30:50.324 Run status group 0 (all jobs): 00:30:50.324 READ: bw=116MiB/s (122MB/s), 116MiB/s-116MiB/s (122MB/s-122MB/s), io=233MiB (245MB), run=2010-2010msec 00:30:50.324 WRITE: bw=70.0MiB/s (73.4MB/s), 70.0MiB/s-70.0MiB/s (73.4MB/s-73.4MB/s), io=124MiB (129MB), run=1764-1764msec 00:30:50.324 11:42:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:50.324 11:42:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:30:50.324 11:42:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:30:50.324 11:42:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:30:50.324 11:42:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # bdfs=() 00:30:50.324 11:42:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # local bdfs 00:30:50.324 11:42:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:50.582 11:42:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:50.582 11:42:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:30:50.582 11:42:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:30:50.582 11:42:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:88:00.0 00:30:50.582 11:42:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:30:53.859 Nvme0n1 00:30:53.860 11:42:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:30:57.139 11:42:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # 
ls_guid=9857f4c1-18e7-4a7d-89da-42a9c384e8e6 00:30:57.139 11:42:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 9857f4c1-18e7-4a7d-89da-42a9c384e8e6 00:30:57.139 11:42:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local lvs_uuid=9857f4c1-18e7-4a7d-89da-42a9c384e8e6 00:30:57.139 11:42:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local lvs_info 00:30:57.139 11:42:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local fc 00:30:57.139 11:42:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local cs 00:30:57.139 11:42:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:57.139 11:42:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # lvs_info='[ 00:30:57.139 { 00:30:57.139 "uuid": "9857f4c1-18e7-4a7d-89da-42a9c384e8e6", 00:30:57.139 "name": "lvs_0", 00:30:57.139 "base_bdev": "Nvme0n1", 00:30:57.139 "total_data_clusters": 930, 00:30:57.139 "free_clusters": 930, 00:30:57.139 "block_size": 512, 00:30:57.139 "cluster_size": 1073741824 00:30:57.139 } 00:30:57.139 ]' 00:30:57.139 11:42:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # jq '.[] | select(.uuid=="9857f4c1-18e7-4a7d-89da-42a9c384e8e6") .free_clusters' 00:30:57.139 11:42:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # fc=930 00:30:57.139 11:42:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # jq '.[] | select(.uuid=="9857f4c1-18e7-4a7d-89da-42a9c384e8e6") .cluster_size' 00:30:57.139 11:42:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # cs=1073741824 00:30:57.139 11:42:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1375 -- # free_mb=952320 00:30:57.139 11:42:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1376 -- # echo 952320 00:30:57.139 952320 00:30:57.139 11:42:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:30:57.397 066b191b-1aa3-4aea-8292-697ed585811a 00:30:57.397 11:42:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:30:57.655 11:42:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:30:57.912 11:42:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:58.170 11:42:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:58.170 11:42:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:58.170 11:42:58 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:30:58.170 11:42:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:58.170 11:42:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:30:58.170 11:42:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:58.170 11:42:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:30:58.170 11:42:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:30:58.170 11:42:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:30:58.170 11:42:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:58.170 11:42:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:30:58.170 11:42:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:30:58.170 11:42:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:30:58.170 11:42:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:30:58.170 11:42:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:30:58.170 11:42:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:58.170 11:42:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:30:58.170 11:42:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:30:58.170 11:42:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:30:58.170 11:42:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:30:58.170 11:42:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:58.170 11:42:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:58.428 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:58.428 fio-3.35 00:30:58.428 Starting 1 thread 00:31:01.007 00:31:01.007 test: (groupid=0, jobs=1): err= 0: pid=3934762: Sat Nov 2 11:43:01 2024 00:31:01.007 read: IOPS=5867, BW=22.9MiB/s (24.0MB/s)(46.0MiB/2008msec) 00:31:01.007 slat (usec): min=2, max=181, avg= 2.78, stdev= 2.81 00:31:01.007 clat (usec): min=937, max=171659, avg=11964.94, stdev=11767.78 00:31:01.007 lat (usec): min=940, max=171704, avg=11967.72, stdev=11768.17 00:31:01.007 clat percentiles (msec): 00:31:01.007 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 10], 20.00th=[ 11], 00:31:01.007 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 12], 60.00th=[ 12], 00:31:01.007 | 70.00th=[ 12], 80.00th=[ 12], 90.00th=[ 13], 95.00th=[ 13], 00:31:01.007 | 99.00th=[ 14], 99.50th=[ 157], 99.90th=[ 171], 99.95th=[ 171], 00:31:01.007 | 
99.99th=[ 171] 00:31:01.007 bw ( KiB/s): min=16200, max=26072, per=99.82%, avg=23426.00, stdev=4820.66, samples=4 00:31:01.007 iops : min= 4050, max= 6518, avg=5856.50, stdev=1205.16, samples=4 00:31:01.007 write: IOPS=5856, BW=22.9MiB/s (24.0MB/s)(45.9MiB/2008msec); 0 zone resets 00:31:01.007 slat (usec): min=2, max=144, avg= 2.85, stdev= 1.98 00:31:01.007 clat (usec): min=356, max=169115, avg=9666.90, stdev=11029.49 00:31:01.007 lat (usec): min=359, max=169122, avg=9669.75, stdev=11029.84 00:31:01.007 clat percentiles (msec): 00:31:01.007 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 9], 00:31:01.007 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 10], 00:31:01.007 | 70.00th=[ 10], 80.00th=[ 10], 90.00th=[ 10], 95.00th=[ 11], 00:31:01.007 | 99.00th=[ 12], 99.50th=[ 153], 99.90th=[ 169], 99.95th=[ 169], 00:31:01.007 | 99.99th=[ 169] 00:31:01.007 bw ( KiB/s): min=17256, max=25640, per=99.90%, avg=23402.00, stdev=4101.11, samples=4 00:31:01.007 iops : min= 4314, max= 6410, avg=5850.50, stdev=1025.28, samples=4 00:31:01.007 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:31:01.007 lat (msec) : 2=0.03%, 4=0.10%, 10=51.65%, 20=47.65%, 250=0.54% 00:31:01.007 cpu : usr=57.70%, sys=38.47%, ctx=109, majf=0, minf=36 00:31:01.007 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:31:01.007 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.007 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:01.007 issued rwts: total=11781,11760,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:01.007 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:01.007 00:31:01.007 Run status group 0 (all jobs): 00:31:01.007 READ: bw=22.9MiB/s (24.0MB/s), 22.9MiB/s-22.9MiB/s (24.0MB/s-24.0MB/s), io=46.0MiB (48.3MB), run=2008-2008msec 00:31:01.007 WRITE: bw=22.9MiB/s (24.0MB/s), 22.9MiB/s-22.9MiB/s (24.0MB/s-24.0MB/s), io=45.9MiB (48.2MB), run=2008-2008msec 00:31:01.007 11:43:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:01.008 11:43:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:31:02.381 11:43:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=c42aa996-4e4a-4a02-adb2-c6332aa09e24 00:31:02.381 11:43:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb c42aa996-4e4a-4a02-adb2-c6332aa09e24 00:31:02.381 11:43:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local lvs_uuid=c42aa996-4e4a-4a02-adb2-c6332aa09e24 00:31:02.381 11:43:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local lvs_info 00:31:02.381 11:43:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local fc 00:31:02.381 11:43:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local cs 00:31:02.381 11:43:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:02.381 11:43:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # lvs_info='[ 00:31:02.381 { 00:31:02.381 "uuid": "9857f4c1-18e7-4a7d-89da-42a9c384e8e6", 00:31:02.381 "name": "lvs_0", 00:31:02.381 "base_bdev": "Nvme0n1", 00:31:02.381 "total_data_clusters": 930, 
00:31:02.381 "free_clusters": 0, 00:31:02.381 "block_size": 512, 00:31:02.381 "cluster_size": 1073741824 00:31:02.381 }, 00:31:02.381 { 00:31:02.381 "uuid": "c42aa996-4e4a-4a02-adb2-c6332aa09e24", 00:31:02.381 "name": "lvs_n_0", 00:31:02.381 "base_bdev": "066b191b-1aa3-4aea-8292-697ed585811a", 00:31:02.381 "total_data_clusters": 237847, 00:31:02.381 "free_clusters": 237847, 00:31:02.381 "block_size": 512, 00:31:02.381 "cluster_size": 4194304 00:31:02.381 } 00:31:02.381 ]' 00:31:02.381 11:43:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # jq '.[] | select(.uuid=="c42aa996-4e4a-4a02-adb2-c6332aa09e24") .free_clusters' 00:31:02.638 11:43:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # fc=237847 00:31:02.638 11:43:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # jq '.[] | select(.uuid=="c42aa996-4e4a-4a02-adb2-c6332aa09e24") .cluster_size' 00:31:02.638 11:43:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # cs=4194304 00:31:02.638 11:43:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1375 -- # free_mb=951388 00:31:02.638 11:43:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1376 -- # echo 951388 00:31:02.639 951388 00:31:02.639 11:43:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:31:03.203 1390e953-b831-4ac9-bee2-b4ca1ffaa87c 00:31:03.204 11:43:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:31:03.461 11:43:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:31:03.719 11:43:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:31:03.977 11:43:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:03.977 11:43:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:03.977 11:43:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:31:03.977 11:43:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:03.977 11:43:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:31:03.977 11:43:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:03.977 11:43:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:31:03.977 11:43:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:31:03.977 11:43:04 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:31:03.977 11:43:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:03.977 11:43:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:31:03.977 11:43:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:31:04.235 11:43:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:31:04.235 11:43:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:31:04.235 11:43:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:31:04.235 11:43:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:04.235 11:43:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:31:04.235 11:43:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:31:04.235 11:43:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:31:04.235 11:43:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:31:04.235 11:43:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:04.235 11:43:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:04.235 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:04.235 fio-3.35 00:31:04.235 Starting 1 thread 00:31:06.760 00:31:06.760 test: (groupid=0, jobs=1): err= 0: pid=3935618: Sat Nov 2 11:43:07 2024 00:31:06.760 read: IOPS=5462, BW=21.3MiB/s (22.4MB/s)(42.8MiB/2008msec) 00:31:06.760 slat (nsec): min=1924, max=158908, avg=2623.64, stdev=2425.39 00:31:06.760 clat (usec): min=4721, max=21363, avg=12909.98, stdev=1140.64 00:31:06.760 lat (usec): min=4736, max=21365, avg=12912.61, stdev=1140.56 00:31:06.760 clat percentiles (usec): 00:31:06.760 | 1.00th=[10159], 5.00th=[11076], 10.00th=[11600], 20.00th=[11994], 00:31:06.760 | 30.00th=[12387], 40.00th=[12649], 50.00th=[12911], 60.00th=[13173], 00:31:06.760 | 70.00th=[13435], 80.00th=[13829], 90.00th=[14222], 95.00th=[14615], 00:31:06.760 | 99.00th=[15270], 99.50th=[15664], 99.90th=[20317], 99.95th=[20317], 00:31:06.760 | 99.99th=[20579] 00:31:06.760 bw ( KiB/s): min=20584, max=22352, per=99.67%, avg=21776.00, stdev=821.71, samples=4 00:31:06.760 iops : min= 5146, max= 5588, avg=5444.00, stdev=205.43, samples=4 00:31:06.760 write: IOPS=5435, BW=21.2MiB/s (22.3MB/s)(42.6MiB/2008msec); 0 zone resets 00:31:06.760 slat (nsec): min=1977, max=160685, avg=2735.15, stdev=2075.34 00:31:06.760 clat (usec): min=2318, max=18973, avg=10384.62, stdev=945.20 00:31:06.760 lat (usec): min=2324, max=18976, avg=10387.35, stdev=945.17 00:31:06.760 clat percentiles (usec): 00:31:06.760 | 1.00th=[ 8094], 5.00th=[ 8979], 10.00th=[ 9241], 20.00th=[ 9634], 00:31:06.760 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10421], 60.00th=[10552], 
00:31:06.760 | 70.00th=[10814], 80.00th=[11076], 90.00th=[11469], 95.00th=[11731], 00:31:06.760 | 99.00th=[12387], 99.50th=[12518], 99.90th=[16450], 99.95th=[17695], 00:31:06.760 | 99.99th=[19006] 00:31:06.760 bw ( KiB/s): min=21568, max=22072, per=99.97%, avg=21734.00, stdev=228.56, samples=4 00:31:06.760 iops : min= 5392, max= 5518, avg=5433.50, stdev=57.14, samples=4 00:31:06.760 lat (msec) : 4=0.05%, 10=16.05%, 20=83.84%, 50=0.06% 00:31:06.760 cpu : usr=58.20%, sys=38.07%, ctx=87, majf=0, minf=36 00:31:06.760 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:31:06.760 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:06.760 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:06.760 issued rwts: total=10968,10914,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:06.760 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:06.760 00:31:06.760 Run status group 0 (all jobs): 00:31:06.760 READ: bw=21.3MiB/s (22.4MB/s), 21.3MiB/s-21.3MiB/s (22.4MB/s-22.4MB/s), io=42.8MiB (44.9MB), run=2008-2008msec 00:31:06.760 WRITE: bw=21.2MiB/s (22.3MB/s), 21.2MiB/s-21.2MiB/s (22.3MB/s-22.3MB/s), io=42.6MiB (44.7MB), run=2008-2008msec 00:31:06.760 11:43:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:31:07.018 11:43:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:31:07.018 11:43:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:31:11.215 11:43:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:31:11.215 11:43:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:31:14.508 11:43:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:31:14.508 11:43:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:31:16.412 11:43:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:31:16.412 11:43:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:31:16.412 11:43:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:31:16.412 11:43:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:16.412 11:43:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:31:16.412 11:43:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:16.412 11:43:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:31:16.412 11:43:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:16.412 11:43:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:16.412 rmmod nvme_tcp 00:31:16.412 rmmod nvme_fabrics 00:31:16.412 rmmod nvme_keyring 00:31:16.412 11:43:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:16.412 11:43:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # 
set -e 00:31:16.412 11:43:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:31:16.412 11:43:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 3932672 ']' 00:31:16.412 11:43:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 3932672 00:31:16.412 11:43:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@952 -- # '[' -z 3932672 ']' 00:31:16.412 11:43:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # kill -0 3932672 00:31:16.412 11:43:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # uname 00:31:16.412 11:43:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:16.412 11:43:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3932672 00:31:16.412 11:43:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:31:16.412 11:43:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:31:16.412 11:43:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3932672' 00:31:16.412 killing process with pid 3932672 00:31:16.412 11:43:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@971 -- # kill 3932672 00:31:16.412 11:43:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@976 -- # wait 3932672 00:31:16.673 11:43:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:16.673 11:43:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:16.673 11:43:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:16.673 11:43:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:31:16.673 11:43:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:31:16.673 11:43:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:16.673 11:43:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:31:16.673 11:43:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:16.673 11:43:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:16.673 11:43:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:16.673 11:43:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:16.673 11:43:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:19.211 11:43:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:19.211 00:31:19.211 real 0m38.307s 00:31:19.211 user 2m27.031s 00:31:19.211 sys 0m7.265s 00:31:19.211 11:43:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:19.211 11:43:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.211 ************************************ 00:31:19.211 END TEST nvmf_fio_host 00:31:19.211 ************************************ 00:31:19.211 11:43:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:31:19.211 11:43:19 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:31:19.211 11:43:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:19.211 11:43:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.211 ************************************ 00:31:19.211 START TEST nvmf_failover 00:31:19.211 ************************************ 00:31:19.211 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:31:19.211 * Looking for test storage... 00:31:19.211 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:19.211 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:19.211 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lcov --version 00:31:19.211 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:19.211 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:19.211 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:19.211 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:19.211 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:19.211 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:31:19.211 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:31:19.211 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:31:19.211 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:31:19.211 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:31:19.211 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:31:19.211 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:31:19.211 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:19.211 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:31:19.211 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:31:19.211 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:19.211 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:19.211 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:31:19.211 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:31:19.211 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:19.211 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:31:19.211 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:31:19.211 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:31:19.211 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:31:19.212 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:19.212 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:31:19.212 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:31:19.212 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:19.212 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:19.212 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:31:19.212 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:19.212 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:19.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:19.212 --rc genhtml_branch_coverage=1 00:31:19.212 --rc genhtml_function_coverage=1 00:31:19.212 --rc genhtml_legend=1 00:31:19.212 --rc geninfo_all_blocks=1 00:31:19.212 --rc geninfo_unexecuted_blocks=1 00:31:19.212 00:31:19.212 ' 00:31:19.212 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:19.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:19.212 --rc genhtml_branch_coverage=1 00:31:19.212 --rc genhtml_function_coverage=1 00:31:19.212 --rc genhtml_legend=1 00:31:19.212 --rc geninfo_all_blocks=1 00:31:19.212 --rc geninfo_unexecuted_blocks=1 00:31:19.212 00:31:19.212 ' 00:31:19.212 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:19.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:19.212 --rc genhtml_branch_coverage=1 00:31:19.212 --rc genhtml_function_coverage=1 00:31:19.212 --rc genhtml_legend=1 00:31:19.212 --rc geninfo_all_blocks=1 00:31:19.212 --rc geninfo_unexecuted_blocks=1 00:31:19.212 00:31:19.212 ' 00:31:19.212 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:19.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:19.212 --rc genhtml_branch_coverage=1 00:31:19.212 --rc genhtml_function_coverage=1 00:31:19.212 --rc genhtml_legend=1 00:31:19.212 --rc geninfo_all_blocks=1 00:31:19.212 --rc geninfo_unexecuted_blocks=1 00:31:19.212 00:31:19.212 ' 00:31:19.212 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:19.212 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:31:19.212 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:19.212 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:19.212 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:19.212 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:19.212 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:19.212 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:19.212 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:19.212 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:19.212 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:19.212 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:19.212 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:19.212 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:19.212 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:19.212 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:19.212 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:19.212 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:19.212 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:19.212 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:31:19.212 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:19.212 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:19.212 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:19.212 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:19.212 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:19.212 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:19.212 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:31:19.212 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:19.212 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:31:19.212 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:19.212 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:19.212 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:19.212 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:19.212 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:19.212 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:19.212 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:19.212 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:19.212 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:19.212 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:19.212 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:19.212 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:19.212 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
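For orientation: everything host/failover.sh does to the target goes through the rpc_py socket defined just above. A condensed sketch of the setup sequence that appears later in this same trace (values are the ones used in this run; the RPC shorthand variable is introduced here only for readability, the script itself uses $rpc_py):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # TCP transport with 8192-byte in-capsule data (host/failover.sh@22)
    $RPC nvmf_create_transport -t tcp -o -u 8192
    # 64 MiB malloc bdev with 512-byte blocks backing the test namespace (host/failover.sh@23)
    $RPC bdev_malloc_create 64 512 -b Malloc0
    # subsystem with the namespace exported on three listeners, so listeners can be removed to force failover
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422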
00:31:19.212 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:19.212 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:31:19.212 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:19.212 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:19.212 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:19.212 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:19.212 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:19.212 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:19.212 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:19.212 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:19.212 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:19.212 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:19.212 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:31:19.212 11:43:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:21.118 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:21.118 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:31:21.118 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:21.118 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:21.118 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:21.118 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:21.118 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:21.118 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:31:21.118 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:21.118 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:31:21.118 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:31:21.118 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:31:21.118 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:31:21.118 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:31:21.118 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:31:21.118 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:21.118 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:21.118 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:21.119 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:21.119 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:21.119 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:21.119 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
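The network-namespace plumbing that nvmf_tcp_init performs next can be read as the equivalent of this short sequence, condensed from the trace below (interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are the ones chosen for this run; the iptables comment option used by the script is omitted here):

    # target NIC moves into its own namespace, initiator NIC stays in the root namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # let NVMe/TCP traffic into the initiator side, then sanity-check reachability
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2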
00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:21.119 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:21.119 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:31:21.119 00:31:21.119 --- 10.0.0.2 ping statistics --- 00:31:21.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:21.119 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:21.119 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:21.119 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:31:21.119 00:31:21.119 --- 10.0.0.1 ping statistics --- 00:31:21.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:21.119 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=3938878 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 3938878 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 3938878 ']' 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:21.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:21.119 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:21.119 [2024-11-02 11:43:21.342952] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:31:21.120 [2024-11-02 11:43:21.343041] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:21.120 [2024-11-02 11:43:21.433828] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:21.120 [2024-11-02 11:43:21.485986] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:31:21.120 [2024-11-02 11:43:21.486042] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:21.120 [2024-11-02 11:43:21.486072] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:21.120 [2024-11-02 11:43:21.486093] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:21.120 [2024-11-02 11:43:21.486110] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:21.120 [2024-11-02 11:43:21.487887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:21.120 [2024-11-02 11:43:21.487953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:21.120 [2024-11-02 11:43:21.487961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:21.378 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:21.378 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:31:21.378 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:21.378 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:21.378 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:21.378 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:21.378 11:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:21.636 [2024-11-02 11:43:22.000096] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:21.636 11:43:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:31:21.894 Malloc0 00:31:22.152 11:43:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:22.410 11:43:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:22.667 11:43:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:22.925 [2024-11-02 11:43:23.112289] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:22.925 11:43:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:23.199 [2024-11-02 11:43:23.372991] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:23.199 11:43:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:31:23.465 [2024-11-02 11:43:23.649913] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:31:23.465 11:43:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3939169 00:31:23.465 11:43:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:31:23.465 11:43:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:23.465 11:43:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3939169 /var/tmp/bdevperf.sock 00:31:23.465 11:43:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 3939169 ']' 00:31:23.465 11:43:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:23.465 11:43:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:23.465 11:43:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:23.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:23.465 11:43:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:23.465 11:43:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:23.722 11:43:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:23.722 11:43:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:31:23.722 11:43:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:31:24.292 NVMe0n1 00:31:24.292 11:43:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:31:24.552 00:31:24.552 11:43:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3939303 00:31:24.552 11:43:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:24.552 11:43:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:31:25.486 11:43:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:25.746 11:43:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:31:29.028 11:43:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:31:29.322 00:31:29.322 11:43:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:29.582 11:43:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:31:32.872 11:43:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:32.872 [2024-11-02 11:43:33.152119] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:32.872 11:43:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:31:33.804 11:43:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:31:34.064 [2024-11-02 11:43:34.453767] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505570 is same with the state(6) to be set 00:31:34.064 [2024-11-02 11:43:34.453841] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505570 is same with the state(6) to be set 00:31:34.064 [2024-11-02 11:43:34.453866] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505570 is same with the state(6) to be set 00:31:34.064 [2024-11-02 11:43:34.453878] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505570 is same with the state(6) to be set 00:31:34.064 [2024-11-02 11:43:34.453890] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505570 is same with the state(6) to be set 00:31:34.064 [2024-11-02 11:43:34.453902] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505570 is same with the state(6) to be set 00:31:34.064 [2024-11-02 11:43:34.453914] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505570 is same with the state(6) to be set 00:31:34.064 [2024-11-02 11:43:34.453941] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505570 is same with the state(6) to be set 00:31:34.064 [2024-11-02 11:43:34.453954] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505570 is same with the state(6) to be set 00:31:34.064 [2024-11-02 11:43:34.453965] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505570 is same with the state(6) to be set 00:31:34.064 [2024-11-02 11:43:34.453976] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505570 is same with the state(6) to be set 00:31:34.064 [2024-11-02 11:43:34.453988] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505570 is same with the state(6) to be set 00:31:34.064 [2024-11-02 11:43:34.454015] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505570 is same with the state(6) to be set 00:31:34.064 [2024-11-02 11:43:34.454027] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505570 is same with the state(6) to be set 00:31:34.064 [2024-11-02 11:43:34.454039] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505570 is same with the state(6) to be set 00:31:34.064 [2024-11-02 11:43:34.454051] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505570 is same with the state(6) to be set 
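By this point the harness has provisioned the target (host/failover.sh@22-28: a tcp transport, a 64 MiB Malloc0 bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1, and listeners on 10.0.0.2 ports 4420/4421/4422), attached bdevperf to 4420 and 4421 with -x failover, kicked off the 15-second, 128-deep, 4 KiB verify run, and is now shuffling listeners so the bdev_nvme multipath code has to fail over while I/O is in flight. The same sequence condensed into the underlying RPC calls (a sketch only; the trace invokes them through the absolute workspace paths shown above, and RPC/BPERF are shorthands introduced here):

    RPC="scripts/rpc.py"                                  # target's default socket
    BPERF="scripts/rpc.py -s /var/tmp/bdevperf.sock"      # bdevperf's socket
    NQN=nqn.2016-06.io.spdk:cnode1

    # Provision the target: transport, backing bdev, subsystem, three listeners.
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem $NQN -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns $NQN Malloc0
    for port in 4420 4421 4422; do
        $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s $port
    done

    # Give bdevperf two paths to the same subsystem, then start the verify run.
    $BPERF bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $NQN -x failover
    $BPERF bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $NQN -x failover
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
    sleep 1

    # Shuffle the listeners while I/O runs: each removal of the active path
    # forces a failover (4420 -> 4421 -> 4422 -> back to 4420).
    $RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4420
    sleep 3
    $BPERF bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $NQN -x failover
    $RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4421
    sleep 3
    $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420
    sleep 1
    $RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4422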
00:31:34.064 [2024-11-02 11:43:34.454063] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505570 is same with the state(6) to be set 00:31:34.064 [2024-11-02 11:43:34.454074] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505570 is same with the state(6) to be set 00:31:34.064 [2024-11-02 11:43:34.454105] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505570 is same with the state(6) to be set 00:31:34.064 [2024-11-02 11:43:34.454118] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505570 is same with the state(6) to be set 00:31:34.324 11:43:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 3939303 00:31:39.590 { 00:31:39.590 "results": [ 00:31:39.590 { 00:31:39.590 "job": "NVMe0n1", 00:31:39.590 "core_mask": "0x1", 00:31:39.590 "workload": "verify", 00:31:39.590 "status": "finished", 00:31:39.590 "verify_range": { 00:31:39.590 "start": 0, 00:31:39.590 "length": 16384 00:31:39.590 }, 00:31:39.590 "queue_depth": 128, 00:31:39.590 "io_size": 4096, 00:31:39.590 "runtime": 15.015566, 00:31:39.590 "iops": 7960.938668578993, 00:31:39.590 "mibps": 31.097416674136692, 00:31:39.590 "io_failed": 13628, 00:31:39.590 "io_timeout": 0, 00:31:39.590 "avg_latency_us": 14406.004277056594, 00:31:39.590 "min_latency_us": 819.2, 00:31:39.590 "max_latency_us": 30486.376296296297 00:31:39.590 } 00:31:39.590 ], 00:31:39.590 "core_count": 1 00:31:39.590 } 00:31:39.590 11:43:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 3939169 00:31:39.590 11:43:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 3939169 ']' 00:31:39.590 11:43:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 3939169 00:31:39.590 11:43:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:31:39.590 11:43:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:39.590 11:43:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3939169 00:31:39.855 11:43:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:31:39.855 11:43:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:31:39.855 11:43:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3939169' 00:31:39.855 killing process with pid 3939169 00:31:39.855 11:43:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 3939169 00:31:39.856 11:43:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 3939169 00:31:39.856 11:43:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:39.856 [2024-11-02 11:43:23.710857] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 
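The summary JSON above is bdevperf's verdict on the run: roughly 7,961 IOPS (about 31.1 MiB/s) of 4 KiB verify I/O sustained over the ~15 s runtime, with 13,628 I/Os counted as failed. Those failures line up with the commands that were in flight when a listener was pulled; the try.txt replay that follows shows them completing as ABORTED - SQ DELETION and the controller then resetting onto the surviving path. If the JSON is captured to a file (say results.json, an assumption; here it is only printed inline), the headline fields can be pulled out with jq:

    jq '.results[0] | {job, iops, mibps, io_failed, runtime}' results.json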
00:31:39.856 [2024-11-02 11:43:23.710959] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3939169 ] 00:31:39.856 [2024-11-02 11:43:23.779605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:39.856 [2024-11-02 11:43:23.825805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:39.856 Running I/O for 15 seconds... 00:31:39.856 8362.00 IOPS, 32.66 MiB/s [2024-11-02T10:43:40.258Z] [2024-11-02 11:43:26.041125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:77624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.856 [2024-11-02 11:43:26.041224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.856 [2024-11-02 11:43:26.041283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:77632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.856 [2024-11-02 11:43:26.041302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.856 [2024-11-02 11:43:26.041319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:77640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.856 [2024-11-02 11:43:26.041333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.856 [2024-11-02 11:43:26.041349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:77648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.856 [2024-11-02 11:43:26.041364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.856 [2024-11-02 11:43:26.041379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:77656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.856 [2024-11-02 11:43:26.041394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.856 [2024-11-02 11:43:26.041410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:77664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.856 [2024-11-02 11:43:26.041425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.856 [2024-11-02 11:43:26.041440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:77672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.856 [2024-11-02 11:43:26.041455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.856 [2024-11-02 11:43:26.041471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:77680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.856 [2024-11-02 11:43:26.041485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.856 [2024-11-02 11:43:26.041501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:77688 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.856 [2024-11-02 11:43:26.041515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.856 [2024-11-02 11:43:26.041541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:77696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.856 [2024-11-02 11:43:26.041556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.856 [2024-11-02 11:43:26.041571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:77704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.856 [2024-11-02 11:43:26.041585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.856 [2024-11-02 11:43:26.041612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:77712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.856 [2024-11-02 11:43:26.041627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.856 [2024-11-02 11:43:26.041642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:77720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.856 [2024-11-02 11:43:26.041655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.856 [2024-11-02 11:43:26.041670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:77728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.856 [2024-11-02 11:43:26.041684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.856 [2024-11-02 11:43:26.041699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:77736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.856 [2024-11-02 11:43:26.041712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.856 [2024-11-02 11:43:26.041727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:77744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.856 [2024-11-02 11:43:26.041741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.856 [2024-11-02 11:43:26.041755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:77752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.856 [2024-11-02 11:43:26.041769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.856 [2024-11-02 11:43:26.041784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:77760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.856 [2024-11-02 11:43:26.041797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.856 [2024-11-02 11:43:26.041812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:77768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.856 
[2024-11-02 11:43:26.041825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.856 [2024-11-02 11:43:26.041840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:77776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.856 [2024-11-02 11:43:26.041854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.856 [2024-11-02 11:43:26.041869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:77784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.856 [2024-11-02 11:43:26.041882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.856 [2024-11-02 11:43:26.041897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:77792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.856 [2024-11-02 11:43:26.041911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.856 [2024-11-02 11:43:26.041925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:77800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.856 [2024-11-02 11:43:26.041938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.856 [2024-11-02 11:43:26.041953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:77232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.856 [2024-11-02 11:43:26.041971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.856 [2024-11-02 11:43:26.041987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:77808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.856 [2024-11-02 11:43:26.042000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.856 [2024-11-02 11:43:26.042015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:77816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.856 [2024-11-02 11:43:26.042029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.856 [2024-11-02 11:43:26.042043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:77824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.856 [2024-11-02 11:43:26.042057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.856 [2024-11-02 11:43:26.042072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:77832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.856 [2024-11-02 11:43:26.042085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.856 [2024-11-02 11:43:26.042100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:77840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.856 [2024-11-02 11:43:26.042113] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.856 [2024-11-02 11:43:26.042128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:77848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.856 [2024-11-02 11:43:26.042141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.856 [2024-11-02 11:43:26.042156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:77856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.856 [2024-11-02 11:43:26.042170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.856 [2024-11-02 11:43:26.042184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:77864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.856 [2024-11-02 11:43:26.042198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.856 [2024-11-02 11:43:26.042213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:77872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.856 [2024-11-02 11:43:26.042226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.856 [2024-11-02 11:43:26.042242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:77880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.856 [2024-11-02 11:43:26.042262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.856 [2024-11-02 11:43:26.042303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:77888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.856 [2024-11-02 11:43:26.042325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.856 [2024-11-02 11:43:26.042343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:77896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.856 [2024-11-02 11:43:26.042358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.857 [2024-11-02 11:43:26.042378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:77904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.857 [2024-11-02 11:43:26.042393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.857 [2024-11-02 11:43:26.042408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:77912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.857 [2024-11-02 11:43:26.042423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.857 [2024-11-02 11:43:26.042438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:77920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.857 [2024-11-02 11:43:26.042452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.857 [2024-11-02 11:43:26.042467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:77928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.857 [2024-11-02 11:43:26.042481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.857 [2024-11-02 11:43:26.042496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:77936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.857 [2024-11-02 11:43:26.042510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.857 [2024-11-02 11:43:26.042525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:77944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.857 [2024-11-02 11:43:26.042539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.857 [2024-11-02 11:43:26.042554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:77952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.857 [2024-11-02 11:43:26.042568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.857 [2024-11-02 11:43:26.042598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:77960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.857 [2024-11-02 11:43:26.042612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.857 [2024-11-02 11:43:26.042626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:77968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.857 [2024-11-02 11:43:26.042640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.857 [2024-11-02 11:43:26.042656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:77976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.857 [2024-11-02 11:43:26.042669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.857 [2024-11-02 11:43:26.042684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:77984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.857 [2024-11-02 11:43:26.042697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.857 [2024-11-02 11:43:26.042711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:77992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.857 [2024-11-02 11:43:26.042724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.857 [2024-11-02 11:43:26.042739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:78000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.857 [2024-11-02 11:43:26.042752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:31:39.857 [2024-11-02 11:43:26.042776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:78008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.857 [2024-11-02 11:43:26.042791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.857 [2024-11-02 11:43:26.042806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:78016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.857 [2024-11-02 11:43:26.042820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.857 [2024-11-02 11:43:26.042834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:78024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.857 [2024-11-02 11:43:26.042848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.857 [2024-11-02 11:43:26.042863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:77240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.857 [2024-11-02 11:43:26.042876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.857 [2024-11-02 11:43:26.042891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:77248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.857 [2024-11-02 11:43:26.042904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.857 [2024-11-02 11:43:26.042920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:77256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.857 [2024-11-02 11:43:26.042934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.857 [2024-11-02 11:43:26.042948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:77264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.857 [2024-11-02 11:43:26.042962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.857 [2024-11-02 11:43:26.042977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:77272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.857 [2024-11-02 11:43:26.042990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.857 [2024-11-02 11:43:26.043005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:77280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.857 [2024-11-02 11:43:26.043018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.857 [2024-11-02 11:43:26.043033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:77288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.857 [2024-11-02 11:43:26.043047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.857 [2024-11-02 
11:43:26.043061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:77296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.857 [2024-11-02 11:43:26.043075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.857 [2024-11-02 11:43:26.043090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:77304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.857 [2024-11-02 11:43:26.043103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.857 [2024-11-02 11:43:26.043118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:77312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.857 [2024-11-02 11:43:26.043135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.857 [2024-11-02 11:43:26.043151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:77320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.857 [2024-11-02 11:43:26.043164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.857 [2024-11-02 11:43:26.043179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:77328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.857 [2024-11-02 11:43:26.043193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.857 [2024-11-02 11:43:26.043209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:77336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.857 [2024-11-02 11:43:26.043222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.857 [2024-11-02 11:43:26.043268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:77344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.857 [2024-11-02 11:43:26.043286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.857 [2024-11-02 11:43:26.043302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:77352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.857 [2024-11-02 11:43:26.043317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.857 [2024-11-02 11:43:26.043333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.857 [2024-11-02 11:43:26.043346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.857 [2024-11-02 11:43:26.043362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:78032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.857 [2024-11-02 11:43:26.043375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.857 [2024-11-02 11:43:26.043391] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.857 [2024-11-02 11:43:26.043405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.857 [2024-11-02 11:43:26.043420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:78048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.857 [2024-11-02 11:43:26.043434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.857 [2024-11-02 11:43:26.043449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:78056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.857 [2024-11-02 11:43:26.043463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.857 [2024-11-02 11:43:26.043478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:78064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.857 [2024-11-02 11:43:26.043492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.857 [2024-11-02 11:43:26.043507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:78072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.857 [2024-11-02 11:43:26.043521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.857 [2024-11-02 11:43:26.043540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:78080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.857 [2024-11-02 11:43:26.043571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.857 [2024-11-02 11:43:26.043586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:78088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.857 [2024-11-02 11:43:26.043600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.858 [2024-11-02 11:43:26.043615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:78096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.858 [2024-11-02 11:43:26.043628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.858 [2024-11-02 11:43:26.043644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:78104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.858 [2024-11-02 11:43:26.043657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.858 [2024-11-02 11:43:26.043671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:78112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.858 [2024-11-02 11:43:26.043685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.858 [2024-11-02 11:43:26.043699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:61 nsid:1 lba:78120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.858 [2024-11-02 11:43:26.043713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.858 [2024-11-02 11:43:26.043728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:78128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.858 [2024-11-02 11:43:26.043741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.858 [2024-11-02 11:43:26.043762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:78136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.858 [2024-11-02 11:43:26.043776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.858 [2024-11-02 11:43:26.043791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:78144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.858 [2024-11-02 11:43:26.043804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.858 [2024-11-02 11:43:26.043820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:78152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.858 [2024-11-02 11:43:26.043833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.858 [2024-11-02 11:43:26.043847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:78160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.858 [2024-11-02 11:43:26.043860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.858 [2024-11-02 11:43:26.043875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:78168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.858 [2024-11-02 11:43:26.043889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.858 [2024-11-02 11:43:26.043903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:78176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.858 [2024-11-02 11:43:26.043920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.858 [2024-11-02 11:43:26.043935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:78184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.858 [2024-11-02 11:43:26.043949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.858 [2024-11-02 11:43:26.043964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:78192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.858 [2024-11-02 11:43:26.043978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.858 [2024-11-02 11:43:26.043993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:78200 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:31:39.858 [2024-11-02 11:43:26.044006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.858 [2024-11-02 11:43:26.044020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:78208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.858 [2024-11-02 11:43:26.044034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.858 [2024-11-02 11:43:26.044049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:78216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.858 [2024-11-02 11:43:26.044062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.858 [2024-11-02 11:43:26.044076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:78224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.858 [2024-11-02 11:43:26.044090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.858 [2024-11-02 11:43:26.044105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:78232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.858 [2024-11-02 11:43:26.044118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.858 [2024-11-02 11:43:26.044133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:78240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.858 [2024-11-02 11:43:26.044146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.858 [2024-11-02 11:43:26.044161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:78248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.858 [2024-11-02 11:43:26.044175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.858 [2024-11-02 11:43:26.044189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:77368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.858 [2024-11-02 11:43:26.044202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.858 [2024-11-02 11:43:26.044222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:77376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.858 [2024-11-02 11:43:26.044237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.858 [2024-11-02 11:43:26.044251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:77384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.858 [2024-11-02 11:43:26.044289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.858 [2024-11-02 11:43:26.044305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:77392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.858 [2024-11-02 
11:43:26.044324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.858 [2024-11-02 11:43:26.044340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:77400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.858 [2024-11-02 11:43:26.044354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.858 [2024-11-02 11:43:26.044370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:77408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.858 [2024-11-02 11:43:26.044384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.858 [2024-11-02 11:43:26.044400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:77416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.858 [2024-11-02 11:43:26.044414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.858 [2024-11-02 11:43:26.044429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:77424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.858 [2024-11-02 11:43:26.044443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.858 [2024-11-02 11:43:26.044458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:77432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.858 [2024-11-02 11:43:26.044472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.858 [2024-11-02 11:43:26.044487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:77440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.858 [2024-11-02 11:43:26.044501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.858 [2024-11-02 11:43:26.044516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:77448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.858 [2024-11-02 11:43:26.044530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.858 [2024-11-02 11:43:26.044545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:77456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.858 [2024-11-02 11:43:26.044560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.858 [2024-11-02 11:43:26.044591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:77464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.858 [2024-11-02 11:43:26.044605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.858 [2024-11-02 11:43:26.044619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:77472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.858 [2024-11-02 11:43:26.044633] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.858 [2024-11-02 11:43:26.044647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:77480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.858 [2024-11-02 11:43:26.044661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.858 [2024-11-02 11:43:26.044676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:77488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.858 [2024-11-02 11:43:26.044690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.858 [2024-11-02 11:43:26.044709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:77496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.858 [2024-11-02 11:43:26.044723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.858 [2024-11-02 11:43:26.044743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:77504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.858 [2024-11-02 11:43:26.044757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.858 [2024-11-02 11:43:26.044772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:77512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.858 [2024-11-02 11:43:26.044786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.858 [2024-11-02 11:43:26.044801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:77520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.858 [2024-11-02 11:43:26.044814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.859 [2024-11-02 11:43:26.044829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:77528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.859 [2024-11-02 11:43:26.044842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.859 [2024-11-02 11:43:26.044857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:77536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.859 [2024-11-02 11:43:26.044871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.859 [2024-11-02 11:43:26.044885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:77544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.859 [2024-11-02 11:43:26.044899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.859 [2024-11-02 11:43:26.044914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:77552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.859 [2024-11-02 11:43:26.044928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.859 [2024-11-02 11:43:26.044942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:77560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.859 [2024-11-02 11:43:26.044955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.859 [2024-11-02 11:43:26.044970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:77568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.859 [2024-11-02 11:43:26.044984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.859 [2024-11-02 11:43:26.044999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:77576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.859 [2024-11-02 11:43:26.045012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.859 [2024-11-02 11:43:26.045027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:77584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.859 [2024-11-02 11:43:26.045040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.859 [2024-11-02 11:43:26.045055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:77592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.859 [2024-11-02 11:43:26.045072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.859 [2024-11-02 11:43:26.045088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:77600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.859 [2024-11-02 11:43:26.045101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.859 [2024-11-02 11:43:26.045116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:77608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.859 [2024-11-02 11:43:26.045130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.859 [2024-11-02 11:43:26.045161] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:39.859 [2024-11-02 11:43:26.045176] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:39.859 [2024-11-02 11:43:26.045188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77616 len:8 PRP1 0x0 PRP2 0x0 00:31:39.859 [2024-11-02 11:43:26.045201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.859 [2024-11-02 11:43:26.045298] bdev_nvme.c:2035:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:31:39.859 [2024-11-02 11:43:26.045340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:39.859 [2024-11-02 11:43:26.045359] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.859 [2024-11-02 11:43:26.045375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:39.859 [2024-11-02 11:43:26.045389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.859 [2024-11-02 11:43:26.045404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:39.859 [2024-11-02 11:43:26.045417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.859 [2024-11-02 11:43:26.045431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:39.859 [2024-11-02 11:43:26.045444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.859 [2024-11-02 11:43:26.045458] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:31:39.859 [2024-11-02 11:43:26.048788] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:31:39.859 [2024-11-02 11:43:26.048828] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb40890 (9): Bad file descriptor 00:31:39.859 [2024-11-02 11:43:26.079898] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:31:39.859 7771.50 IOPS, 30.36 MiB/s [2024-11-02T10:43:40.261Z] 7639.67 IOPS, 29.84 MiB/s [2024-11-02T10:43:40.261Z] 7598.25 IOPS, 29.68 MiB/s [2024-11-02T10:43:40.261Z] [2024-11-02 11:43:29.870027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:48944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.859 [2024-11-02 11:43:29.870109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.859 [2024-11-02 11:43:29.870139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:48952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.859 [2024-11-02 11:43:29.870155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.859 [2024-11-02 11:43:29.870194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:48960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.859 [2024-11-02 11:43:29.870209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.859 [2024-11-02 11:43:29.870224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:48968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.859 [2024-11-02 11:43:29.870253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.859 [2024-11-02 11:43:29.870277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:48976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.859 [2024-11-02 11:43:29.870296] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.859 [2024-11-02 11:43:29.870327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:48984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.859 [2024-11-02 11:43:29.870341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.859 [2024-11-02 11:43:29.870357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:48992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.859 [2024-11-02 11:43:29.870372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.859 [2024-11-02 11:43:29.870389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:49000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.859 [2024-11-02 11:43:29.870402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.859 [2024-11-02 11:43:29.870419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:48240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.859 [2024-11-02 11:43:29.870434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.859 [2024-11-02 11:43:29.870450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:48248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.859 [2024-11-02 11:43:29.870466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.859 [2024-11-02 11:43:29.870482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:48256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.859 [2024-11-02 11:43:29.870496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.859 [2024-11-02 11:43:29.870511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:48264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.859 [2024-11-02 11:43:29.870525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.859 [2024-11-02 11:43:29.870541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:48272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.859 [2024-11-02 11:43:29.870555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.859 [2024-11-02 11:43:29.870570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:48280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.859 [2024-11-02 11:43:29.870584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.859 [2024-11-02 11:43:29.870599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:48288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.859 [2024-11-02 11:43:29.870634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.859 [2024-11-02 11:43:29.870651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:48296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.859 [2024-11-02 11:43:29.870664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.859 [2024-11-02 11:43:29.870694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:48304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.859 [2024-11-02 11:43:29.870708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.859 [2024-11-02 11:43:29.870723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:48312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.859 [2024-11-02 11:43:29.870737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.859 [2024-11-02 11:43:29.870753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:48320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.859 [2024-11-02 11:43:29.870766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.859 [2024-11-02 11:43:29.870781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:48328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.859 [2024-11-02 11:43:29.870794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.859 [2024-11-02 11:43:29.870808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:48336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.860 [2024-11-02 11:43:29.870821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.860 [2024-11-02 11:43:29.870836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:48344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.860 [2024-11-02 11:43:29.870849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.860 [2024-11-02 11:43:29.870863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:48352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.860 [2024-11-02 11:43:29.870876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.860 [2024-11-02 11:43:29.870890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:48360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.860 [2024-11-02 11:43:29.870903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.860 [2024-11-02 11:43:29.870917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:48368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.860 [2024-11-02 11:43:29.870931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.860 [2024-11-02 11:43:29.870945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:48376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.860 [2024-11-02 11:43:29.870959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.860 [2024-11-02 11:43:29.870973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:48384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.860 [2024-11-02 11:43:29.870986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.860 [2024-11-02 11:43:29.871004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:48392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.860 [2024-11-02 11:43:29.871017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.860 [2024-11-02 11:43:29.871032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:48400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.860 [2024-11-02 11:43:29.871045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.860 [2024-11-02 11:43:29.871059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:48408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.860 [2024-11-02 11:43:29.871073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.860 [2024-11-02 11:43:29.871087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:48416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.860 [2024-11-02 11:43:29.871100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.860 [2024-11-02 11:43:29.871114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:48424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.860 [2024-11-02 11:43:29.871127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.860 [2024-11-02 11:43:29.871141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:48432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.860 [2024-11-02 11:43:29.871155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.860 [2024-11-02 11:43:29.871170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:48440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.860 [2024-11-02 11:43:29.871183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.860 [2024-11-02 11:43:29.871198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:48448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.860 [2024-11-02 11:43:29.871211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.860 
[2024-11-02 11:43:29.871226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:48456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.860 [2024-11-02 11:43:29.871264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.860 [2024-11-02 11:43:29.871283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:48464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.860 [2024-11-02 11:43:29.871298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.860 [2024-11-02 11:43:29.871312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:48472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.860 [2024-11-02 11:43:29.871326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.860 [2024-11-02 11:43:29.871341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:48480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.860 [2024-11-02 11:43:29.871354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.860 [2024-11-02 11:43:29.871369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:48488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.860 [2024-11-02 11:43:29.871383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.860 [2024-11-02 11:43:29.871402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:49008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.860 [2024-11-02 11:43:29.871416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.860 [2024-11-02 11:43:29.871431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:49016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.860 [2024-11-02 11:43:29.871444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.860 [2024-11-02 11:43:29.871459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:49024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.860 [2024-11-02 11:43:29.871472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.860 [2024-11-02 11:43:29.871487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:49032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.860 [2024-11-02 11:43:29.871501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.860 [2024-11-02 11:43:29.871515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:49040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.860 [2024-11-02 11:43:29.871529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.860 [2024-11-02 11:43:29.871558] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:49048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.860 [2024-11-02 11:43:29.871572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.860 [2024-11-02 11:43:29.871587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:49056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.860 [2024-11-02 11:43:29.871600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.860 [2024-11-02 11:43:29.871614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:49064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.860 [2024-11-02 11:43:29.871627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.860 [2024-11-02 11:43:29.871641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:49072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.860 [2024-11-02 11:43:29.871655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.860 [2024-11-02 11:43:29.871669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:49080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.860 [2024-11-02 11:43:29.871682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.860 [2024-11-02 11:43:29.871696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:49088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.860 [2024-11-02 11:43:29.871709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.860 [2024-11-02 11:43:29.871724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:49096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.860 [2024-11-02 11:43:29.871737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.860 [2024-11-02 11:43:29.871752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:49104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.860 [2024-11-02 11:43:29.871768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.860 [2024-11-02 11:43:29.871783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:49112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.861 [2024-11-02 11:43:29.871797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.861 [2024-11-02 11:43:29.871811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:49120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.861 [2024-11-02 11:43:29.871825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.861 [2024-11-02 11:43:29.871839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:114 nsid:1 lba:49128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.861 [2024-11-02 11:43:29.871853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.861 [2024-11-02 11:43:29.871883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:49136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.861 [2024-11-02 11:43:29.871897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.861 [2024-11-02 11:43:29.871912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:49144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.861 [2024-11-02 11:43:29.871925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.861 [2024-11-02 11:43:29.871940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:49152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.861 [2024-11-02 11:43:29.871953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.861 [2024-11-02 11:43:29.871968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:49160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.861 [2024-11-02 11:43:29.871982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.861 [2024-11-02 11:43:29.871996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:49168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.861 [2024-11-02 11:43:29.872010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.861 [2024-11-02 11:43:29.872024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:49176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.861 [2024-11-02 11:43:29.872038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.861 [2024-11-02 11:43:29.872053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:49184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.861 [2024-11-02 11:43:29.872066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.861 [2024-11-02 11:43:29.872081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:49192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.861 [2024-11-02 11:43:29.872095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.861 [2024-11-02 11:43:29.872110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:48496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.861 [2024-11-02 11:43:29.872123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.861 [2024-11-02 11:43:29.872141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:48504 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:31:39.861 [2024-11-02 11:43:29.872155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.861 [2024-11-02 11:43:29.872170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:48512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.861 [2024-11-02 11:43:29.872184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.861 [2024-11-02 11:43:29.872199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:48520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.861 [2024-11-02 11:43:29.872213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.861 [2024-11-02 11:43:29.872227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:48528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.861 [2024-11-02 11:43:29.872240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.861 [2024-11-02 11:43:29.872278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:48536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.861 [2024-11-02 11:43:29.872305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.861 [2024-11-02 11:43:29.872321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:48544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.861 [2024-11-02 11:43:29.872335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.861 [2024-11-02 11:43:29.872350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:48552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.861 [2024-11-02 11:43:29.872364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.861 [2024-11-02 11:43:29.872380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:48560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.861 [2024-11-02 11:43:29.872394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.861 [2024-11-02 11:43:29.872409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:48568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.861 [2024-11-02 11:43:29.872423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.861 [2024-11-02 11:43:29.872438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:48576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.861 [2024-11-02 11:43:29.872451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.861 [2024-11-02 11:43:29.872467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:48584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.861 
[2024-11-02 11:43:29.872480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.861 [2024-11-02 11:43:29.872496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:48592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.861 [2024-11-02 11:43:29.872510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.861 [2024-11-02 11:43:29.872525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:48600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.861 [2024-11-02 11:43:29.872542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.861 [2024-11-02 11:43:29.872559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:48608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.861 [2024-11-02 11:43:29.872588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.861 [2024-11-02 11:43:29.872603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:48616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.861 [2024-11-02 11:43:29.872617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.861 [2024-11-02 11:43:29.872632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:48624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.861 [2024-11-02 11:43:29.872645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.861 [2024-11-02 11:43:29.872660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:48632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.861 [2024-11-02 11:43:29.872673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.861 [2024-11-02 11:43:29.872689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:48640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.861 [2024-11-02 11:43:29.872703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.861 [2024-11-02 11:43:29.872718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:48648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.861 [2024-11-02 11:43:29.872731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.861 [2024-11-02 11:43:29.872746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:48656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.861 [2024-11-02 11:43:29.872759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.861 [2024-11-02 11:43:29.872774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:48664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.861 [2024-11-02 11:43:29.872788] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.861 [2024-11-02 11:43:29.872803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:48672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.861 [2024-11-02 11:43:29.872816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.861 [2024-11-02 11:43:29.872830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:48680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.861 [2024-11-02 11:43:29.872844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.861 [2024-11-02 11:43:29.872859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:48688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.861 [2024-11-02 11:43:29.872873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.861 [2024-11-02 11:43:29.872888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:48696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.861 [2024-11-02 11:43:29.872901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.861 [2024-11-02 11:43:29.872916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:48704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.861 [2024-11-02 11:43:29.872932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.861 [2024-11-02 11:43:29.872948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:48712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.861 [2024-11-02 11:43:29.872961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.861 [2024-11-02 11:43:29.872976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:48720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.861 [2024-11-02 11:43:29.872989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.861 [2024-11-02 11:43:29.873004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:48728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.862 [2024-11-02 11:43:29.873017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.862 [2024-11-02 11:43:29.873032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:48736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.862 [2024-11-02 11:43:29.873045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.862 [2024-11-02 11:43:29.873060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:48744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.862 [2024-11-02 11:43:29.873074] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.862 [2024-11-02 11:43:29.873088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:48752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.862 [2024-11-02 11:43:29.873102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.862 [2024-11-02 11:43:29.873117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:48760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.862 [2024-11-02 11:43:29.873130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.862 [2024-11-02 11:43:29.873146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:48768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.862 [2024-11-02 11:43:29.873160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.862 [2024-11-02 11:43:29.873174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:48776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.862 [2024-11-02 11:43:29.873188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.862 [2024-11-02 11:43:29.873203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:48784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.862 [2024-11-02 11:43:29.873217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.862 [2024-11-02 11:43:29.873232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:48792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.862 [2024-11-02 11:43:29.873246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.862 [2024-11-02 11:43:29.873284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:48800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.862 [2024-11-02 11:43:29.873300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.862 [2024-11-02 11:43:29.873319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:48808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.862 [2024-11-02 11:43:29.873334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.862 [2024-11-02 11:43:29.873350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:48816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.862 [2024-11-02 11:43:29.873363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.862 [2024-11-02 11:43:29.873379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:48824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.862 [2024-11-02 11:43:29.873393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.862 [2024-11-02 11:43:29.873408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:48832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.862 [2024-11-02 11:43:29.873422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.862 [2024-11-02 11:43:29.873437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:48840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.862 [2024-11-02 11:43:29.873451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.862 [2024-11-02 11:43:29.873466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:48848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.862 [2024-11-02 11:43:29.873480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.862 [2024-11-02 11:43:29.873495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:48856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.862 [2024-11-02 11:43:29.873510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.862 [2024-11-02 11:43:29.873525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:48864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.862 [2024-11-02 11:43:29.873539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.862 [2024-11-02 11:43:29.873554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:48872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.862 [2024-11-02 11:43:29.873582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.862 [2024-11-02 11:43:29.873598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:49200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.862 [2024-11-02 11:43:29.873611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.862 [2024-11-02 11:43:29.873626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:49208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.862 [2024-11-02 11:43:29.873639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.862 [2024-11-02 11:43:29.873655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:49216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.862 [2024-11-02 11:43:29.873669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.862 [2024-11-02 11:43:29.873684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:49224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.862 [2024-11-02 11:43:29.873701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.862 [2024-11-02 11:43:29.873716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:49232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.862 [2024-11-02 11:43:29.873730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.862 [2024-11-02 11:43:29.873745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:49240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.862 [2024-11-02 11:43:29.873759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.862 [2024-11-02 11:43:29.873773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:49248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.862 [2024-11-02 11:43:29.873787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.862 [2024-11-02 11:43:29.873802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:49256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.862 [2024-11-02 11:43:29.873816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.862 [2024-11-02 11:43:29.873831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:48880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.862 [2024-11-02 11:43:29.873844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.862 [2024-11-02 11:43:29.873859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:48888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.862 [2024-11-02 11:43:29.873874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.862 [2024-11-02 11:43:29.873889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:48896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.862 [2024-11-02 11:43:29.873903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.862 [2024-11-02 11:43:29.873918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:48904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.862 [2024-11-02 11:43:29.873931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.862 [2024-11-02 11:43:29.873946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:48912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.862 [2024-11-02 11:43:29.873960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.862 [2024-11-02 11:43:29.873975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:48920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.862 [2024-11-02 11:43:29.873989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.862 
[2024-11-02 11:43:29.874004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:48928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.862 [2024-11-02 11:43:29.874018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.862 [2024-11-02 11:43:29.874032] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63760 is same with the state(6) to be set 00:31:39.862 [2024-11-02 11:43:29.874050] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:39.862 [2024-11-02 11:43:29.874061] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:39.862 [2024-11-02 11:43:29.874076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:48936 len:8 PRP1 0x0 PRP2 0x0 00:31:39.862 [2024-11-02 11:43:29.874089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.862 [2024-11-02 11:43:29.874150] bdev_nvme.c:2035:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:31:39.862 [2024-11-02 11:43:29.874187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:39.862 [2024-11-02 11:43:29.874221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.862 [2024-11-02 11:43:29.874237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:39.862 [2024-11-02 11:43:29.874251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.862 [2024-11-02 11:43:29.874273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:39.862 [2024-11-02 11:43:29.874288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.862 [2024-11-02 11:43:29.874303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:39.862 [2024-11-02 11:43:29.874316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.863 [2024-11-02 11:43:29.874330] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:31:39.863 [2024-11-02 11:43:29.877612] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:31:39.863 [2024-11-02 11:43:29.877651] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb40890 (9): Bad file descriptor 00:31:39.863 7515.00 IOPS, 29.36 MiB/s [2024-11-02T10:43:40.265Z] [2024-11-02 11:43:30.035504] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
00:31:39.863 7491.00 IOPS, 29.26 MiB/s [2024-11-02T10:43:40.265Z] 7639.14 IOPS, 29.84 MiB/s [2024-11-02T10:43:40.265Z] 7750.88 IOPS, 30.28 MiB/s [2024-11-02T10:43:40.265Z] 7840.56 IOPS, 30.63 MiB/s [2024-11-02T10:43:40.265Z] [2024-11-02 11:43:34.456082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:5288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.863 [2024-11-02 11:43:34.456126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.863 [2024-11-02 11:43:34.456153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:5296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.863 [2024-11-02 11:43:34.456169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.863 [2024-11-02 11:43:34.456186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.863 [2024-11-02 11:43:34.456200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.863 [2024-11-02 11:43:34.456217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:5312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.863 [2024-11-02 11:43:34.456231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.863 [2024-11-02 11:43:34.456273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:5320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.863 [2024-11-02 11:43:34.456290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.863 [2024-11-02 11:43:34.456328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:5328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.863 [2024-11-02 11:43:34.456343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.863 [2024-11-02 11:43:34.456359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:5336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.863 [2024-11-02 11:43:34.456372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.863 [2024-11-02 11:43:34.456387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.863 [2024-11-02 11:43:34.456402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.863 [2024-11-02 11:43:34.456417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:5352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.863 [2024-11-02 11:43:34.456432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.863 [2024-11-02 11:43:34.456447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:5360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.863 
[2024-11-02 11:43:34.456461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.863 [2024-11-02 11:43:34.456477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:5368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.863 [2024-11-02 11:43:34.456491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.863 [2024-11-02 11:43:34.456508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:5376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.863 [2024-11-02 11:43:34.456523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.863 [2024-11-02 11:43:34.456537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.863 [2024-11-02 11:43:34.456551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.863 [2024-11-02 11:43:34.456581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.863 [2024-11-02 11:43:34.456595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.863 [2024-11-02 11:43:34.456611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:5400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.863 [2024-11-02 11:43:34.456639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.863 [2024-11-02 11:43:34.456654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:5408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.863 [2024-11-02 11:43:34.456666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.863 [2024-11-02 11:43:34.456681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:5416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.863 [2024-11-02 11:43:34.456694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.863 [2024-11-02 11:43:34.456709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:5424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.863 [2024-11-02 11:43:34.456725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.863 [2024-11-02 11:43:34.456740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:5432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.863 [2024-11-02 11:43:34.456754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.863 [2024-11-02 11:43:34.456769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:5440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.863 [2024-11-02 11:43:34.456782] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.863 [2024-11-02 11:43:34.456797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:5448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.863 [2024-11-02 11:43:34.456811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.863 [2024-11-02 11:43:34.456826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:5456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:39.863 [2024-11-02 11:43:34.456840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.863 [2024-11-02 11:43:34.456855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:5472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.863 [2024-11-02 11:43:34.456868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.863 [2024-11-02 11:43:34.456882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:5480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.863 [2024-11-02 11:43:34.456896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.863 [2024-11-02 11:43:34.456910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:5488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.863 [2024-11-02 11:43:34.456923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.863 [2024-11-02 11:43:34.456937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:5496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.863 [2024-11-02 11:43:34.456951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.863 [2024-11-02 11:43:34.456965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:5504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.863 [2024-11-02 11:43:34.456978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.863 [2024-11-02 11:43:34.456992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:5512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.863 [2024-11-02 11:43:34.457005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.863 [2024-11-02 11:43:34.457019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:5520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.863 [2024-11-02 11:43:34.457033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.863 [2024-11-02 11:43:34.457047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:5528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.863 [2024-11-02 11:43:34.457060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.863 [2024-11-02 11:43:34.457083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:5536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.863 [2024-11-02 11:43:34.457097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.863 [2024-11-02 11:43:34.457111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:5544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.863 [2024-11-02 11:43:34.457125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.863 [2024-11-02 11:43:34.457139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:5552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.863 [2024-11-02 11:43:34.457151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.863 [2024-11-02 11:43:34.457166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:5560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.863 [2024-11-02 11:43:34.457179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.863 [2024-11-02 11:43:34.457194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:5568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.863 [2024-11-02 11:43:34.457207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.863 [2024-11-02 11:43:34.457221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:5576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.863 [2024-11-02 11:43:34.457234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.863 [2024-11-02 11:43:34.457249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:5584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.863 [2024-11-02 11:43:34.457286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.863 [2024-11-02 11:43:34.457304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.864 [2024-11-02 11:43:34.457318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.864 [2024-11-02 11:43:34.457333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:5600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.864 [2024-11-02 11:43:34.457347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.864 [2024-11-02 11:43:34.457362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:5608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.864 [2024-11-02 11:43:34.457375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:31:39.864 [2024-11-02 11:43:34.457390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:5616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.864 [2024-11-02 11:43:34.457404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.864 [2024-11-02 11:43:34.457419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:5624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.864 [2024-11-02 11:43:34.457433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.864 [2024-11-02 11:43:34.457447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:5632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.864 [2024-11-02 11:43:34.457460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.864 [2024-11-02 11:43:34.457481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.864 [2024-11-02 11:43:34.457496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.864 [2024-11-02 11:43:34.457511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:5648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.864 [2024-11-02 11:43:34.457525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.864 [2024-11-02 11:43:34.457540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:5656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.864 [2024-11-02 11:43:34.457554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.864 [2024-11-02 11:43:34.457584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:5664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.864 [2024-11-02 11:43:34.457597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.864 [2024-11-02 11:43:34.457612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:5672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.864 [2024-11-02 11:43:34.457625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.864 [2024-11-02 11:43:34.457640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:5680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.864 [2024-11-02 11:43:34.457654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.864 [2024-11-02 11:43:34.457668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:5688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.864 [2024-11-02 11:43:34.457682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.864 [2024-11-02 11:43:34.457698] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:5696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.864 [2024-11-02 11:43:34.457711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.864 [2024-11-02 11:43:34.457726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:5704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.864 [2024-11-02 11:43:34.457740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.864 [2024-11-02 11:43:34.457754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.864 [2024-11-02 11:43:34.457769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.864 [2024-11-02 11:43:34.457783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:5720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.864 [2024-11-02 11:43:34.457797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.864 [2024-11-02 11:43:34.457812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:5728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.864 [2024-11-02 11:43:34.457826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.864 [2024-11-02 11:43:34.457841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:5736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.864 [2024-11-02 11:43:34.457858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.864 [2024-11-02 11:43:34.457874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.864 [2024-11-02 11:43:34.457887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.864 [2024-11-02 11:43:34.457902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:5752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.864 [2024-11-02 11:43:34.457916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.864 [2024-11-02 11:43:34.457931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:5760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.864 [2024-11-02 11:43:34.457944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.864 [2024-11-02 11:43:34.457958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:5768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.864 [2024-11-02 11:43:34.457971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.864 [2024-11-02 11:43:34.457985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 
nsid:1 lba:5776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.864 [2024-11-02 11:43:34.457998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.864 [2024-11-02 11:43:34.458012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:5784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.864 [2024-11-02 11:43:34.458026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.864 [2024-11-02 11:43:34.458040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:5792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.864 [2024-11-02 11:43:34.458054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.864 [2024-11-02 11:43:34.458068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:5800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.864 [2024-11-02 11:43:34.458081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.864 [2024-11-02 11:43:34.458095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:5808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.864 [2024-11-02 11:43:34.458108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.864 [2024-11-02 11:43:34.458123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:5816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.864 [2024-11-02 11:43:34.458135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.864 [2024-11-02 11:43:34.458150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:5824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.864 [2024-11-02 11:43:34.458163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.864 [2024-11-02 11:43:34.458191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:5832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.864 [2024-11-02 11:43:34.458205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.864 [2024-11-02 11:43:34.458223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:5840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.864 [2024-11-02 11:43:34.458238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.864 [2024-11-02 11:43:34.458252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:5848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:39.864 [2024-11-02 11:43:34.458292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.864 [2024-11-02 11:43:34.458330] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:39.864 [2024-11-02 11:43:34.458348] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5856 len:8 PRP1 0x0 PRP2 0x0 00:31:39.864 [2024-11-02 11:43:34.458362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.865 [2024-11-02 11:43:34.458381] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:39.865 [2024-11-02 11:43:34.458393] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:39.865 [2024-11-02 11:43:34.458404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5864 len:8 PRP1 0x0 PRP2 0x0 00:31:39.865 [2024-11-02 11:43:34.458417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.865 [2024-11-02 11:43:34.458430] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:39.865 [2024-11-02 11:43:34.458441] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:39.865 [2024-11-02 11:43:34.458452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5872 len:8 PRP1 0x0 PRP2 0x0 00:31:39.865 [2024-11-02 11:43:34.458465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.865 [2024-11-02 11:43:34.458478] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:39.865 [2024-11-02 11:43:34.458488] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:39.865 [2024-11-02 11:43:34.458499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5880 len:8 PRP1 0x0 PRP2 0x0 00:31:39.865 [2024-11-02 11:43:34.458513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.865 [2024-11-02 11:43:34.458526] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:39.865 [2024-11-02 11:43:34.458537] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:39.865 [2024-11-02 11:43:34.458548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5888 len:8 PRP1 0x0 PRP2 0x0 00:31:39.865 [2024-11-02 11:43:34.458561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.865 [2024-11-02 11:43:34.458590] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:39.865 [2024-11-02 11:43:34.458600] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:39.865 [2024-11-02 11:43:34.458611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5896 len:8 PRP1 0x0 PRP2 0x0 00:31:39.865 [2024-11-02 11:43:34.458623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.865 [2024-11-02 11:43:34.458635] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:39.865 [2024-11-02 11:43:34.458646] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:39.865 [2024-11-02 11:43:34.458657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:5904 len:8 PRP1 0x0 PRP2 0x0 00:31:39.865 [2024-11-02 11:43:34.458673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.865 [2024-11-02 11:43:34.458686] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:39.865 [2024-11-02 11:43:34.458697] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:39.865 [2024-11-02 11:43:34.458708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5912 len:8 PRP1 0x0 PRP2 0x0 00:31:39.865 [2024-11-02 11:43:34.458721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.865 [2024-11-02 11:43:34.458733] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:39.865 [2024-11-02 11:43:34.458744] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:39.865 [2024-11-02 11:43:34.458754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:8 PRP1 0x0 PRP2 0x0 00:31:39.865 [2024-11-02 11:43:34.458767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.865 [2024-11-02 11:43:34.458779] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:39.865 [2024-11-02 11:43:34.458790] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:39.865 [2024-11-02 11:43:34.458801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5928 len:8 PRP1 0x0 PRP2 0x0 00:31:39.865 [2024-11-02 11:43:34.458813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.865 [2024-11-02 11:43:34.458826] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:39.865 [2024-11-02 11:43:34.458836] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:39.865 [2024-11-02 11:43:34.458846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5936 len:8 PRP1 0x0 PRP2 0x0 00:31:39.865 [2024-11-02 11:43:34.458859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.865 [2024-11-02 11:43:34.458872] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:39.865 [2024-11-02 11:43:34.458882] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:39.865 [2024-11-02 11:43:34.458893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5944 len:8 PRP1 0x0 PRP2 0x0 00:31:39.865 [2024-11-02 11:43:34.458905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.865 [2024-11-02 11:43:34.458918] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:39.865 [2024-11-02 11:43:34.458928] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:39.865 [2024-11-02 11:43:34.458940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:8 PRP1 0x0 PRP2 0x0 
00:31:39.865 [2024-11-02 11:43:34.458952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.865 [2024-11-02 11:43:34.458965] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:39.865 [2024-11-02 11:43:34.458975] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:39.865 [2024-11-02 11:43:34.458985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5960 len:8 PRP1 0x0 PRP2 0x0 00:31:39.865 [2024-11-02 11:43:34.458998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.865 [2024-11-02 11:43:34.459010] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:39.865 [2024-11-02 11:43:34.459021] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:39.865 [2024-11-02 11:43:34.459031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5968 len:8 PRP1 0x0 PRP2 0x0 00:31:39.865 [2024-11-02 11:43:34.459047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.865 [2024-11-02 11:43:34.459060] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:39.865 [2024-11-02 11:43:34.459071] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:39.865 [2024-11-02 11:43:34.459090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5976 len:8 PRP1 0x0 PRP2 0x0 00:31:39.865 [2024-11-02 11:43:34.459103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.865 [2024-11-02 11:43:34.459115] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:39.865 [2024-11-02 11:43:34.459126] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:39.865 [2024-11-02 11:43:34.459137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:8 PRP1 0x0 PRP2 0x0 00:31:39.865 [2024-11-02 11:43:34.459149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.865 [2024-11-02 11:43:34.459162] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:39.865 [2024-11-02 11:43:34.459172] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:39.865 [2024-11-02 11:43:34.459182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5992 len:8 PRP1 0x0 PRP2 0x0 00:31:39.865 [2024-11-02 11:43:34.459195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.865 [2024-11-02 11:43:34.459208] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:39.865 [2024-11-02 11:43:34.459219] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:39.865 [2024-11-02 11:43:34.459229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6000 len:8 PRP1 0x0 PRP2 0x0 00:31:39.865 [2024-11-02 11:43:34.459241] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.865 [2024-11-02 11:43:34.459254] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:39.865 [2024-11-02 11:43:34.459290] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:39.865 [2024-11-02 11:43:34.459302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6008 len:8 PRP1 0x0 PRP2 0x0 00:31:39.865 [2024-11-02 11:43:34.459315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.865 [2024-11-02 11:43:34.459329] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:39.865 [2024-11-02 11:43:34.459339] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:39.865 [2024-11-02 11:43:34.459351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6016 len:8 PRP1 0x0 PRP2 0x0 00:31:39.865 [2024-11-02 11:43:34.459363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.865 [2024-11-02 11:43:34.459376] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:39.865 [2024-11-02 11:43:34.459388] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:39.865 [2024-11-02 11:43:34.459398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6024 len:8 PRP1 0x0 PRP2 0x0 00:31:39.865 [2024-11-02 11:43:34.459411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.865 [2024-11-02 11:43:34.459424] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:39.865 [2024-11-02 11:43:34.459438] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:39.865 [2024-11-02 11:43:34.459449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6032 len:8 PRP1 0x0 PRP2 0x0 00:31:39.865 [2024-11-02 11:43:34.459462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.865 [2024-11-02 11:43:34.459475] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:39.865 [2024-11-02 11:43:34.459486] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:39.865 [2024-11-02 11:43:34.459502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6040 len:8 PRP1 0x0 PRP2 0x0 00:31:39.865 [2024-11-02 11:43:34.459517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.865 [2024-11-02 11:43:34.459530] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:39.865 [2024-11-02 11:43:34.459541] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:39.866 [2024-11-02 11:43:34.459552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6048 len:8 PRP1 0x0 PRP2 0x0 00:31:39.866 [2024-11-02 11:43:34.459580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.866 [2024-11-02 11:43:34.459594] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:39.866 [2024-11-02 11:43:34.459604] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:39.866 [2024-11-02 11:43:34.459615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6056 len:8 PRP1 0x0 PRP2 0x0 00:31:39.866 [2024-11-02 11:43:34.459628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.866 [2024-11-02 11:43:34.459640] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:39.866 [2024-11-02 11:43:34.459651] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:39.866 [2024-11-02 11:43:34.459662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6064 len:8 PRP1 0x0 PRP2 0x0 00:31:39.866 [2024-11-02 11:43:34.459674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.866 [2024-11-02 11:43:34.459687] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:39.866 [2024-11-02 11:43:34.459698] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:39.866 [2024-11-02 11:43:34.459708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6072 len:8 PRP1 0x0 PRP2 0x0 00:31:39.866 [2024-11-02 11:43:34.459721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.866 [2024-11-02 11:43:34.459734] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:39.866 [2024-11-02 11:43:34.459745] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:39.866 [2024-11-02 11:43:34.459755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:8 PRP1 0x0 PRP2 0x0 00:31:39.866 [2024-11-02 11:43:34.459767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.866 [2024-11-02 11:43:34.459780] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:39.866 [2024-11-02 11:43:34.459790] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:39.866 [2024-11-02 11:43:34.459806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6088 len:8 PRP1 0x0 PRP2 0x0 00:31:39.866 [2024-11-02 11:43:34.459831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.866 [2024-11-02 11:43:34.459857] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:39.866 [2024-11-02 11:43:34.459869] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:39.866 [2024-11-02 11:43:34.459880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6096 len:8 PRP1 0x0 PRP2 0x0 00:31:39.866 [2024-11-02 11:43:34.459893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:31:39.866 [2024-11-02 11:43:34.459906] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:39.866 [2024-11-02 11:43:34.459916] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:39.866 [2024-11-02 11:43:34.459933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6104 len:8 PRP1 0x0 PRP2 0x0 00:31:39.866 [2024-11-02 11:43:34.459946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.866 [2024-11-02 11:43:34.459958] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:39.866 [2024-11-02 11:43:34.459971] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:39.866 [2024-11-02 11:43:34.459993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6112 len:8 PRP1 0x0 PRP2 0x0 00:31:39.866 [2024-11-02 11:43:34.460012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.866 [2024-11-02 11:43:34.460031] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:39.866 [2024-11-02 11:43:34.460052] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:39.866 [2024-11-02 11:43:34.460064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6120 len:8 PRP1 0x0 PRP2 0x0 00:31:39.866 [2024-11-02 11:43:34.460077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.866 [2024-11-02 11:43:34.460097] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:39.866 [2024-11-02 11:43:34.460107] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:39.866 [2024-11-02 11:43:34.460117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6128 len:8 PRP1 0x0 PRP2 0x0 00:31:39.866 [2024-11-02 11:43:34.460130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.866 [2024-11-02 11:43:34.460142] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:39.866 [2024-11-02 11:43:34.460159] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:39.866 [2024-11-02 11:43:34.460170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6136 len:8 PRP1 0x0 PRP2 0x0 00:31:39.866 [2024-11-02 11:43:34.460181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.866 [2024-11-02 11:43:34.460194] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:39.866 [2024-11-02 11:43:34.460205] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:39.866 [2024-11-02 11:43:34.460215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6144 len:8 PRP1 0x0 PRP2 0x0 00:31:39.866 [2024-11-02 11:43:34.460227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.866 [2024-11-02 11:43:34.460265] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:39.866 [2024-11-02 11:43:34.460284] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:39.866 [2024-11-02 11:43:34.460296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6152 len:8 PRP1 0x0 PRP2 0x0 00:31:39.866 [2024-11-02 11:43:34.460314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.866 [2024-11-02 11:43:34.460328] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:39.866 [2024-11-02 11:43:34.460339] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:39.866 [2024-11-02 11:43:34.460350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6160 len:8 PRP1 0x0 PRP2 0x0 00:31:39.866 [2024-11-02 11:43:34.460363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.866 [2024-11-02 11:43:34.460376] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:39.866 [2024-11-02 11:43:34.460387] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:39.866 [2024-11-02 11:43:34.460404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6168 len:8 PRP1 0x0 PRP2 0x0 00:31:39.866 [2024-11-02 11:43:34.460417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.866 [2024-11-02 11:43:34.460430] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:39.866 [2024-11-02 11:43:34.460441] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:39.866 [2024-11-02 11:43:34.460453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:8 PRP1 0x0 PRP2 0x0 00:31:39.866 [2024-11-02 11:43:34.460465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.866 [2024-11-02 11:43:34.460478] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:39.866 [2024-11-02 11:43:34.460489] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:39.866 [2024-11-02 11:43:34.460500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6184 len:8 PRP1 0x0 PRP2 0x0 00:31:39.866 [2024-11-02 11:43:34.460513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.866 [2024-11-02 11:43:34.460526] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:39.866 [2024-11-02 11:43:34.460537] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:39.866 [2024-11-02 11:43:34.460548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6192 len:8 PRP1 0x0 PRP2 0x0 00:31:39.866 [2024-11-02 11:43:34.460560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.866 [2024-11-02 11:43:34.460588] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:31:39.866 [2024-11-02 11:43:34.460599] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:39.866 [2024-11-02 11:43:34.460610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6200 len:8 PRP1 0x0 PRP2 0x0 00:31:39.866 [2024-11-02 11:43:34.460622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.866 [2024-11-02 11:43:34.460635] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:39.866 [2024-11-02 11:43:34.460645] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:39.866 [2024-11-02 11:43:34.460656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6208 len:8 PRP1 0x0 PRP2 0x0 00:31:39.866 [2024-11-02 11:43:34.460669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.866 [2024-11-02 11:43:34.460681] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:39.866 [2024-11-02 11:43:34.460692] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:39.866 [2024-11-02 11:43:34.460706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6216 len:8 PRP1 0x0 PRP2 0x0 00:31:39.866 [2024-11-02 11:43:34.460718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.866 [2024-11-02 11:43:34.460731] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:39.866 [2024-11-02 11:43:34.460741] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:39.866 [2024-11-02 11:43:34.460752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6224 len:8 PRP1 0x0 PRP2 0x0 00:31:39.866 [2024-11-02 11:43:34.460764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.866 [2024-11-02 11:43:34.460776] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:39.866 [2024-11-02 11:43:34.460786] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:39.866 [2024-11-02 11:43:34.460803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6232 len:8 PRP1 0x0 PRP2 0x0 00:31:39.866 [2024-11-02 11:43:34.460815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.866 [2024-11-02 11:43:34.460828] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:39.867 [2024-11-02 11:43:34.460838] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:39.867 [2024-11-02 11:43:34.460849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:8 PRP1 0x0 PRP2 0x0 00:31:39.867 [2024-11-02 11:43:34.460861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.867 [2024-11-02 11:43:34.460875] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:39.867 [2024-11-02 11:43:34.460885] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:39.867 [2024-11-02 11:43:34.460895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6248 len:8 PRP1 0x0 PRP2 0x0 00:31:39.867 [2024-11-02 11:43:34.460907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.867 [2024-11-02 11:43:34.460919] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:39.867 [2024-11-02 11:43:34.460930] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:39.867 [2024-11-02 11:43:34.460941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6256 len:8 PRP1 0x0 PRP2 0x0 00:31:39.867 [2024-11-02 11:43:34.460953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.867 [2024-11-02 11:43:34.460966] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:39.867 [2024-11-02 11:43:34.460976] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:39.867 [2024-11-02 11:43:34.475921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6264 len:8 PRP1 0x0 PRP2 0x0 00:31:39.867 [2024-11-02 11:43:34.475952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.867 [2024-11-02 11:43:34.475970] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:39.867 [2024-11-02 11:43:34.475982] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:39.867 [2024-11-02 11:43:34.475993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6272 len:8 PRP1 0x0 PRP2 0x0 00:31:39.867 [2024-11-02 11:43:34.476006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.867 [2024-11-02 11:43:34.476019] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:39.867 [2024-11-02 11:43:34.476036] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:39.867 [2024-11-02 11:43:34.476048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6280 len:8 PRP1 0x0 PRP2 0x0 00:31:39.867 [2024-11-02 11:43:34.476061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.867 [2024-11-02 11:43:34.476074] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:39.867 [2024-11-02 11:43:34.476084] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:39.867 [2024-11-02 11:43:34.476096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6288 len:8 PRP1 0x0 PRP2 0x0 00:31:39.867 [2024-11-02 11:43:34.476108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.867 [2024-11-02 11:43:34.476120] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:39.867 [2024-11-02 11:43:34.476130] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command 
completed manually: 00:31:39.867 [2024-11-02 11:43:34.476143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6296 len:8 PRP1 0x0 PRP2 0x0 00:31:39.867 [2024-11-02 11:43:34.476157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.867 [2024-11-02 11:43:34.476170] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:39.867 [2024-11-02 11:43:34.476180] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:39.867 [2024-11-02 11:43:34.476191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6304 len:8 PRP1 0x0 PRP2 0x0 00:31:39.867 [2024-11-02 11:43:34.476204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.867 [2024-11-02 11:43:34.476218] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:39.867 [2024-11-02 11:43:34.476228] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:39.867 [2024-11-02 11:43:34.476240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5464 len:8 PRP1 0x0 PRP2 0x0 00:31:39.867 [2024-11-02 11:43:34.476252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.867 [2024-11-02 11:43:34.476353] bdev_nvme.c:2035:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:31:39.867 [2024-11-02 11:43:34.476399] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:39.867 [2024-11-02 11:43:34.476419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.867 [2024-11-02 11:43:34.476435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:39.867 [2024-11-02 11:43:34.476449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.867 [2024-11-02 11:43:34.476476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:39.867 [2024-11-02 11:43:34.476489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.867 [2024-11-02 11:43:34.476503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:39.867 [2024-11-02 11:43:34.476517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.867 [2024-11-02 11:43:34.476543] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 
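The long run of ABORTED - SQ DELETION completions above is the expected signature of this test: when bdev_nvme starts the failover from 10.0.0.2:4422 to 10.0.0.2:4420 it tears down the active submission queue, every in-flight READ/WRITE on qid:1 is completed as aborted, the queued ASYNC EVENT REQUESTs on the admin queue are flushed, and the controller briefly sits in the failed state before it is disconnected and reset on the new path (the "Resetting controller successful" notice just below marks the recovery). A quick way to reduce a capture like this to its milestones is to filter for the state-change notices; a minimal sketch, assuming the output has been saved to the try.txt log the script inspects later (the grep patterns here are an illustrative choice, not taken from failover.sh):

# Pull only the path-change milestones out of the captured log (illustrative filter).
grep -E 'Start failover|resetting controller|Resetting controller successful' try.txt

# The test itself only asserts on the number of successful resets, as the trace
# further down shows; a standalone version of that check might look like:
count=$(grep -c 'Resetting controller successful' try.txt)
(( count == 3 )) || echo "expected 3 successful resets, got $count"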
00:31:39.867 [2024-11-02 11:43:34.476616] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb40890 (9): Bad file descriptor 00:31:39.867 [2024-11-02 11:43:34.479913] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:31:39.867 [2024-11-02 11:43:34.637982] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:31:39.867 7749.30 IOPS, 30.27 MiB/s [2024-11-02T10:43:40.269Z] 7802.73 IOPS, 30.48 MiB/s [2024-11-02T10:43:40.269Z] 7840.08 IOPS, 30.63 MiB/s [2024-11-02T10:43:40.269Z] 7892.62 IOPS, 30.83 MiB/s [2024-11-02T10:43:40.269Z] 7923.57 IOPS, 30.95 MiB/s [2024-11-02T10:43:40.269Z] 7960.67 IOPS, 31.10 MiB/s 00:31:39.867 Latency(us) 00:31:39.867 [2024-11-02T10:43:40.269Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:39.867 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:39.867 Verification LBA range: start 0x0 length 0x4000 00:31:39.867 NVMe0n1 : 15.02 7960.94 31.10 907.59 0.00 14406.00 819.20 30486.38 00:31:39.867 [2024-11-02T10:43:40.269Z] =================================================================================================================== 00:31:39.867 [2024-11-02T10:43:40.269Z] Total : 7960.94 31.10 907.59 0.00 14406.00 819.20 30486.38 00:31:39.867 Received shutdown signal, test time was about 15.000000 seconds 00:31:39.867 00:31:39.867 Latency(us) 00:31:39.867 [2024-11-02T10:43:40.269Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:39.867 [2024-11-02T10:43:40.269Z] =================================================================================================================== 00:31:39.867 [2024-11-02T10:43:40.269Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:39.867 11:43:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:31:39.867 11:43:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:31:39.867 11:43:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:31:39.867 11:43:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3941135 00:31:39.867 11:43:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:31:39.867 11:43:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3941135 /var/tmp/bdevperf.sock 00:31:39.867 11:43:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 3941135 ']' 00:31:39.867 11:43:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:39.867 11:43:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:39.867 11:43:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:39.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
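At this point the first bdevperf run has finished (the table above reports 7960.94 IOPS over the roughly 15 s run, and the grep just above confirms three "Resetting controller successful" notices), and the script launches a second, short-lived bdevperf in wait mode to exercise one more failover by hand. The trace that follows drives everything over the bdevperf RPC socket; below is a condensed sketch of that sequence, with $rootdir standing in for the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk path, the backgrounding and pid capture written out instead of going through the waitforlisten helper, and the three attach calls folded into a loop for brevity:

# Start bdevperf idle (-z) with its own RPC socket and remember its pid.
$rootdir/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
bdevperf_pid=$!

# Expose two extra target listeners, then attach the subsystem on each of the
# three ports under the same controller name with failover enabled, so 4421 and
# 4422 become alternate paths for NVMe0.
$rootdir/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
$rootdir/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
for port in 4420 4421 4422; do
  $rootdir/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s $port -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
done

# Drop the currently active path to force a failover, give it a moment, then
# run I/O through a surviving path and collect the results over the same socket.
$rootdir/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp \
  -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
sleep 3
$rootdir/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests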
00:31:39.867 11:43:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:39.867 11:43:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:40.125 11:43:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:40.125 11:43:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:31:40.125 11:43:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:40.384 [2024-11-02 11:43:40.734734] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:40.384 11:43:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:31:40.642 [2024-11-02 11:43:41.003443] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:31:40.642 11:43:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:31:41.257 NVMe0n1 00:31:41.257 11:43:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:31:41.561 00:31:41.561 11:43:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:31:41.847 00:31:41.847 11:43:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:41.847 11:43:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:31:42.105 11:43:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:42.364 11:43:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:31:45.653 11:43:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:45.653 11:43:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:31:45.653 11:43:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3941808 00:31:45.653 11:43:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:45.653 11:43:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 3941808 00:31:47.032 { 00:31:47.032 "results": [ 00:31:47.032 { 00:31:47.032 "job": "NVMe0n1", 00:31:47.032 "core_mask": "0x1", 
00:31:47.032 "workload": "verify", 00:31:47.032 "status": "finished", 00:31:47.032 "verify_range": { 00:31:47.032 "start": 0, 00:31:47.032 "length": 16384 00:31:47.032 }, 00:31:47.032 "queue_depth": 128, 00:31:47.032 "io_size": 4096, 00:31:47.032 "runtime": 1.006944, 00:31:47.032 "iops": 8315.258842597006, 00:31:47.032 "mibps": 32.481479853894555, 00:31:47.032 "io_failed": 0, 00:31:47.032 "io_timeout": 0, 00:31:47.032 "avg_latency_us": 15332.07387201366, 00:31:47.032 "min_latency_us": 1049.7896296296296, 00:31:47.032 "max_latency_us": 16990.814814814814 00:31:47.032 } 00:31:47.032 ], 00:31:47.032 "core_count": 1 00:31:47.032 } 00:31:47.032 11:43:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:47.032 [2024-11-02 11:43:40.254962] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:31:47.032 [2024-11-02 11:43:40.255063] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3941135 ] 00:31:47.032 [2024-11-02 11:43:40.323766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:47.032 [2024-11-02 11:43:40.367537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:47.032 [2024-11-02 11:43:42.664816] bdev_nvme.c:2035:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:31:47.032 [2024-11-02 11:43:42.664919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:47.032 [2024-11-02 11:43:42.664942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.032 [2024-11-02 11:43:42.664960] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:47.032 [2024-11-02 11:43:42.664974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.032 [2024-11-02 11:43:42.664988] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:47.032 [2024-11-02 11:43:42.665001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.032 [2024-11-02 11:43:42.665016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:47.032 [2024-11-02 11:43:42.665029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.032 [2024-11-02 11:43:42.665043] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 
00:31:47.032 [2024-11-02 11:43:42.665095] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:31:47.032 [2024-11-02 11:43:42.665128] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dfc890 (9): Bad file descriptor 00:31:47.032 [2024-11-02 11:43:42.767459] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:31:47.032 Running I/O for 1 seconds... 00:31:47.032 8245.00 IOPS, 32.21 MiB/s 00:31:47.032 Latency(us) 00:31:47.032 [2024-11-02T10:43:47.434Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:47.032 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:47.032 Verification LBA range: start 0x0 length 0x4000 00:31:47.032 NVMe0n1 : 1.01 8315.26 32.48 0.00 0.00 15332.07 1049.79 16990.81 00:31:47.032 [2024-11-02T10:43:47.434Z] =================================================================================================================== 00:31:47.032 [2024-11-02T10:43:47.434Z] Total : 8315.26 32.48 0.00 0.00 15332.07 1049.79 16990.81 00:31:47.032 11:43:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:47.032 11:43:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:31:47.032 11:43:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:47.290 11:43:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:47.290 11:43:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:31:47.548 11:43:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:48.118 11:43:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:31:51.407 11:43:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:51.407 11:43:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:31:51.407 11:43:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 3941135 00:31:51.407 11:43:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 3941135 ']' 00:31:51.407 11:43:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 3941135 00:31:51.407 11:43:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:31:51.407 11:43:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:51.407 11:43:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3941135 00:31:51.407 11:43:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:31:51.407 11:43:51 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:31:51.407 11:43:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3941135' 00:31:51.407 killing process with pid 3941135 00:31:51.407 11:43:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 3941135 00:31:51.407 11:43:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 3941135 00:31:51.407 11:43:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:31:51.407 11:43:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:51.665 11:43:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:31:51.665 11:43:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:51.665 11:43:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:31:51.665 11:43:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:51.665 11:43:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:31:51.665 11:43:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:51.665 11:43:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:31:51.665 11:43:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:51.665 11:43:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:51.665 rmmod nvme_tcp 00:31:51.665 rmmod nvme_fabrics 00:31:51.665 rmmod nvme_keyring 00:31:51.665 11:43:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:51.665 11:43:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:31:51.665 11:43:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:31:51.665 11:43:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 3938878 ']' 00:31:51.665 11:43:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 3938878 00:31:51.665 11:43:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 3938878 ']' 00:31:51.665 11:43:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 3938878 00:31:51.665 11:43:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:31:51.665 11:43:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:51.665 11:43:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3938878 00:31:51.924 11:43:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:31:51.924 11:43:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:31:51.924 11:43:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3938878' 00:31:51.924 killing process with pid 3938878 00:31:51.924 11:43:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 3938878 00:31:51.924 11:43:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 3938878 00:31:51.924 11:43:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso 
']' 00:31:51.924 11:43:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:51.924 11:43:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:51.924 11:43:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:31:51.924 11:43:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:31:51.924 11:43:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:51.924 11:43:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:31:51.924 11:43:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:51.924 11:43:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:51.924 11:43:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:51.924 11:43:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:51.924 11:43:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:54.467 11:43:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:54.467 00:31:54.467 real 0m35.305s 00:31:54.467 user 2m3.747s 00:31:54.467 sys 0m6.338s 00:31:54.467 11:43:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:54.467 11:43:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:54.467 ************************************ 00:31:54.467 END TEST nvmf_failover 00:31:54.467 ************************************ 00:31:54.467 11:43:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:31:54.467 11:43:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:31:54.467 11:43:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:54.467 11:43:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.467 ************************************ 00:31:54.467 START TEST nvmf_host_discovery 00:31:54.467 ************************************ 00:31:54.467 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:31:54.467 * Looking for test storage... 
00:31:54.467 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:54.467 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:54.467 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:31:54.467 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:54.467 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:54.467 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:54.467 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:54.467 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:54.467 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:31:54.467 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:31:54.467 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:31:54.467 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:31:54.467 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:31:54.467 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:31:54.467 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:31:54.467 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:54.467 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:31:54.467 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:31:54.467 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:54.467 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:54.467 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:31:54.467 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:31:54.467 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:54.467 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:31:54.467 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:31:54.468 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:31:54.468 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:31:54.468 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:54.468 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:31:54.468 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:31:54.468 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:54.468 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:54.468 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:31:54.468 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:54.468 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:54.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:54.468 --rc genhtml_branch_coverage=1 00:31:54.468 --rc genhtml_function_coverage=1 00:31:54.468 --rc genhtml_legend=1 00:31:54.468 --rc geninfo_all_blocks=1 00:31:54.468 --rc geninfo_unexecuted_blocks=1 00:31:54.468 00:31:54.468 ' 00:31:54.468 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:54.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:54.468 --rc genhtml_branch_coverage=1 00:31:54.468 --rc genhtml_function_coverage=1 00:31:54.468 --rc genhtml_legend=1 00:31:54.468 --rc geninfo_all_blocks=1 00:31:54.468 --rc geninfo_unexecuted_blocks=1 00:31:54.468 00:31:54.468 ' 00:31:54.468 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:54.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:54.468 --rc genhtml_branch_coverage=1 00:31:54.468 --rc genhtml_function_coverage=1 00:31:54.468 --rc genhtml_legend=1 00:31:54.468 --rc geninfo_all_blocks=1 00:31:54.468 --rc geninfo_unexecuted_blocks=1 00:31:54.468 00:31:54.468 ' 00:31:54.468 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:54.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:54.468 --rc genhtml_branch_coverage=1 00:31:54.468 --rc genhtml_function_coverage=1 00:31:54.468 --rc genhtml_legend=1 00:31:54.468 --rc geninfo_all_blocks=1 00:31:54.468 --rc geninfo_unexecuted_blocks=1 00:31:54.468 00:31:54.468 ' 00:31:54.468 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:54.468 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:31:54.468 11:43:54 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:54.468 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:54.468 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:54.468 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:54.468 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:54.468 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:54.468 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:54.468 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:54.468 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:54.468 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:54.468 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:54.468 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:54.468 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:54.468 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:54.468 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:54.468 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:54.468 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:54.468 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:31:54.468 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:54.468 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:54.468 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:54.468 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.468 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.468 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.468 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:31:54.468 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.468 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:31:54.468 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:54.468 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:54.468 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:54.468 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:54.468 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:54.468 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:54.468 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:54.468 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:54.468 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:54.468 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:54.468 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:31:54.468 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:31:54.468 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:31:54.468 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:31:54.468 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:31:54.468 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:31:54.468 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:31:54.468 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:54.468 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:54.468 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:54.468 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:54.468 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:54.468 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:54.468 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:54.468 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:54.469 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:54.469 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:54.469 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:31:54.469 11:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:56.376 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:56.376 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:56.376 11:43:56 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:56.376 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:56.376 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:56.376 
11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:56.376 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:56.377 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:56.377 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:56.377 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:56.377 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:56.377 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:56.377 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:56.377 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:56.377 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.378 ms 00:31:56.377 00:31:56.377 --- 10.0.0.2 ping statistics --- 00:31:56.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:56.377 rtt min/avg/max/mdev = 0.378/0.378/0.378/0.000 ms 00:31:56.377 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:56.377 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:56.377 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:31:56.377 00:31:56.377 --- 10.0.0.1 ping statistics --- 00:31:56.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:56.377 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:31:56.377 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:56.377 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:31:56.377 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:56.377 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:56.377 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:56.377 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:56.377 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:56.377 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:56.377 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:56.377 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:31:56.377 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:56.377 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:56.377 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:56.377 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=3944415 00:31:56.377 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:56.377 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 3944415 00:31:56.377 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 3944415 ']' 00:31:56.377 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:56.377 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:56.377 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:56.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:56.377 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:56.377 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:56.377 [2024-11-02 11:43:56.689143] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 
00:31:56.377 [2024-11-02 11:43:56.689229] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:56.377 [2024-11-02 11:43:56.767773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:56.637 [2024-11-02 11:43:56.820249] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:56.637 [2024-11-02 11:43:56.820353] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:56.637 [2024-11-02 11:43:56.820367] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:56.637 [2024-11-02 11:43:56.820378] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:56.637 [2024-11-02 11:43:56.820387] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:56.637 [2024-11-02 11:43:56.821052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:56.637 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:56.637 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:31:56.637 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:56.637 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:56.637 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:56.637 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:56.637 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:56.637 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.637 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:56.637 [2024-11-02 11:43:56.991827] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:56.637 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.637 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:31:56.637 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.637 11:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:56.637 [2024-11-02 11:43:57.000004] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:56.637 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.637 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:31:56.637 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.637 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:56.637 null0 00:31:56.637 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.637 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:31:56.637 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.637 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:56.637 null1 00:31:56.637 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.637 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:31:56.637 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.637 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:56.637 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.637 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3944546 00:31:56.637 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:31:56.637 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 3944546 /tmp/host.sock 00:31:56.637 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 3944546 ']' 00:31:56.637 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:31:56.637 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:56.637 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:31:56.637 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:56.637 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:56.637 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:56.896 [2024-11-02 11:43:57.073379] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 
00:31:56.896 [2024-11-02 11:43:57.073452] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3944546 ] 00:31:56.896 [2024-11-02 11:43:57.143644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:56.896 [2024-11-02 11:43:57.192985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:57.154 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:57.154 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:31:57.154 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:57.154 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:31:57.154 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.154 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:57.154 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.154 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:31:57.154 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.154 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:57.155 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.155 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:31:57.155 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:31:57.155 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:57.155 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:57.155 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.155 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:57.155 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:57.155 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:57.155 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.155 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:31:57.155 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:31:57.155 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:57.155 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:57.155 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.155 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:31:57.155 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:57.155 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:57.155 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.155 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:31:57.155 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:31:57.155 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.155 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:57.155 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.155 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:31:57.155 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:57.155 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:57.155 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.155 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:57.155 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:57.155 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:57.155 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.155 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:31:57.155 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:31:57.155 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:57.155 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:57.155 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.155 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:57.155 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:57.155 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:57.155 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.155 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:31:57.155 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:31:57.155 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.155 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:57.155 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.155 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:31:57.155 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:57.414 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:57.414 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.414 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:57.414 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:57.414 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:57.414 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.414 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:31:57.414 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:31:57.414 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:57.414 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:57.414 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.414 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:57.414 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:57.414 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:57.414 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.414 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:31:57.414 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:57.414 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.414 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:57.414 [2024-11-02 11:43:57.641684] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:57.414 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.414 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:31:57.414 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:57.414 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.414 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:57.414 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:57.414 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:57.414 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:57.414 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.414 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:31:57.414 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:31:57.414 11:43:57 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:57.414 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:57.414 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.414 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:57.414 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:57.414 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:57.414 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.414 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:31:57.414 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:31:57.414 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:57.414 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:57.414 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:57.414 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:31:57.414 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:31:57.414 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:57.414 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:31:57.414 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:57.414 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:57.414 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.414 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:57.414 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.414 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:57.414 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:31:57.414 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:31:57.414 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:31:57.414 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:31:57.414 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.414 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:57.414 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.414 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:57.414 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:57.414 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:31:57.414 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:31:57.414 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:57.414 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:31:57.414 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:57.414 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.414 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:57.414 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:57.414 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:57.414 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:57.414 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.673 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == \n\v\m\e\0 ]] 00:31:57.673 11:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:31:58.239 [2024-11-02 11:43:58.410448] bdev_nvme.c:7291:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:58.239 [2024-11-02 11:43:58.410489] bdev_nvme.c:7377:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:58.239 [2024-11-02 11:43:58.410512] bdev_nvme.c:7254:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:58.239 
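The stretch above is the harness polling two host-side RPC queries until the discovery service has attached the new controller: host/discovery.sh@59 lists the attached controllers and host/discovery.sh@55 lists the bdevs they expose, each piped through jq, sort and xargs and retried once per second. Stripped of the xtrace noise, the pattern amounts to the sketch below; it assumes SPDK's scripts/rpc.py client and the /tmp/host.sock application socket used in this run, and wait_for is a hypothetical stand-in for the harness's waitforcondition helper rather than anything in the SPDK tree.

get_subsystem_names() {
    # controller names the host has attached (empty until discovery finds nvme0)
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
}

get_bdev_list() {
    # namespaces exposed through those controllers, e.g. "nvme0n1"
    scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

wait_for() {
    # hypothetical helper: retry a condition up to 10 times, one second apart
    local cond=$1 max=10
    while (( max-- )); do
        eval "$cond" && return 0
        sleep 1
    done
    return 1
}

wait_for '[[ "$(get_subsystem_names)" == "nvme0" ]]'
wait_for '[[ "$(get_bdev_list)" == "nvme0n1" ]]'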
[2024-11-02 11:43:58.497812] bdev_nvme.c:7220:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:31:58.497 [2024-11-02 11:43:58.678015] bdev_nvme.c:5582:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:31:58.497 [2024-11-02 11:43:58.679130] bdev_nvme.c:1963:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x22696f0:1 started. 00:31:58.497 [2024-11-02 11:43:58.681114] bdev_nvme.c:7110:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:58.497 [2024-11-02 11:43:58.681140] bdev_nvme.c:7069:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:58.497 [2024-11-02 11:43:58.688208] bdev_nvme.c:1779:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x22696f0 was disconnected and freed. delete nvme_qpair. 00:31:58.497 11:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:31:58.497 11:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:58.497 11:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:31:58.497 11:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:58.497 11:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:58.497 11:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.497 11:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:58.497 11:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:58.497 11:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:58.497 11:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.497 11:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:58.497 11:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:31:58.498 11:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:31:58.498 11:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:31:58.498 11:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:31:58.498 11:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:31:58.498 11:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:31:58.498 11:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:31:58.498 11:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:58.498 11:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:58.498 11:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.498 11:43:58 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:58.498 11:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:58.498 11:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:58.498 11:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.757 11:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:31:58.757 11:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:31:58.757 11:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:31:58.757 11:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:31:58.757 11:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:31:58.757 11:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:31:58.757 11:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:31:58.757 11:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:31:58.757 11:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:58.757 11:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.757 11:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:58.757 11:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:58.757 11:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:58.757 11:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:58.757 11:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.757 11:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0 ]] 00:31:58.757 11:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:31:58.757 11:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:31:58.757 11:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:31:58.757 11:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:58.757 11:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:58.757 11:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:31:58.757 11:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:31:58.757 11:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:58.757 11:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@919 -- # get_notification_count 00:31:58.757 11:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:58.757 11:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.757 11:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:31:58.757 11:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:58.757 11:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.757 11:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:31:58.757 11:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:31:58.757 11:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:31:58.757 11:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:31:58.757 11:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:31:58.757 11:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.757 11:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:58.757 11:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.757 11:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:58.757 11:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:58.757 11:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:31:58.757 11:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:31:58.757 11:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:58.757 11:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:31:58.757 11:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:58.757 11:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:58.757 11:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.757 11:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:58.757 11:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:58.757 11:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:58.757 [2024-11-02 11:43:59.001452] bdev_nvme.c:1963:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x2237f40:1 started. 00:31:58.757 [2024-11-02 11:43:59.008685] bdev_nvme.c:1779:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x2237f40 was disconnected and freed. delete nvme_qpair. 
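The notification bookkeeping here (host/discovery.sh@74 and @75) reduces to asking the host application how many notify events arrived past the last id already accounted for; judging by the values in this trace, notify_id advances by the returned count (0 to 1 at this point, later 2 and then 4). A sketch under the same assumptions as above, with scripts/rpc.py standing in for the harness's rpc_cmd wrapper:

notify_id=0

get_notification_count() {
    # how many notify events (e.g. bdev_register for nvme0n1) arrived past $notify_id
    notification_count=$(scripts/rpc.py -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
    notify_id=$((notify_id + notification_count))
}

get_notification_count    # one event after the first namespace shows up
(( notification_count == 1 )) && echo "saw the expected notification"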
00:31:58.757 11:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.757 11:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:58.757 11:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:31:58.757 11:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:31:58.757 11:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:31:58.757 11:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:58.757 11:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:58.757 11:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:31:58.757 11:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:31:58.757 11:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:58.757 11:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:31:58.757 11:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:31:58.757 11:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:31:58.757 11:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.757 11:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:58.757 11:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.757 11:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:31:58.757 11:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:58.757 11:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:31:58.757 11:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:31:58.757 11:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:31:58.757 11:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.757 11:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:58.757 [2024-11-02 11:43:59.077839] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:58.757 [2024-11-02 11:43:59.078224] bdev_nvme.c:7273:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:58.757 [2024-11-02 11:43:59.078265] bdev_nvme.c:7254:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:58.757 11:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.757 11:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:31:58.757 11:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:58.757 11:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:31:58.758 11:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:31:58.758 11:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:58.758 11:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:31:58.758 11:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:58.758 11:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:58.758 11:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.758 11:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:58.758 11:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:58.758 11:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:58.758 11:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.758 11:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:58.758 11:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:31:58.758 11:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:58.758 11:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:58.758 11:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:31:58.758 11:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:31:58.758 11:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:58.758 11:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:31:58.758 11:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:58.758 11:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:58.758 11:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.758 11:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:58.758 11:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:58.758 11:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:58.758 11:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.016 11:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:59.016 11:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:31:59.016 11:43:59 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:31:59.016 11:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:31:59.016 11:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:31:59.016 11:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:31:59.016 11:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:31:59.016 11:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:31:59.016 11:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:59.016 11:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.016 11:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:59.016 11:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:59.016 11:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:59.016 11:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:59.016 11:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.016 [2024-11-02 11:43:59.205125] bdev_nvme.c:7215:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:31:59.016 11:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:31:59.016 11:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:31:59.275 [2024-11-02 11:43:59.507786] bdev_nvme.c:5582:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:31:59.275 [2024-11-02 11:43:59.507849] bdev_nvme.c:7110:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:59.275 [2024-11-02 11:43:59.507866] bdev_nvme.c:7069:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:59.275 [2024-11-02 11:43:59.507874] bdev_nvme.c:7069:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:59.840 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:31:59.840 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:31:59.840 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:31:59.840 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:59.840 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:59.840 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:31:59.840 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:59.840 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:59.840 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:59.840 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.100 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:32:00.100 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:32:00.100 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:32:00.100 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:00.100 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:00.100 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:00.100 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:32:00.100 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:32:00.100 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:00.100 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:32:00.100 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:00.100 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:00.100 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.100 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:00.100 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.100 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:00.100 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:00.100 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:32:00.100 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:32:00.100 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:00.100 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.100 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:00.100 [2024-11-02 11:44:00.306148] bdev_nvme.c:7273:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:32:00.100 [2024-11-02 11:44:00.306189] bdev_nvme.c:7254:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:00.100 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.100 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:00.100 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:00.100 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:32:00.100 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:32:00.101 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:00.101 [2024-11-02 11:44:00.312396] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 ns 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:32:00.101 id:0 cdw10:00000000 cdw11:00000000 00:32:00.101 [2024-11-02 11:44:00.312447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:00.101 [2024-11-02 11:44:00.312466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:00.101 [2024-11-02 11:44:00.312481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:00.101 [2024-11-02 11:44:00.312506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:00.101 [2024-11-02 11:44:00.312519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:00.101 [2024-11-02 11:44:00.312533] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST 
(0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:00.101 [2024-11-02 11:44:00.312547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:00.101 [2024-11-02 11:44:00.312560] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223b900 is same with the state(6) to be set 00:32:00.101 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:00.101 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:00.101 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.101 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:00.101 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:00.101 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:00.101 [2024-11-02 11:44:00.322389] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x223b900 (9): Bad file descriptor 00:32:00.101 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.101 [2024-11-02 11:44:00.332431] bdev_nvme.c:2536:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:00.101 [2024-11-02 11:44:00.332456] bdev_nvme.c:2524:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:00.101 [2024-11-02 11:44:00.332468] bdev_nvme.c:2112:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:00.101 [2024-11-02 11:44:00.332477] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:00.101 [2024-11-02 11:44:00.332510] bdev_nvme.c:2508:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:32:00.101 [2024-11-02 11:44:00.332771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.101 [2024-11-02 11:44:00.332801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x223b900 with addr=10.0.0.2, port=4420 00:32:00.101 [2024-11-02 11:44:00.332819] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223b900 is same with the state(6) to be set 00:32:00.101 [2024-11-02 11:44:00.332842] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x223b900 (9): Bad file descriptor 00:32:00.101 [2024-11-02 11:44:00.332864] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:00.101 [2024-11-02 11:44:00.332894] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:00.101 [2024-11-02 11:44:00.332910] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:00.101 [2024-11-02 11:44:00.332924] bdev_nvme.c:2498:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:00.101 [2024-11-02 11:44:00.332940] bdev_nvme.c:2305:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:32:00.101 [2024-11-02 11:44:00.332957] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:00.101 [2024-11-02 11:44:00.342543] bdev_nvme.c:2536:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:00.101 [2024-11-02 11:44:00.342564] bdev_nvme.c:2524:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:00.101 [2024-11-02 11:44:00.342573] bdev_nvme.c:2112:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:00.101 [2024-11-02 11:44:00.342581] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:00.101 [2024-11-02 11:44:00.342619] bdev_nvme.c:2508:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:32:00.101 [2024-11-02 11:44:00.342818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.101 [2024-11-02 11:44:00.342847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x223b900 with addr=10.0.0.2, port=4420 00:32:00.101 [2024-11-02 11:44:00.342863] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223b900 is same with the state(6) to be set 00:32:00.101 [2024-11-02 11:44:00.342896] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x223b900 (9): Bad file descriptor 00:32:00.101 [2024-11-02 11:44:00.342920] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:00.101 [2024-11-02 11:44:00.342934] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:00.101 [2024-11-02 11:44:00.342948] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:00.101 [2024-11-02 11:44:00.342961] bdev_nvme.c:2498:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:00.101 [2024-11-02 11:44:00.342987] bdev_nvme.c:2305:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:00.101 [2024-11-02 11:44:00.343002] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:00.101 [2024-11-02 11:44:00.352655] bdev_nvme.c:2536:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:00.101 [2024-11-02 11:44:00.352679] bdev_nvme.c:2524:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:00.101 [2024-11-02 11:44:00.352703] bdev_nvme.c:2112:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:00.101 [2024-11-02 11:44:00.352710] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:00.101 [2024-11-02 11:44:00.352735] bdev_nvme.c:2508:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
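The burst of connect() failed, errno = 111 entries in this region is expected fallout of the nvmf_subsystem_remove_listener call at host/discovery.sh@127: the 4420 listener is gone on the target, so every reconnect attempt to 10.0.0.2:4420 is refused until the next discovery log page drops that path and only 4421 remains. The condition the harness is driving toward is the path query at host/discovery.sh@63; roughly, under the same assumptions as the earlier sketches and reusing the hypothetical wait_for helper:

get_subsystem_paths() {
    # trsvcid of every active path under controller $1, e.g. "4420 4421"
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}

# target side: drop the first listener, keep 4421
scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# host side: eventually only the second port remains
wait_for '[[ "$(get_subsystem_paths nvme0)" == "4421" ]]'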
00:32:00.101 [2024-11-02 11:44:00.352973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.101 [2024-11-02 11:44:00.353002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x223b900 with addr=10.0.0.2, port=4420 00:32:00.101 [2024-11-02 11:44:00.353019] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223b900 is same with the state(6) to be set 00:32:00.101 [2024-11-02 11:44:00.353042] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x223b900 (9): Bad file descriptor 00:32:00.101 [2024-11-02 11:44:00.353063] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:00.101 [2024-11-02 11:44:00.353078] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:00.101 [2024-11-02 11:44:00.353091] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:00.101 [2024-11-02 11:44:00.353111] bdev_nvme.c:2498:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:00.101 [2024-11-02 11:44:00.353123] bdev_nvme.c:2305:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:00.101 [2024-11-02 11:44:00.353138] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:00.101 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:00.101 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:32:00.101 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:00.101 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:00.101 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:32:00.101 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:32:00.101 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:00.101 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:32:00.101 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:00.101 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:00.101 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.101 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:00.101 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:00.101 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:00.101 [2024-11-02 11:44:00.363498] bdev_nvme.c:2536:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:00.101 [2024-11-02 11:44:00.363528] bdev_nvme.c:2524:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:32:00.101 [2024-11-02 11:44:00.363553] bdev_nvme.c:2112:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:00.101 [2024-11-02 11:44:00.363562] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:00.101 [2024-11-02 11:44:00.363588] bdev_nvme.c:2508:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:32:00.101 [2024-11-02 11:44:00.363807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.101 [2024-11-02 11:44:00.363837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x223b900 with addr=10.0.0.2, port=4420 00:32:00.102 [2024-11-02 11:44:00.363854] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223b900 is same with the state(6) to be set 00:32:00.102 [2024-11-02 11:44:00.363877] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x223b900 (9): Bad file descriptor 00:32:00.102 [2024-11-02 11:44:00.363898] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:00.102 [2024-11-02 11:44:00.363913] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:00.102 [2024-11-02 11:44:00.363926] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:00.102 [2024-11-02 11:44:00.363939] bdev_nvme.c:2498:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:00.102 [2024-11-02 11:44:00.363948] bdev_nvme.c:2305:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:00.102 [2024-11-02 11:44:00.363964] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:00.102 [2024-11-02 11:44:00.373621] bdev_nvme.c:2536:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:00.102 [2024-11-02 11:44:00.373645] bdev_nvme.c:2524:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:00.102 [2024-11-02 11:44:00.373654] bdev_nvme.c:2112:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:00.102 [2024-11-02 11:44:00.373662] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:00.102 [2024-11-02 11:44:00.373686] bdev_nvme.c:2508:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:32:00.102 [2024-11-02 11:44:00.373859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.102 [2024-11-02 11:44:00.373888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x223b900 with addr=10.0.0.2, port=4420 00:32:00.102 [2024-11-02 11:44:00.373905] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223b900 is same with the state(6) to be set 00:32:00.102 [2024-11-02 11:44:00.373927] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x223b900 (9): Bad file descriptor 00:32:00.102 [2024-11-02 11:44:00.373948] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:00.102 [2024-11-02 11:44:00.373962] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:00.102 [2024-11-02 11:44:00.373976] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:00.102 [2024-11-02 11:44:00.373988] bdev_nvme.c:2498:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:00.102 [2024-11-02 11:44:00.373997] bdev_nvme.c:2305:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:00.102 [2024-11-02 11:44:00.374012] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:00.102 [2024-11-02 11:44:00.383719] bdev_nvme.c:2536:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:00.102 [2024-11-02 11:44:00.383742] bdev_nvme.c:2524:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:00.102 [2024-11-02 11:44:00.383751] bdev_nvme.c:2112:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:00.102 [2024-11-02 11:44:00.383759] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:00.102 [2024-11-02 11:44:00.383784] bdev_nvme.c:2508:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:32:00.102 [2024-11-02 11:44:00.383940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:00.102 [2024-11-02 11:44:00.383968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x223b900 with addr=10.0.0.2, port=4420 00:32:00.102 [2024-11-02 11:44:00.383985] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223b900 is same with the state(6) to be set 00:32:00.102 [2024-11-02 11:44:00.384008] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x223b900 (9): Bad file descriptor 00:32:00.102 [2024-11-02 11:44:00.384029] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:00.102 [2024-11-02 11:44:00.384044] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:00.102 [2024-11-02 11:44:00.384057] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:00.102 [2024-11-02 11:44:00.384070] bdev_nvme.c:2498:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:32:00.102 [2024-11-02 11:44:00.384079] bdev_nvme.c:2305:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:00.102 [2024-11-02 11:44:00.384099] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:00.102 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.102 [2024-11-02 11:44:00.393634] bdev_nvme.c:7078:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:32:00.102 [2024-11-02 11:44:00.393665] bdev_nvme.c:7069:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:00.102 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:00.102 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:32:00.102 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:32:00.102 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:32:00.102 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:32:00.102 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:32:00.102 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:32:00.102 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:32:00.102 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:00.102 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:00.102 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.102 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:00.102 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:00.102 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:00.102 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.102 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4421 == \4\4\2\1 ]] 00:32:00.102 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:32:00.102 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:32:00.102 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:00.102 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:00.102 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:00.102 11:44:00 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:32:00.102 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:32:00.102 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:00.102 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:32:00.102 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:00.102 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.102 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:00.102 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:00.102 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.102 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:00.102 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:00.102 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:32:00.102 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:32:00.102 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:32:00.102 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.102 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:00.102 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.102 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:32:00.102 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:32:00.102 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:32:00.102 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:32:00.102 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:32:00.360 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:32:00.360 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:00.360 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:00.360 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.360 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:00.360 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:00.360 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:00.360 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.360 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:32:00.360 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:32:00.360 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:32:00.360 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:32:00.360 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:32:00.360 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:32:00.360 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:32:00.360 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:32:00.360 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:00.360 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:00.360 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.360 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:00.360 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:00.360 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:00.360 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.360 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:32:00.360 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:32:00.360 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:32:00.360 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:32:00.360 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:00.360 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:00.360 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:32:00.360 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:32:00.360 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:00.360 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:32:00.360 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:00.360 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:00.360 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.360 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:00.360 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.360 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:32:00.360 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:32:00.360 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:32:00.360 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:32:00.360 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:00.360 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.361 11:44:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:01.294 [2024-11-02 11:44:01.648163] bdev_nvme.c:7291:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:01.294 [2024-11-02 11:44:01.648192] bdev_nvme.c:7377:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:01.294 [2024-11-02 11:44:01.648216] bdev_nvme.c:7254:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:01.552 [2024-11-02 11:44:01.735504] bdev_nvme.c:7220:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:32:01.552 [2024-11-02 11:44:01.800218] bdev_nvme.c:5582:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:32:01.552 [2024-11-02 11:44:01.801007] bdev_nvme.c:1963:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x22510b0:1 started. 
00:32:01.552 [2024-11-02 11:44:01.803317] bdev_nvme.c:7110:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:01.552 [2024-11-02 11:44:01.803348] bdev_nvme.c:7069:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:01.552 11:44:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.552 11:44:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:01.552 11:44:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:32:01.552 11:44:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:01.552 11:44:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:01.552 11:44:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:01.552 11:44:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:01.552 [2024-11-02 11:44:01.806652] bdev_nvme.c:1779:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x22510b0 was disconnected and freed. delete nvme_qpair. 00:32:01.552 11:44:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:01.552 11:44:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:01.552 11:44:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.552 11:44:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:01.552 request: 00:32:01.552 { 00:32:01.552 "name": "nvme", 00:32:01.552 "trtype": "tcp", 00:32:01.552 "traddr": "10.0.0.2", 00:32:01.552 "adrfam": "ipv4", 00:32:01.552 "trsvcid": "8009", 00:32:01.552 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:01.552 "wait_for_attach": true, 00:32:01.552 "method": "bdev_nvme_start_discovery", 00:32:01.552 "req_id": 1 00:32:01.552 } 00:32:01.552 Got JSON-RPC error response 00:32:01.552 response: 00:32:01.552 { 00:32:01.552 "code": -17, 00:32:01.552 "message": "File exists" 00:32:01.552 } 00:32:01.552 11:44:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:01.552 11:44:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:32:01.553 11:44:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:01.553 11:44:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:01.553 11:44:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:01.553 11:44:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:32:01.553 11:44:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:01.553 11:44:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@67 -- # jq -r '.[].name' 00:32:01.553 11:44:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.553 11:44:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:01.553 11:44:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:01.553 11:44:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:01.553 11:44:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.553 11:44:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:32:01.553 11:44:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:32:01.553 11:44:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:01.553 11:44:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:01.553 11:44:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.553 11:44:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:01.553 11:44:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:01.553 11:44:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:01.553 11:44:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.553 11:44:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:01.553 11:44:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:01.553 11:44:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:32:01.553 11:44:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:01.553 11:44:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:01.553 11:44:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:01.553 11:44:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:01.553 11:44:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:01.553 11:44:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:01.553 11:44:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.553 11:44:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:01.553 request: 00:32:01.553 { 00:32:01.553 "name": "nvme_second", 00:32:01.553 "trtype": "tcp", 00:32:01.553 "traddr": "10.0.0.2", 00:32:01.553 "adrfam": "ipv4", 00:32:01.553 "trsvcid": "8009", 00:32:01.553 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:01.553 "wait_for_attach": true, 00:32:01.553 "method": 
"bdev_nvme_start_discovery", 00:32:01.553 "req_id": 1 00:32:01.553 } 00:32:01.553 Got JSON-RPC error response 00:32:01.553 response: 00:32:01.553 { 00:32:01.553 "code": -17, 00:32:01.553 "message": "File exists" 00:32:01.553 } 00:32:01.553 11:44:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:01.553 11:44:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:32:01.553 11:44:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:01.553 11:44:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:01.553 11:44:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:01.553 11:44:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:32:01.553 11:44:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:01.553 11:44:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.553 11:44:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:01.553 11:44:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:01.553 11:44:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:01.553 11:44:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:01.553 11:44:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.553 11:44:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:32:01.553 11:44:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:32:01.811 11:44:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:01.811 11:44:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:01.811 11:44:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.811 11:44:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:01.811 11:44:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:01.811 11:44:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:01.811 11:44:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.811 11:44:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:01.811 11:44:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:01.811 11:44:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:32:01.811 11:44:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:01.811 11:44:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:01.811 11:44:01 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:01.811 11:44:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:01.811 11:44:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:01.811 11:44:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:01.811 11:44:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.811 11:44:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:02.746 [2024-11-02 11:44:02.999070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:02.746 [2024-11-02 11:44:02.999119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2271db0 with addr=10.0.0.2, port=8010 00:32:02.746 [2024-11-02 11:44:02.999147] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:02.746 [2024-11-02 11:44:02.999162] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:02.746 [2024-11-02 11:44:02.999175] bdev_nvme.c:7359:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:32:03.680 [2024-11-02 11:44:04.001498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:03.680 [2024-11-02 11:44:04.001534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2271db0 with addr=10.0.0.2, port=8010 00:32:03.680 [2024-11-02 11:44:04.001574] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:03.680 [2024-11-02 11:44:04.001588] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:03.680 [2024-11-02 11:44:04.001601] bdev_nvme.c:7359:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:32:04.615 [2024-11-02 11:44:05.003694] bdev_nvme.c:7334:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:32:04.615 request: 00:32:04.615 { 00:32:04.615 "name": "nvme_second", 00:32:04.615 "trtype": "tcp", 00:32:04.615 "traddr": "10.0.0.2", 00:32:04.615 "adrfam": "ipv4", 00:32:04.615 "trsvcid": "8010", 00:32:04.615 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:04.615 "wait_for_attach": false, 00:32:04.615 "attach_timeout_ms": 3000, 00:32:04.615 "method": "bdev_nvme_start_discovery", 00:32:04.615 "req_id": 1 00:32:04.615 } 00:32:04.615 Got JSON-RPC error response 00:32:04.615 response: 00:32:04.615 { 00:32:04.615 "code": -110, 00:32:04.615 "message": "Connection timed out" 00:32:04.615 } 00:32:04.615 11:44:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:04.615 11:44:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:32:04.615 11:44:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:04.615 11:44:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:04.615 11:44:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:04.615 11:44:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:32:04.615 11:44:05 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:04.615 11:44:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:04.615 11:44:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:04.615 11:44:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:04.615 11:44:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:04.615 11:44:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:04.874 11:44:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:04.874 11:44:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:32:04.874 11:44:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:32:04.874 11:44:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3944546 00:32:04.874 11:44:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:32:04.874 11:44:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:04.874 11:44:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:32:04.874 11:44:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:04.874 11:44:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:32:04.874 11:44:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:04.874 11:44:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:04.874 rmmod nvme_tcp 00:32:04.874 rmmod nvme_fabrics 00:32:04.874 rmmod nvme_keyring 00:32:04.874 11:44:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:04.874 11:44:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:32:04.874 11:44:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:32:04.874 11:44:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 3944415 ']' 00:32:04.874 11:44:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 3944415 00:32:04.874 11:44:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@952 -- # '[' -z 3944415 ']' 00:32:04.874 11:44:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # kill -0 3944415 00:32:04.874 11:44:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # uname 00:32:04.874 11:44:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:04.874 11:44:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3944415 00:32:04.874 11:44:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:32:04.874 11:44:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:32:04.874 11:44:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3944415' 00:32:04.874 killing process with pid 3944415 00:32:04.874 11:44:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@971 -- # kill 3944415 
00:32:04.874 11:44:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@976 -- # wait 3944415 00:32:05.133 11:44:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:05.133 11:44:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:05.133 11:44:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:05.133 11:44:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:32:05.133 11:44:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:32:05.133 11:44:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:05.133 11:44:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:32:05.133 11:44:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:05.133 11:44:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:05.133 11:44:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:05.133 11:44:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:05.133 11:44:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:07.036 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:07.036 00:32:07.036 real 0m13.010s 00:32:07.036 user 0m18.913s 00:32:07.036 sys 0m2.666s 00:32:07.036 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:07.036 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:07.036 ************************************ 00:32:07.036 END TEST nvmf_host_discovery 00:32:07.036 ************************************ 00:32:07.295 11:44:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:32:07.295 11:44:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:32:07.295 11:44:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:07.295 11:44:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.295 ************************************ 00:32:07.295 START TEST nvmf_host_multipath_status 00:32:07.295 ************************************ 00:32:07.295 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:32:07.295 * Looking for test storage... 
00:32:07.295 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:07.295 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:07.295 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lcov --version 00:32:07.295 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:07.295 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:07.295 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:07.295 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:07.295 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:07.295 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:32:07.295 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:32:07.295 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:32:07.295 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:32:07.295 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:32:07.295 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:32:07.295 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:32:07.295 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:07.295 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:32:07.295 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:32:07.295 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:07.295 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:07.295 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:32:07.295 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:32:07.295 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:07.295 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:32:07.295 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:32:07.295 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:32:07.295 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:32:07.295 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:07.295 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:32:07.295 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:32:07.295 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:07.295 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:07.295 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:32:07.295 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:07.295 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:07.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:07.295 --rc genhtml_branch_coverage=1 00:32:07.295 --rc genhtml_function_coverage=1 00:32:07.295 --rc genhtml_legend=1 00:32:07.295 --rc geninfo_all_blocks=1 00:32:07.295 --rc geninfo_unexecuted_blocks=1 00:32:07.295 00:32:07.295 ' 00:32:07.295 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:07.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:07.295 --rc genhtml_branch_coverage=1 00:32:07.295 --rc genhtml_function_coverage=1 00:32:07.295 --rc genhtml_legend=1 00:32:07.295 --rc geninfo_all_blocks=1 00:32:07.295 --rc geninfo_unexecuted_blocks=1 00:32:07.295 00:32:07.295 ' 00:32:07.295 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:07.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:07.295 --rc genhtml_branch_coverage=1 00:32:07.295 --rc genhtml_function_coverage=1 00:32:07.295 --rc genhtml_legend=1 00:32:07.295 --rc geninfo_all_blocks=1 00:32:07.295 --rc geninfo_unexecuted_blocks=1 00:32:07.295 00:32:07.295 ' 00:32:07.295 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:07.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:07.295 --rc genhtml_branch_coverage=1 00:32:07.295 --rc genhtml_function_coverage=1 00:32:07.295 --rc genhtml_legend=1 00:32:07.295 --rc geninfo_all_blocks=1 00:32:07.295 --rc geninfo_unexecuted_blocks=1 00:32:07.295 00:32:07.295 ' 00:32:07.295 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:32:07.296 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:32:07.296 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:07.296 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:07.296 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:07.296 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:07.296 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:07.296 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:07.296 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:07.296 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:07.296 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:07.296 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:07.296 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:07.296 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:07.296 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:07.296 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:07.296 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:07.296 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:07.296 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:07.296 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:32:07.296 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:07.296 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:07.296 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:07.296 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:07.296 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:07.296 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:07.296 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:32:07.296 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:07.296 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:32:07.296 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:07.296 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:07.296 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:07.296 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:07.296 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:07.296 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:07.296 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:07.296 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:07.296 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:07.296 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:07.296 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:32:07.296 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:32:07.296 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:07.296 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:32:07.296 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:07.296 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:32:07.296 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:32:07.296 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:07.296 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:07.296 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:07.296 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:07.296 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:07.296 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:07.296 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:07.296 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:07.296 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:07.296 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:07.296 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:32:07.296 11:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:09.831 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:09.831 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:32:09.831 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:09.831 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:09.831 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:09.831 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:09.831 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:09.831 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:32:09.831 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:09.831 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:32:09.831 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:32:09.831 11:44:09 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:32:09.831 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:32:09.831 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:32:09.831 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:32:09.831 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:09.831 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:09.831 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:09.831 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:09.831 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:09.831 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:09.831 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:09.831 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:09.831 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:09.831 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:09.831 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:09.831 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:09.831 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:09.831 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:09.831 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:09.831 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:09.831 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:09.831 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:09.831 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:09.831 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:09.831 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:09.831 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:09.831 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:09.831 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:09.831 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:32:09.831 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:09.831 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:09.831 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:09.831 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:09.831 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:09.831 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:09.831 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:09.831 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:09.831 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:09.831 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:09.831 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:09.831 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:09.831 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:09.831 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:09.831 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:09.831 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:09.831 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:09.831 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:09.831 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:09.831 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:09.831 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:09.831 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:09.831 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:09.831 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:09.831 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:09.831 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:09.831 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:09.831 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:09.831 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:09.832 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: 
cvl_0_1' 00:32:09.832 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:09.832 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:09.832 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:09.832 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:32:09.832 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:09.832 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:09.832 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:09.832 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:09.832 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:09.832 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:09.832 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:09.832 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:09.832 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:09.832 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:09.832 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:09.832 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:09.832 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:09.832 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:09.832 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:09.832 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:09.832 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:09.832 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:09.832 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:09.832 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:09.832 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:09.832 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:09.832 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:09.832 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:09.832 11:44:09 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:09.832 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:09.832 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:09.832 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.361 ms 00:32:09.832 00:32:09.832 --- 10.0.0.2 ping statistics --- 00:32:09.832 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:09.832 rtt min/avg/max/mdev = 0.361/0.361/0.361/0.000 ms 00:32:09.832 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:09.832 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:09.832 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:32:09.832 00:32:09.832 --- 10.0.0.1 ping statistics --- 00:32:09.832 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:09.832 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:32:09.832 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:09.832 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:32:09.832 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:09.832 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:09.832 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:09.832 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:09.832 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:09.832 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:09.832 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:09.832 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:32:09.832 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:09.832 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:09.832 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:09.832 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=3947588 00:32:09.832 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:32:09.832 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 3947588 00:32:09.832 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 3947588 ']' 00:32:09.832 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:09.832 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:09.832 11:44:09 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:09.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:09.832 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:09.832 11:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:09.832 [2024-11-02 11:44:09.901587] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:32:09.832 [2024-11-02 11:44:09.901675] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:09.832 [2024-11-02 11:44:09.979105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:09.832 [2024-11-02 11:44:10.036578] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:09.832 [2024-11-02 11:44:10.036636] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:09.832 [2024-11-02 11:44:10.036651] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:09.832 [2024-11-02 11:44:10.036662] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:09.832 [2024-11-02 11:44:10.036673] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:09.832 [2024-11-02 11:44:10.038171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:09.832 [2024-11-02 11:44:10.038176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:09.832 11:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:09.832 11:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:32:09.832 11:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:09.832 11:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:09.832 11:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:09.832 11:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:09.832 11:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3947588 00:32:09.832 11:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:10.091 [2024-11-02 11:44:10.428052] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:10.091 11:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:32:10.658 Malloc0 00:32:10.658 11:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:32:10.658 11:44:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:10.916 11:44:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:11.174 [2024-11-02 11:44:11.562608] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:11.432 11:44:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:11.432 [2024-11-02 11:44:11.831335] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:11.691 11:44:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3947832 00:32:11.691 11:44:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:32:11.691 11:44:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:32:11.691 11:44:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3947832 /var/tmp/bdevperf.sock 00:32:11.691 11:44:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 3947832 ']' 00:32:11.691 11:44:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:11.691 11:44:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:11.691 11:44:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:11.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
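For reference, the target-side configuration exercised in the trace above condenses to the RPC sequence below. This is a sketch, not the literal test script: the comments are annotations added here, and only the commands themselves are taken from the trace. The host side then attaches both listener ports to the same controller with -x multipath, as the next entries show.

# Condensed sketch of the target setup traced above.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$RPC nvmf_create_transport -t tcp -o -u 8192                    # TCP transport, flags as traced
$RPC bdev_malloc_create 64 512 -b Malloc0                       # 64 MB malloc bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2   # -r enables ANA reporting
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # expose Malloc0 as a namespace
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421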
00:32:11.691 11:44:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:11.691 11:44:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:11.949 11:44:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:11.949 11:44:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:32:11.949 11:44:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:32:12.207 11:44:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:32:12.465 Nvme0n1 00:32:12.465 11:44:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:32:13.031 Nvme0n1 00:32:13.031 11:44:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:32:13.031 11:44:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:32:14.938 11:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:32:14.938 11:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:32:15.196 11:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:15.456 11:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:32:16.393 11:44:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:32:16.393 11:44:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:16.393 11:44:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:16.393 11:44:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:16.994 11:44:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:16.994 11:44:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:16.994 11:44:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:16.994 11:44:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:16.994 11:44:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:16.994 11:44:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:16.994 11:44:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:16.994 11:44:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:17.275 11:44:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:17.275 11:44:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:17.275 11:44:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:17.275 11:44:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:17.850 11:44:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:17.850 11:44:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:17.850 11:44:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:17.850 11:44:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:17.850 11:44:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:17.850 11:44:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:17.850 11:44:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:17.850 11:44:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:18.418 11:44:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:18.418 11:44:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:32:18.418 11:44:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
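Each port_status probe making up the check_status rounds above (host/multipath_status.sh@64) reduces to querying bdevperf's I/O paths over its RPC socket and comparing a single attribute for one listener port. A minimal reconstruction follows; the function shape and variable names are illustrative, only the rpc.py invocation and the jq filter are taken from the trace.

# Illustrative per-port check: $1 = trsvcid (4420/4421),
# $2 = attribute (current/connected/accessible), $3 = expected value ("true"/"false").
port_status() {
    local port=$1 attr=$2 expected=$3 actual
    actual=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
        jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
    [[ "$actual" == "$expected" ]]
}

# Example matching the first round above: 4420 is the current path, 4421 is not.
port_status 4420 current true && port_status 4421 current false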
00:32:18.418 11:44:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:18.988 11:44:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:32:19.926 11:44:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:32:19.926 11:44:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:19.926 11:44:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:19.926 11:44:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:20.185 11:44:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:20.185 11:44:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:20.185 11:44:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:20.185 11:44:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:20.444 11:44:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:20.444 11:44:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:20.444 11:44:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:20.444 11:44:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:20.702 11:44:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:20.702 11:44:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:20.702 11:44:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:20.702 11:44:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:20.960 11:44:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:20.960 11:44:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:20.960 11:44:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
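The set_ANA_state helper invoked at host/multipath_status.sh@59-60 before each round simply applies one ANA state per listener on the target side; bdevperf then observes the change through the path attributes checked above. A minimal sketch, with the wrapper shape assumed and the two RPC calls copied from the trace:

# Assumed wrapper around the two listener updates traced at @59/@60.
# $1 = ANA state for port 4420, $2 = ANA state for port 4421
# (optimized | non_optimized | inaccessible).
set_ANA_state() {
    local rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -n "$1"
    $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4421 -n "$2"
}

set_ANA_state non_optimized optimized   # the combination driving the round above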
00:32:20.960 11:44:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:21.218 11:44:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:21.218 11:44:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:21.218 11:44:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:21.218 11:44:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:21.477 11:44:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:21.477 11:44:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:32:21.477 11:44:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:21.735 11:44:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:32:21.995 11:44:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:32:22.931 11:44:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:32:22.931 11:44:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:22.931 11:44:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:22.931 11:44:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:23.497 11:44:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:23.497 11:44:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:23.497 11:44:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:23.497 11:44:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:23.497 11:44:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:23.497 11:44:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:23.497 11:44:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:23.497 11:44:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:23.755 11:44:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:23.755 11:44:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:23.755 11:44:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:23.755 11:44:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:24.324 11:44:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:24.324 11:44:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:24.324 11:44:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:24.324 11:44:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:24.324 11:44:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:24.324 11:44:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:24.324 11:44:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:24.324 11:44:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:24.893 11:44:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:24.893 11:44:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:32:24.893 11:44:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:24.893 11:44:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:32:25.461 11:44:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:32:26.397 11:44:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:32:26.397 11:44:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:26.397 11:44:26 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:26.397 11:44:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:26.656 11:44:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:26.656 11:44:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:26.656 11:44:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:26.656 11:44:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:26.914 11:44:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:26.914 11:44:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:26.914 11:44:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:26.914 11:44:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:27.172 11:44:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:27.172 11:44:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:27.172 11:44:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:27.172 11:44:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:27.429 11:44:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:27.429 11:44:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:27.429 11:44:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:27.429 11:44:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:27.688 11:44:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:27.688 11:44:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:32:27.688 11:44:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:27.688 11:44:27 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:27.946 11:44:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:27.946 11:44:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:32:27.946 11:44:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:32:28.204 11:44:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:32:28.465 11:44:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:32:29.847 11:44:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:32:29.847 11:44:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:29.847 11:44:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:29.847 11:44:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:29.847 11:44:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:29.847 11:44:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:29.847 11:44:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:29.847 11:44:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:30.105 11:44:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:30.105 11:44:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:30.105 11:44:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:30.106 11:44:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:30.363 11:44:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:30.363 11:44:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:30.364 11:44:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:30.364 11:44:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:30.622 11:44:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:30.622 11:44:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:32:30.622 11:44:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:30.622 11:44:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:30.882 11:44:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:30.882 11:44:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:32:30.882 11:44:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:30.882 11:44:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:31.141 11:44:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:31.141 11:44:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:32:31.141 11:44:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:32:31.399 11:44:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:31.657 11:44:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:32:33.036 11:44:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:32:33.036 11:44:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:33.036 11:44:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:33.036 11:44:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:33.036 11:44:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:33.036 11:44:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:33.036 11:44:33 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:33.036 11:44:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:33.294 11:44:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:33.294 11:44:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:33.294 11:44:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:33.294 11:44:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:33.552 11:44:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:33.552 11:44:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:33.552 11:44:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:33.552 11:44:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:33.809 11:44:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:33.809 11:44:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:32:33.809 11:44:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:33.809 11:44:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:34.067 11:44:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:34.067 11:44:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:34.067 11:44:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:34.067 11:44:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:34.325 11:44:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:34.325 11:44:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:32:34.584 11:44:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:32:34.584 11:44:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:32:35.153 11:44:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:35.153 11:44:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:32:36.533 11:44:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:32:36.533 11:44:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:36.533 11:44:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:36.533 11:44:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:36.533 11:44:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:36.533 11:44:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:36.533 11:44:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:36.533 11:44:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:36.791 11:44:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:36.791 11:44:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:36.791 11:44:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:36.791 11:44:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:37.050 11:44:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:37.050 11:44:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:37.050 11:44:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:37.050 11:44:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:37.308 11:44:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:37.308 11:44:37 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:37.308 11:44:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:37.308 11:44:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:37.567 11:44:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:37.567 11:44:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:37.567 11:44:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:37.567 11:44:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:37.825 11:44:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:37.825 11:44:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:32:37.825 11:44:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:38.084 11:44:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:38.344 11:44:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:32:39.723 11:44:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:32:39.723 11:44:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:39.723 11:44:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:39.723 11:44:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:39.723 11:44:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:39.723 11:44:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:39.723 11:44:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:39.723 11:44:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:39.981 11:44:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:39.981 11:44:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:39.981 11:44:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:39.981 11:44:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:40.240 11:44:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:40.240 11:44:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:40.240 11:44:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:40.240 11:44:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:40.498 11:44:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:40.498 11:44:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:40.498 11:44:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:40.498 11:44:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:40.756 11:44:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:40.756 11:44:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:40.756 11:44:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:40.756 11:44:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:41.325 11:44:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:41.325 11:44:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:32:41.325 11:44:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:41.325 11:44:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:32:41.585 11:44:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
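For orientation before the final rounds: the six booleans passed to check_status map, in order, to the current/connected/accessible attributes of ports 4420 and 4421, i.e. (4420 current, 4421 current, 4420 connected, 4421 connected, 4420 accessible, 4421 accessible). A sketch of that wrapper follows, reusing the illustrative port_status above; the real helper lives in host/multipath_status.sh and may differ in error handling.

# Assumed six-flag wrapper; argument order as observed in the trace.
check_status() {
    port_status 4420 current    "$1" &&
    port_status 4421 current    "$2" &&
    port_status 4420 connected  "$3" &&
    port_status 4421 connected  "$4" &&
    port_status 4420 accessible "$5" &&
    port_status 4421 accessible "$6"
}

# With both listeners non_optimized and multipath policy active_active (set earlier via
# bdev_nvme_set_multipath_policy), both paths are expected active: all six flags true.
check_status true true true true true true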
00:32:42.966 11:44:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:32:42.966 11:44:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:42.966 11:44:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:42.966 11:44:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:42.966 11:44:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:42.966 11:44:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:42.966 11:44:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:42.966 11:44:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:43.224 11:44:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:43.224 11:44:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:43.224 11:44:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:43.224 11:44:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:43.482 11:44:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:43.482 11:44:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:43.482 11:44:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:43.482 11:44:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:43.741 11:44:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:43.741 11:44:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:43.741 11:44:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:43.741 11:44:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:43.999 11:44:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:43.999 11:44:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:43.999 11:44:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:43.999 11:44:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:44.566 11:44:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:44.566 11:44:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:32:44.566 11:44:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:44.566 11:44:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:32:44.824 11:44:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:32:46.203 11:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:32:46.203 11:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:46.203 11:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:46.203 11:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:46.203 11:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:46.203 11:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:46.203 11:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:46.203 11:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:46.461 11:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:46.461 11:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:46.461 11:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:46.461 11:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:46.720 11:44:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:32:46.720 11:44:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:46.720 11:44:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:46.720 11:44:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:46.978 11:44:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:46.978 11:44:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:46.978 11:44:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:46.978 11:44:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:47.236 11:44:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:47.236 11:44:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:32:47.236 11:44:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:47.236 11:44:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:47.504 11:44:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:47.504 11:44:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3947832 00:32:47.504 11:44:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 3947832 ']' 00:32:47.504 11:44:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 3947832 00:32:47.504 11:44:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname 00:32:47.504 11:44:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:47.504 11:44:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3947832 00:32:47.504 11:44:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:32:47.504 11:44:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:32:47.504 11:44:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3947832' 00:32:47.504 killing process with pid 3947832 00:32:47.504 11:44:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 3947832 00:32:47.504 11:44:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 3947832 00:32:47.822 { 00:32:47.822 "results": [ 00:32:47.822 { 00:32:47.822 "job": "Nvme0n1", 
00:32:47.822 "core_mask": "0x4", 00:32:47.822 "workload": "verify", 00:32:47.822 "status": "terminated", 00:32:47.822 "verify_range": { 00:32:47.822 "start": 0, 00:32:47.822 "length": 16384 00:32:47.822 }, 00:32:47.822 "queue_depth": 128, 00:32:47.822 "io_size": 4096, 00:32:47.822 "runtime": 34.474833, 00:32:47.822 "iops": 7784.316170581595, 00:32:47.822 "mibps": 30.407485041334354, 00:32:47.822 "io_failed": 0, 00:32:47.822 "io_timeout": 0, 00:32:47.822 "avg_latency_us": 16417.391081118567, 00:32:47.822 "min_latency_us": 239.69185185185185, 00:32:47.822 "max_latency_us": 4026531.84 00:32:47.822 } 00:32:47.822 ], 00:32:47.822 "core_count": 1 00:32:47.822 } 00:32:47.823 11:44:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3947832 00:32:47.823 11:44:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:47.823 [2024-11-02 11:44:11.896598] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:32:47.823 [2024-11-02 11:44:11.896693] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3947832 ] 00:32:47.823 [2024-11-02 11:44:11.965606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:47.823 [2024-11-02 11:44:12.016808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:47.823 Running I/O for 90 seconds... 00:32:47.823 8364.00 IOPS, 32.67 MiB/s [2024-11-02T10:44:48.225Z] 8471.00 IOPS, 33.09 MiB/s [2024-11-02T10:44:48.225Z] 8503.33 IOPS, 33.22 MiB/s [2024-11-02T10:44:48.225Z] 8506.50 IOPS, 33.23 MiB/s [2024-11-02T10:44:48.225Z] 8500.20 IOPS, 33.20 MiB/s [2024-11-02T10:44:48.225Z] 8463.50 IOPS, 33.06 MiB/s [2024-11-02T10:44:48.225Z] 8400.71 IOPS, 32.82 MiB/s [2024-11-02T10:44:48.225Z] 8363.88 IOPS, 32.67 MiB/s [2024-11-02T10:44:48.225Z] 8321.89 IOPS, 32.51 MiB/s [2024-11-02T10:44:48.225Z] 8327.50 IOPS, 32.53 MiB/s [2024-11-02T10:44:48.225Z] 8332.64 IOPS, 32.55 MiB/s [2024-11-02T10:44:48.225Z] 8335.83 IOPS, 32.56 MiB/s [2024-11-02T10:44:48.225Z] 8344.69 IOPS, 32.60 MiB/s [2024-11-02T10:44:48.225Z] 8357.57 IOPS, 32.65 MiB/s [2024-11-02T10:44:48.225Z] 8359.80 IOPS, 32.66 MiB/s [2024-11-02T10:44:48.225Z] [2024-11-02 11:44:28.504300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:94152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.823 [2024-11-02 11:44:28.504361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:32:47.823 [2024-11-02 11:44:28.504421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:94160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.823 [2024-11-02 11:44:28.504443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:32:47.823 [2024-11-02 11:44:28.504467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:94168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.823 [2024-11-02 11:44:28.504484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:32:47.823 [2024-11-02 11:44:28.504508] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:94176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.823 [2024-11-02 11:44:28.504525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:32:47.823 [2024-11-02 11:44:28.504547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:94184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.823 [2024-11-02 11:44:28.504578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:32:47.823 [2024-11-02 11:44:28.504604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:94192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.823 [2024-11-02 11:44:28.504621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:32:47.823 [2024-11-02 11:44:28.504643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:94200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.823 [2024-11-02 11:44:28.504661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:32:47.823 [2024-11-02 11:44:28.504684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:94208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.823 [2024-11-02 11:44:28.504700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:32:47.823 [2024-11-02 11:44:28.504723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:94216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.823 [2024-11-02 11:44:28.504739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:32:47.823 [2024-11-02 11:44:28.504773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:94224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.823 [2024-11-02 11:44:28.504791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:32:47.823 [2024-11-02 11:44:28.504813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:94232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.823 [2024-11-02 11:44:28.504831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:32:47.823 [2024-11-02 11:44:28.504853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:94240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.823 [2024-11-02 11:44:28.504871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:32:47.823 [2024-11-02 11:44:28.504894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:94248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.823 [2024-11-02 11:44:28.504911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:32:47.823 [2024-11-02 11:44:28.504934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:94256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.823 [2024-11-02 11:44:28.504950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:47.823 [2024-11-02 11:44:28.504972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:94264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.823 [2024-11-02 11:44:28.504988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:32:47.823 [2024-11-02 11:44:28.505010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:94008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.823 [2024-11-02 11:44:28.505027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:32:47.823 [2024-11-02 11:44:28.505049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:94016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.823 [2024-11-02 11:44:28.505065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:32:47.823 [2024-11-02 11:44:28.505556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:94024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.823 [2024-11-02 11:44:28.505581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:32:47.823 [2024-11-02 11:44:28.505610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:94272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.823 [2024-11-02 11:44:28.505629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:32:47.823 [2024-11-02 11:44:28.505654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:94280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.823 [2024-11-02 11:44:28.505671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:32:47.823 [2024-11-02 11:44:28.505696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:94288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.823 [2024-11-02 11:44:28.505713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:32:47.823 [2024-11-02 11:44:28.505739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:94296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.823 [2024-11-02 11:44:28.505761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:32:47.823 [2024-11-02 11:44:28.505787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:94304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.823 [2024-11-02 11:44:28.505804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:32:47.823 [2024-11-02 11:44:28.505828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:94312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.823 [2024-11-02 11:44:28.505845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:32:47.824 [2024-11-02 11:44:28.505885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:94320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.824 [2024-11-02 11:44:28.505902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:32:47.824 [2024-11-02 11:44:28.505925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:94328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.824 [2024-11-02 11:44:28.505957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:32:47.824 [2024-11-02 11:44:28.505980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:94336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.824 [2024-11-02 11:44:28.506014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:32:47.824 [2024-11-02 11:44:28.506040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:94344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.824 [2024-11-02 11:44:28.506057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:32:47.824 [2024-11-02 11:44:28.506080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:94352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.824 [2024-11-02 11:44:28.506097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:32:47.824 [2024-11-02 11:44:28.506121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:94360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.824 [2024-11-02 11:44:28.506138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:32:47.824 [2024-11-02 11:44:28.506162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.824 [2024-11-02 11:44:28.506179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:32:47.824 [2024-11-02 11:44:28.506202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:94376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.824 [2024-11-02 11:44:28.506220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:32:47.824 [2024-11-02 11:44:28.506243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:94384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.824 [2024-11-02 11:44:28.506284] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:32:47.824 [2024-11-02 11:44:28.506314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:94392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.824 [2024-11-02 11:44:28.506336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:32:47.824 [2024-11-02 11:44:28.506362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:94400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.824 [2024-11-02 11:44:28.506381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:32:47.824 [2024-11-02 11:44:28.506406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:94408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.824 [2024-11-02 11:44:28.506424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:32:47.824 [2024-11-02 11:44:28.506448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:94416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.824 [2024-11-02 11:44:28.506466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:32:47.824 [2024-11-02 11:44:28.506490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:94424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.824 [2024-11-02 11:44:28.506508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:32:47.824 [2024-11-02 11:44:28.506533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:94432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.824 [2024-11-02 11:44:28.506550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:32:47.824 [2024-11-02 11:44:28.506593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:94440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.824 [2024-11-02 11:44:28.506611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:32:47.824 [2024-11-02 11:44:28.506635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:94448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.824 [2024-11-02 11:44:28.506651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:32:47.824 [2024-11-02 11:44:28.506675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:94456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.824 [2024-11-02 11:44:28.506692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:32:47.824 [2024-11-02 11:44:28.506715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:94464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:32:47.824 [2024-11-02 11:44:28.506733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:32:47.824 [2024-11-02 11:44:28.506757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:94472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.824 [2024-11-02 11:44:28.506775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:32:47.824 [2024-11-02 11:44:28.506798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:94480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.824 [2024-11-02 11:44:28.506815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:47.824 [2024-11-02 11:44:28.506839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:94488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.824 [2024-11-02 11:44:28.506856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:47.824 [2024-11-02 11:44:28.506885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:94496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.824 [2024-11-02 11:44:28.506902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:32:47.824 [2024-11-02 11:44:28.506926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:94504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.824 [2024-11-02 11:44:28.506944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:32:47.824 [2024-11-02 11:44:28.506967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:94512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.824 [2024-11-02 11:44:28.506984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:32:47.824 [2024-11-02 11:44:28.507008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:94520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.824 [2024-11-02 11:44:28.507025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:32:47.824 [2024-11-02 11:44:28.507051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:94528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.824 [2024-11-02 11:44:28.507068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:47.824 [2024-11-02 11:44:28.507092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:94536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.824 [2024-11-02 11:44:28.507108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:32:47.824 [2024-11-02 11:44:28.507132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 
lba:94544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.824 [2024-11-02 11:44:28.507149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:32:47.824 [2024-11-02 11:44:28.507173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:94552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.824 [2024-11-02 11:44:28.507190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:47.824 [2024-11-02 11:44:28.507213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:94560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.825 [2024-11-02 11:44:28.507231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:47.825 [2024-11-02 11:44:28.507261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:94568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.825 [2024-11-02 11:44:28.507296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:47.825 [2024-11-02 11:44:28.507322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:94576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.825 [2024-11-02 11:44:28.507339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:32:47.825 [2024-11-02 11:44:28.507364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:94584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.825 [2024-11-02 11:44:28.507382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:32:47.825 [2024-11-02 11:44:28.507410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:94592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.825 [2024-11-02 11:44:28.507444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:32:47.825 [2024-11-02 11:44:28.507469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:94600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.825 [2024-11-02 11:44:28.507485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:32:47.825 [2024-11-02 11:44:28.507510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:94608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.825 [2024-11-02 11:44:28.507527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:32:47.825 [2024-11-02 11:44:28.507550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:94616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.825 [2024-11-02 11:44:28.507581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:32:47.825 [2024-11-02 11:44:28.507605] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:94624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.825 [2024-11-02 11:44:28.507622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:32:47.825 [2024-11-02 11:44:28.507645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:94632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.825 [2024-11-02 11:44:28.507662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:32:47.825 [2024-11-02 11:44:28.507684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:94640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.825 [2024-11-02 11:44:28.507700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:32:47.825 [2024-11-02 11:44:28.507725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:94648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.825 [2024-11-02 11:44:28.507742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:32:47.825 [2024-11-02 11:44:28.507765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:94656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.825 [2024-11-02 11:44:28.507781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:32:47.825 [2024-11-02 11:44:28.507804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:94664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.825 [2024-11-02 11:44:28.507820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:32:47.825 [2024-11-02 11:44:28.507843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:94672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.825 [2024-11-02 11:44:28.507860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:32:47.825 [2024-11-02 11:44:28.507883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:94680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.825 [2024-11-02 11:44:28.507900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:32:47.825 [2024-11-02 11:44:28.507923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:94688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.825 [2024-11-02 11:44:28.507944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:47.825 [2024-11-02 11:44:28.507968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:94696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.825 [2024-11-02 11:44:28.507984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005c p:0 m:0 dnr:0 
00:32:47.825 [2024-11-02 11:44:28.508008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:94704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.825 [2024-11-02 11:44:28.508024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:32:47.825 [2024-11-02 11:44:28.508183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:94712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.825 [2024-11-02 11:44:28.508220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:32:47.825 [2024-11-02 11:44:28.508251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:94720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.825 [2024-11-02 11:44:28.508302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:32:47.825 [2024-11-02 11:44:28.508333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:94728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.825 [2024-11-02 11:44:28.508351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:47.825 [2024-11-02 11:44:28.508379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:94736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.825 [2024-11-02 11:44:28.508397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:47.825 [2024-11-02 11:44:28.508425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:94744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.825 [2024-11-02 11:44:28.508442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:47.825 [2024-11-02 11:44:28.508471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:94752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.825 [2024-11-02 11:44:28.508489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:47.825 [2024-11-02 11:44:28.508532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:94760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.825 [2024-11-02 11:44:28.508550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:32:47.825 [2024-11-02 11:44:28.508592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:94768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.825 [2024-11-02 11:44:28.508610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:32:47.825 [2024-11-02 11:44:28.508638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:94032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.825 [2024-11-02 11:44:28.508655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:32:47.825 [2024-11-02 11:44:28.508681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:94040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.825 [2024-11-02 11:44:28.508703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:32:47.825 [2024-11-02 11:44:28.508732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:94048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.825 [2024-11-02 11:44:28.508749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:32:47.825 [2024-11-02 11:44:28.508775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:94056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.825 [2024-11-02 11:44:28.508791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:47.826 [2024-11-02 11:44:28.508816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:94064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.826 [2024-11-02 11:44:28.508833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:32:47.826 [2024-11-02 11:44:28.508858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:94072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.826 [2024-11-02 11:44:28.508875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:47.826 [2024-11-02 11:44:28.508900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:94080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.826 [2024-11-02 11:44:28.508916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:32:47.826 [2024-11-02 11:44:28.508942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:94776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.826 [2024-11-02 11:44:28.508958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:32:47.826 [2024-11-02 11:44:28.508984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:94784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.826 [2024-11-02 11:44:28.509000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:32:47.826 [2024-11-02 11:44:28.509025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:94792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.826 [2024-11-02 11:44:28.509042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:47.826 [2024-11-02 11:44:28.509068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:94800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.826 [2024-11-02 11:44:28.509084] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:47.826 [2024-11-02 11:44:28.509110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:94808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.826 [2024-11-02 11:44:28.509126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:32:47.826 [2024-11-02 11:44:28.509151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:94816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.826 [2024-11-02 11:44:28.509168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:32:47.826 [2024-11-02 11:44:28.509194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:94824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.826 [2024-11-02 11:44:28.509210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:32:47.826 [2024-11-02 11:44:28.509244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:94832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.826 [2024-11-02 11:44:28.509285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:32:47.826 [2024-11-02 11:44:28.509316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:94840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.826 [2024-11-02 11:44:28.509333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:32:47.826 [2024-11-02 11:44:28.509367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:94848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.826 [2024-11-02 11:44:28.509384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:32:47.826 [2024-11-02 11:44:28.509411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:94856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.826 [2024-11-02 11:44:28.509427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:32:47.826 [2024-11-02 11:44:28.509454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:94864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.826 [2024-11-02 11:44:28.509471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:32:47.826 [2024-11-02 11:44:28.509497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:94872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.826 [2024-11-02 11:44:28.509514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:32:47.826 [2024-11-02 11:44:28.509540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:94880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:32:47.826 [2024-11-02 11:44:28.509557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:47.826 [2024-11-02 11:44:28.509600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:94888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.826 [2024-11-02 11:44:28.509616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:47.826 [2024-11-02 11:44:28.509642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:94896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.826 [2024-11-02 11:44:28.509659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:47.826 [2024-11-02 11:44:28.509686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:94904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.826 [2024-11-02 11:44:28.509702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:47.826 [2024-11-02 11:44:28.509728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:94912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.826 [2024-11-02 11:44:28.509744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:32:47.826 [2024-11-02 11:44:28.509770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:94920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.826 [2024-11-02 11:44:28.509786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:47.826 [2024-11-02 11:44:28.509816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:94928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.826 [2024-11-02 11:44:28.509833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.826 [2024-11-02 11:44:28.509859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:94936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.826 [2024-11-02 11:44:28.509875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.826 [2024-11-02 11:44:28.509901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:94944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.826 [2024-11-02 11:44:28.509917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:47.826 [2024-11-02 11:44:28.509942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:94952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.826 [2024-11-02 11:44:28.509959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:32:47.826 [2024-11-02 11:44:28.509984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:94960 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.826 [2024-11-02 11:44:28.510000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:32:47.826 [2024-11-02 11:44:28.510026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:94968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.826 [2024-11-02 11:44:28.510042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:32:47.826 [2024-11-02 11:44:28.510074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:94976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.826 [2024-11-02 11:44:28.510091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:32:47.826 [2024-11-02 11:44:28.510117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:94984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.826 [2024-11-02 11:44:28.510133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:32:47.827 [2024-11-02 11:44:28.510159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:94992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.827 [2024-11-02 11:44:28.510175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:32:47.827 [2024-11-02 11:44:28.510201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:95000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.827 [2024-11-02 11:44:28.510217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:32:47.827 [2024-11-02 11:44:28.510243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:95008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.827 [2024-11-02 11:44:28.510280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:32:47.827 [2024-11-02 11:44:28.510310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:95016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.827 [2024-11-02 11:44:28.510328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:32:47.827 [2024-11-02 11:44:28.510355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:95024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.827 [2024-11-02 11:44:28.510376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:32:47.827 [2024-11-02 11:44:28.510403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:94088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.827 [2024-11-02 11:44:28.510420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:32:47.827 [2024-11-02 11:44:28.510446] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:105 nsid:1 lba:94096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.827 [2024-11-02 11:44:28.510463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:32:47.827 [2024-11-02 11:44:28.510489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:94104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.827 [2024-11-02 11:44:28.510507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:32:47.827 [2024-11-02 11:44:28.510533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:94112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.827 [2024-11-02 11:44:28.510550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:32:47.827 [2024-11-02 11:44:28.510576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:94120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.827 [2024-11-02 11:44:28.510592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:32:47.827 [2024-11-02 11:44:28.510635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:94128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.827 [2024-11-02 11:44:28.510652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:32:47.827 [2024-11-02 11:44:28.510677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:94136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.827 [2024-11-02 11:44:28.510694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:32:47.827 [2024-11-02 11:44:28.510720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:94144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.827 [2024-11-02 11:44:28.510736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:32:47.827 7902.44 IOPS, 30.87 MiB/s [2024-11-02T10:44:48.229Z] 7437.59 IOPS, 29.05 MiB/s [2024-11-02T10:44:48.229Z] 7024.39 IOPS, 27.44 MiB/s [2024-11-02T10:44:48.229Z] 6654.68 IOPS, 25.99 MiB/s [2024-11-02T10:44:48.229Z] 6673.20 IOPS, 26.07 MiB/s [2024-11-02T10:44:48.229Z] 6731.95 IOPS, 26.30 MiB/s [2024-11-02T10:44:48.229Z] 6807.82 IOPS, 26.59 MiB/s [2024-11-02T10:44:48.229Z] 6986.78 IOPS, 27.29 MiB/s [2024-11-02T10:44:48.229Z] 7161.42 IOPS, 27.97 MiB/s [2024-11-02T10:44:48.229Z] 7307.76 IOPS, 28.55 MiB/s [2024-11-02T10:44:48.229Z] 7346.54 IOPS, 28.70 MiB/s [2024-11-02T10:44:48.229Z] 7367.04 IOPS, 28.78 MiB/s [2024-11-02T10:44:48.229Z] 7385.21 IOPS, 28.85 MiB/s [2024-11-02T10:44:48.229Z] 7447.14 IOPS, 29.09 MiB/s [2024-11-02T10:44:48.229Z] 7560.80 IOPS, 29.53 MiB/s [2024-11-02T10:44:48.229Z] 7668.74 IOPS, 29.96 MiB/s [2024-11-02T10:44:48.229Z] [2024-11-02 11:44:45.185905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.827 [2024-11-02 11:44:45.185990] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:32:47.827 [2024-11-02 11:44:45.186124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.827 [2024-11-02 11:44:45.186163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:32:47.827 [2024-11-02 11:44:45.186200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.827 [2024-11-02 11:44:45.186219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:32:47.827 [2024-11-02 11:44:45.186269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.827 [2024-11-02 11:44:45.186289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:32:47.827 [2024-11-02 11:44:45.186324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.827 [2024-11-02 11:44:45.186341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:32:47.827 [2024-11-02 11:44:45.186365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.827 [2024-11-02 11:44:45.186384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:32:47.827 [2024-11-02 11:44:45.186407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.827 [2024-11-02 11:44:45.186424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:32:47.827 [2024-11-02 11:44:45.186447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.827 [2024-11-02 11:44:45.186464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:32:47.827 [2024-11-02 11:44:45.186488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.827 [2024-11-02 11:44:45.186505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:32:47.827 [2024-11-02 11:44:45.186529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.827 [2024-11-02 11:44:45.186545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:32:47.827 [2024-11-02 11:44:45.186582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.827 
[2024-11-02 11:44:45.186599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:32:47.827 [2024-11-02 11:44:45.186623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.827 [2024-11-02 11:44:45.186654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:32:47.827 [2024-11-02 11:44:45.186676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.827 [2024-11-02 11:44:45.186692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:32:47.827 [2024-11-02 11:44:45.186714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.828 [2024-11-02 11:44:45.186741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:32:47.828 [2024-11-02 11:44:45.186781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:1000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.828 [2024-11-02 11:44:45.186802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:32:47.828 [2024-11-02 11:44:45.186842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.828 [2024-11-02 11:44:45.186858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:32:47.828 [2024-11-02 11:44:45.186881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.828 [2024-11-02 11:44:45.186898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:47.828 [2024-11-02 11:44:45.186920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.828 [2024-11-02 11:44:45.186937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:47.828 [2024-11-02 11:44:45.188136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:1032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.828 [2024-11-02 11:44:45.188176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:32:47.828 [2024-11-02 11:44:45.188207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:1048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.828 [2024-11-02 11:44:45.188224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:32:47.828 [2024-11-02 11:44:45.188279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:1064 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:32:47.828 [2024-11-02 11:44:45.188312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:32:47.828 [repetitive per-command qpair output condensed: nvme_qpair.c: 243:nvme_io_qpair_print_command and 474:spdk_nvme_print_completion NOTICE pairs for READ (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and WRITE (SGL DATA BLOCK OFFSET 0x0 len:0x1000) commands on sqid:1 nsid:1 len:8 with varying cid/lba, each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0, repeating from 2024-11-02 11:44:45.188338 through 11:44:45.206120] 00:32:47.835 [2024-11-02 11:44:45.206137] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:32:47.835 [2024-11-02 11:44:45.206160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:1680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.835 [2024-11-02 11:44:45.206177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:32:47.835 [2024-11-02 11:44:45.206199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:1080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.835 [2024-11-02 11:44:45.206216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:32:47.835 [2024-11-02 11:44:45.206239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:1208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.835 [2024-11-02 11:44:45.206280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:32:47.835 [2024-11-02 11:44:45.206317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:2056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.835 [2024-11-02 11:44:45.206334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:32:47.835 [2024-11-02 11:44:45.206357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:2072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.835 [2024-11-02 11:44:45.206373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:32:47.835 [2024-11-02 11:44:45.206395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.835 [2024-11-02 11:44:45.206411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:47.835 [2024-11-02 11:44:45.206434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:2104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.835 [2024-11-02 11:44:45.206451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:47.835 [2024-11-02 11:44:45.206472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:2120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.835 [2024-11-02 11:44:45.206493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:47.835 [2024-11-02 11:44:45.206517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:1248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.835 [2024-11-02 11:44:45.206534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:47.835 [2024-11-02 11:44:45.206582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.835 [2024-11-02 11:44:45.206599] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:32:47.835 [2024-11-02 11:44:45.206644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.835 [2024-11-02 11:44:45.206660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:47.835 [2024-11-02 11:44:45.206681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:1600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.835 [2024-11-02 11:44:45.206697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.835 [2024-11-02 11:44:45.206717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:1712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.835 [2024-11-02 11:44:45.206733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.835 [2024-11-02 11:44:45.206753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:1016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.836 [2024-11-02 11:44:45.206769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:47.836 [2024-11-02 11:44:45.206790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:2144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.836 [2024-11-02 11:44:45.206805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:32:47.836 [2024-11-02 11:44:45.206825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:2160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.836 [2024-11-02 11:44:45.206841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:32:47.836 [2024-11-02 11:44:45.206861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:2176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.836 [2024-11-02 11:44:45.206876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:32:47.836 [2024-11-02 11:44:45.206898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:2192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.836 [2024-11-02 11:44:45.206914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:32:47.836 [2024-11-02 11:44:45.207876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:1048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.836 [2024-11-02 11:44:45.207899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:32:47.836 [2024-11-02 11:44:45.207924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:1280 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:32:47.836 [2024-11-02 11:44:45.207946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:32:47.836 [2024-11-02 11:44:45.207970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:1736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.836 [2024-11-02 11:44:45.207986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:32:47.836 [2024-11-02 11:44:45.208007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:1776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.836 [2024-11-02 11:44:45.208022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:32:47.836 [2024-11-02 11:44:45.208043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:2200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.836 [2024-11-02 11:44:45.208059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:32:47.836 [2024-11-02 11:44:45.208080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.836 [2024-11-02 11:44:45.208096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:32:47.836 [2024-11-02 11:44:45.208117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:2232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.836 [2024-11-02 11:44:45.208132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:32:47.836 [2024-11-02 11:44:45.208153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:2248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.836 [2024-11-02 11:44:45.208170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:32:47.836 [2024-11-02 11:44:45.208190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.836 [2024-11-02 11:44:45.208206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:32:47.836 [2024-11-02 11:44:45.208227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:2280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.836 [2024-11-02 11:44:45.208273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:32:47.836 [2024-11-02 11:44:45.208326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:2296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.836 [2024-11-02 11:44:45.208344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:32:47.836 [2024-11-02 11:44:45.208368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 
lba:2312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.836 [2024-11-02 11:44:45.208385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:32:47.836 [2024-11-02 11:44:45.208416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:2328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.836 [2024-11-02 11:44:45.208436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:32:47.836 [2024-11-02 11:44:45.208828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:1784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.836 [2024-11-02 11:44:45.208853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:32:47.836 [2024-11-02 11:44:45.208885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:1816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.836 [2024-11-02 11:44:45.208903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:32:47.836 [2024-11-02 11:44:45.208928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:1848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.836 [2024-11-02 11:44:45.208946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:32:47.836 [2024-11-02 11:44:45.208969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:1880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.836 [2024-11-02 11:44:45.208986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:32:47.836 [2024-11-02 11:44:45.209009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:1912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.836 [2024-11-02 11:44:45.209041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:32:47.836 [2024-11-02 11:44:45.209064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:1944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.836 [2024-11-02 11:44:45.209096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:32:47.836 [2024-11-02 11:44:45.209119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:1976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.836 [2024-11-02 11:44:45.209135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:32:47.836 [2024-11-02 11:44:45.209172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:2008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.836 [2024-11-02 11:44:45.209189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:32:47.836 [2024-11-02 11:44:45.209211] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.836 [2024-11-02 11:44:45.209228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:32:47.836 [2024-11-02 11:44:45.209250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:1808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.836 [2024-11-02 11:44:45.209292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:32:47.836 [2024-11-02 11:44:45.209319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:1840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.836 [2024-11-02 11:44:45.209337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:32:47.836 [2024-11-02 11:44:45.209360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:1872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.836 [2024-11-02 11:44:45.209376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:32:47.836 [2024-11-02 11:44:45.209399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:1904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.836 [2024-11-02 11:44:45.209416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:32:47.836 [2024-11-02 11:44:45.209444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:1936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.837 [2024-11-02 11:44:45.209462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:47.837 [2024-11-02 11:44:45.209485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:1968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.837 [2024-11-02 11:44:45.209517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:47.837 [2024-11-02 11:44:45.209542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:2000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.837 [2024-11-02 11:44:45.209580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:32:47.837 [2024-11-02 11:44:45.211166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:2032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.837 [2024-11-02 11:44:45.211205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:32:47.837 [2024-11-02 11:44:45.211232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.837 [2024-11-02 11:44:45.211275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:32:47.837 
[2024-11-02 11:44:45.211302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.837 [2024-11-02 11:44:45.211319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:32:47.837 [2024-11-02 11:44:45.211342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:1224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.837 [2024-11-02 11:44:45.211359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:32:47.837 [2024-11-02 11:44:45.211381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:1456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.837 [2024-11-02 11:44:45.211398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:32:47.837 [2024-11-02 11:44:45.211421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.837 [2024-11-02 11:44:45.211438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:32:47.837 [2024-11-02 11:44:45.211461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:1336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.837 [2024-11-02 11:44:45.211478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:32:47.837 [2024-11-02 11:44:45.211502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:1472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.837 [2024-11-02 11:44:45.211519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:32:47.837 [2024-11-02 11:44:45.211542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:1688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.837 [2024-11-02 11:44:45.211584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:32:47.837 [2024-11-02 11:44:45.211608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:1080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.837 [2024-11-02 11:44:45.211645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:32:47.837 [2024-11-02 11:44:45.211670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:2056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.837 [2024-11-02 11:44:45.211686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:32:47.837 [2024-11-02 11:44:45.211710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:2088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.837 [2024-11-02 11:44:45.211727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:38 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:32:47.837 [2024-11-02 11:44:45.211749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:2120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.837 [2024-11-02 11:44:45.211781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:32:47.837 [2024-11-02 11:44:45.211803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:1376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.837 [2024-11-02 11:44:45.211819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:32:47.837 [2024-11-02 11:44:45.211841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:1600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.837 [2024-11-02 11:44:45.211857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:32:47.837 [2024-11-02 11:44:45.211879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:1016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.837 [2024-11-02 11:44:45.211895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:32:47.837 [2024-11-02 11:44:45.211916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:2160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.837 [2024-11-02 11:44:45.211932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:32:47.837 [2024-11-02 11:44:45.211953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:2192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.837 [2024-11-02 11:44:45.211969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:32:47.837 [2024-11-02 11:44:45.211991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:1344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.837 [2024-11-02 11:44:45.212007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:32:47.837 [2024-11-02 11:44:45.212028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.837 [2024-11-02 11:44:45.212043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:32:47.837 [2024-11-02 11:44:45.212066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:2064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.837 [2024-11-02 11:44:45.212081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:32:47.837 [2024-11-02 11:44:45.212102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:2096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.837 [2024-11-02 11:44:45.212122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:32:47.837 [2024-11-02 11:44:45.212144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:2128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.837 [2024-11-02 11:44:45.212160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:32:47.837 [2024-11-02 11:44:45.212181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:1280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.837 [2024-11-02 11:44:45.212196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:32:47.837 [2024-11-02 11:44:45.212217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:1776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.837 [2024-11-02 11:44:45.212233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:32:47.837 [2024-11-02 11:44:45.212280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:2216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.837 [2024-11-02 11:44:45.212298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:32:47.837 [2024-11-02 11:44:45.212337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:2248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.837 [2024-11-02 11:44:45.212355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:32:47.837 [2024-11-02 11:44:45.212378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.837 [2024-11-02 11:44:45.212396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:32:47.838 [2024-11-02 11:44:45.212420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:2312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.838 [2024-11-02 11:44:45.212437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:32:47.838 [2024-11-02 11:44:45.212461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:2136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.838 [2024-11-02 11:44:45.212478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:47.838 [2024-11-02 11:44:45.212501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.838 [2024-11-02 11:44:45.212519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:47.838 [2024-11-02 11:44:45.212558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:1816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.838 [2024-11-02 11:44:45.212578] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:32:47.838 [2024-11-02 11:44:45.212616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:1880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.838 [2024-11-02 11:44:45.212632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:32:47.838 [2024-11-02 11:44:45.212654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:1944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.838 [2024-11-02 11:44:45.212671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:32:47.838 [2024-11-02 11:44:45.212697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:2008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.838 [2024-11-02 11:44:45.212713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:32:47.838 [2024-11-02 11:44:45.212735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:1808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.838 [2024-11-02 11:44:45.212751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:47.838 [2024-11-02 11:44:45.212772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:1872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.838 [2024-11-02 11:44:45.212788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:32:47.838 [2024-11-02 11:44:45.212809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:1936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.838 [2024-11-02 11:44:45.212825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:32:47.838 [2024-11-02 11:44:45.212846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.838 [2024-11-02 11:44:45.212863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:47.838 [2024-11-02 11:44:45.215950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:2224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.838 [2024-11-02 11:44:45.215975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:47.838 [2024-11-02 11:44:45.216003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:2256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.838 [2024-11-02 11:44:45.216021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:47.838 [2024-11-02 11:44:45.216060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:2288 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:32:47.838 [2024-11-02 11:44:45.216077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:32:47.838 [2024-11-02 11:44:45.216101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:2344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.838 [2024-11-02 11:44:45.216119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:32:47.838 [2024-11-02 11:44:45.216140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:2360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.838 [2024-11-02 11:44:45.216157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:32:47.838 [2024-11-02 11:44:45.216179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:2376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.838 [2024-11-02 11:44:45.216194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:32:47.838 [2024-11-02 11:44:45.216216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:2392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.838 [2024-11-02 11:44:45.216247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:32:47.838 [2024-11-02 11:44:45.216278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:2408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.838 [2024-11-02 11:44:45.216315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:32:47.838 [2024-11-02 11:44:45.216340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:2424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.838 [2024-11-02 11:44:45.216357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:32:47.838 [2024-11-02 11:44:45.216380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:2440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.838 [2024-11-02 11:44:45.216397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:32:47.838 [2024-11-02 11:44:45.216420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:2456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.838 [2024-11-02 11:44:45.216436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:32:47.838 [2024-11-02 11:44:45.216459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:2336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.838 [2024-11-02 11:44:45.216476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:32:47.838 [2024-11-02 11:44:45.216498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 
nsid:1 lba:2472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.838 [2024-11-02 11:44:45.216515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:32:47.838 [2024-11-02 11:44:45.216537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:2488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.838 [2024-11-02 11:44:45.216555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:32:47.838 [2024-11-02 11:44:45.216578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:2504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.838 [2024-11-02 11:44:45.216612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:32:47.838 [2024-11-02 11:44:45.216635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:2520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.838 [2024-11-02 11:44:45.216651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:32:47.838 [2024-11-02 11:44:45.216673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.838 [2024-11-02 11:44:45.216703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:47.838 [2024-11-02 11:44:45.216727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.838 [2024-11-02 11:44:45.216744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:47.838 [2024-11-02 11:44:45.216784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:1224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.838 [2024-11-02 11:44:45.216801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:32:47.838 [2024-11-02 11:44:45.216823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.839 [2024-11-02 11:44:45.216849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:32:47.839 [2024-11-02 11:44:45.216874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:1472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.839 [2024-11-02 11:44:45.216892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:32:47.839 [2024-11-02 11:44:45.216915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:1080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.839 [2024-11-02 11:44:45.216932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:47.839 [2024-11-02 11:44:45.216955] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:2088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.839 [2024-11-02 11:44:45.216973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:47.839 [2024-11-02 11:44:45.216996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:1376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.839 [2024-11-02 11:44:45.217013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:47.839 [2024-11-02 11:44:45.217037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:1016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.839 [2024-11-02 11:44:45.217054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:47.839 [2024-11-02 11:44:45.217077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.839 [2024-11-02 11:44:45.217094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:32:47.839 [2024-11-02 11:44:45.217118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.839 [2024-11-02 11:44:45.217151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:32:47.839 [2024-11-02 11:44:45.217174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:2096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.839 [2024-11-02 11:44:45.217191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:32:47.839 [2024-11-02 11:44:45.217229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.839 [2024-11-02 11:44:45.217246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:32:47.839 [2024-11-02 11:44:45.217302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:2216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.839 [2024-11-02 11:44:45.217321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:32:47.839 [2024-11-02 11:44:45.217344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:2280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.839 [2024-11-02 11:44:45.217362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:47.839 [2024-11-02 11:44:45.217385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:2136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.839 [2024-11-02 11:44:45.217402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006a p:0 m:0 dnr:0 
00:32:47.839 [2024-11-02 11:44:45.217430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:1816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.839 [2024-11-02 11:44:45.217448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:47.839 [2024-11-02 11:44:45.217471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:1944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.839 [2024-11-02 11:44:45.217488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:32:47.839 [2024-11-02 11:44:45.217511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:1808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.839 [2024-11-02 11:44:45.217543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:32:47.839 [2024-11-02 11:44:45.217567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:1936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.839 [2024-11-02 11:44:45.217608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:32:47.839 [2024-11-02 11:44:45.217632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:1792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.839 [2024-11-02 11:44:45.217648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:47.839 [2024-11-02 11:44:45.217669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:1856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.839 [2024-11-02 11:44:45.217684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:47.839 [2024-11-02 11:44:45.217705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:1920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.839 [2024-11-02 11:44:45.217721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:32:47.839 [2024-11-02 11:44:45.217743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:1984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.839 [2024-11-02 11:44:45.217759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:32:47.839 [2024-11-02 11:44:45.217780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:1680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.839 [2024-11-02 11:44:45.217795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:32:47.839 [2024-11-02 11:44:45.217817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:2104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.839 [2024-11-02 11:44:45.217833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:16 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:32:47.839 [2024-11-02 11:44:45.217853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:2176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.839 [2024-11-02 11:44:45.217869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:32:47.839 [2024-11-02 11:44:45.217890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:2232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.839 [2024-11-02 11:44:45.217906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:32:47.839 [2024-11-02 11:44:45.217932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:2296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.839 [2024-11-02 11:44:45.217949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:32:47.839 [2024-11-02 11:44:45.217970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:2544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.839 [2024-11-02 11:44:45.217985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:32:47.839 [2024-11-02 11:44:45.218007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:2560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.839 [2024-11-02 11:44:45.218022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:32:47.839 [2024-11-02 11:44:45.218044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:2576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.839 [2024-11-02 11:44:45.218060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:47.839 [2024-11-02 11:44:45.218082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:2592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.839 [2024-11-02 11:44:45.218098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:47.839 [2024-11-02 11:44:45.219492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:1840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.839 [2024-11-02 11:44:45.219516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:47.839 [2024-11-02 11:44:45.219544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:1968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.840 [2024-11-02 11:44:45.219570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:47.840 [2024-11-02 11:44:45.219593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:2616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.840 [2024-11-02 11:44:45.219611] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:32:47.840 [2024-11-02 11:44:45.219635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:2632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.840 [2024-11-02 11:44:45.219653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:47.840 [2024-11-02 11:44:45.219690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:2648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.840 [2024-11-02 11:44:45.219708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.840 [2024-11-02 11:44:45.219731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.840 [2024-11-02 11:44:45.219762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.840 [2024-11-02 11:44:45.219785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:2680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.840 [2024-11-02 11:44:45.219800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:47.840 [2024-11-02 11:44:45.219839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:2696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.840 [2024-11-02 11:44:45.219860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:32:47.840 [2024-11-02 11:44:45.219884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.840 [2024-11-02 11:44:45.219901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:32:47.840 [2024-11-02 11:44:45.221283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.840 [2024-11-02 11:44:45.221312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:32:47.840 [2024-11-02 11:44:45.221339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:2744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.840 [2024-11-02 11:44:45.221358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:32:47.840 [2024-11-02 11:44:45.221382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:2760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.840 [2024-11-02 11:44:45.221399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:32:47.840 [2024-11-02 11:44:45.221423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:2776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.840 
[2024-11-02 11:44:45.221441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:32:47.840 [2024-11-02 11:44:45.221464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:2792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.840 [2024-11-02 11:44:45.221481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:32:47.840 [2024-11-02 11:44:45.221505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.840 [2024-11-02 11:44:45.221522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:32:47.840 [2024-11-02 11:44:45.221545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:2368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.840 [2024-11-02 11:44:45.221565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:32:47.840 [2024-11-02 11:44:45.221603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:2400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.840 [2024-11-02 11:44:45.221629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:32:47.840 [2024-11-02 11:44:45.221652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:2432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.840 [2024-11-02 11:44:45.221669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:32:47.840 [2024-11-02 11:44:45.221692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:2256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.840 [2024-11-02 11:44:45.221709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:32:47.840 [2024-11-02 11:44:45.221732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:2344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.840 [2024-11-02 11:44:45.221769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:32:47.840 [2024-11-02 11:44:45.221794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:2376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.840 [2024-11-02 11:44:45.221811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:32:47.840 [2024-11-02 11:44:45.221834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:2408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.840 [2024-11-02 11:44:45.221851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:32:47.840 [2024-11-02 11:44:45.221873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:2440 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:32:47.840 [2024-11-02 11:44:45.221890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:32:47.840 [2024-11-02 11:44:45.221912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:2336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.840 [2024-11-02 11:44:45.221928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:32:47.840 [2024-11-02 11:44:45.221951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:2488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.840 [2024-11-02 11:44:45.221967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:32:47.840 [2024-11-02 11:44:45.221989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:2520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.840 [2024-11-02 11:44:45.222005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:32:47.840 [2024-11-02 11:44:45.222027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:1664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.841 [2024-11-02 11:44:45.222044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:32:47.841 [2024-11-02 11:44:45.222067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.841 [2024-11-02 11:44:45.222083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:32:47.841 [2024-11-02 11:44:45.222105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:1080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.841 [2024-11-02 11:44:45.222122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:32:47.841 [2024-11-02 11:44:45.222144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:1376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.841 [2024-11-02 11:44:45.222160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:32:47.841 [2024-11-02 11:44:45.222182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:2192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.841 [2024-11-02 11:44:45.222198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:32:47.841 [2024-11-02 11:44:45.222220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:2096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.841 [2024-11-02 11:44:45.222252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:32:47.841 [2024-11-02 11:44:45.222291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:27 nsid:1 lba:2216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.841 [2024-11-02 11:44:45.222314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:32:47.841 [2024-11-02 11:44:45.222336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:2136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.841 [2024-11-02 11:44:45.222353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:32:47.841 [2024-11-02 11:44:45.222408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:1944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.841 [2024-11-02 11:44:45.222428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:32:47.841 [2024-11-02 11:44:45.222452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:1936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.841 [2024-11-02 11:44:45.222470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:32:47.841 [2024-11-02 11:44:45.222493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:1856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.841 [2024-11-02 11:44:45.222510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:32:47.841 [2024-11-02 11:44:45.222533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:1984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.841 [2024-11-02 11:44:45.222562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:47.841 [2024-11-02 11:44:45.222585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.841 [2024-11-02 11:44:45.222602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:47.841 [2024-11-02 11:44:45.222626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:2232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.841 [2024-11-02 11:44:45.222643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:32:47.841 [2024-11-02 11:44:45.222666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:2544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.841 [2024-11-02 11:44:45.222684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:32:47.841 [2024-11-02 11:44:45.223601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.841 [2024-11-02 11:44:45.223624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:32:47.841 [2024-11-02 11:44:45.223649] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:2464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.841 [2024-11-02 11:44:45.223666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:32:47.841 [2024-11-02 11:44:45.223687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:2496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.841 [2024-11-02 11:44:45.223703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:32:47.841 [2024-11-02 11:44:45.223746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:2528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.841 [2024-11-02 11:44:45.223764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:32:47.841 [2024-11-02 11:44:45.223801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:2056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.841 [2024-11-02 11:44:45.223819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:32:47.841 [2024-11-02 11:44:45.223841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:2816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.841 [2024-11-02 11:44:45.223859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:32:47.841 [2024-11-02 11:44:45.223898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:2832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.841 [2024-11-02 11:44:45.223916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:32:47.841 [2024-11-02 11:44:45.223938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:2848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.841 [2024-11-02 11:44:45.223956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:32:47.841 [2024-11-02 11:44:45.223979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.841 [2024-11-02 11:44:45.223997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:32:47.841 [2024-11-02 11:44:45.224021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.841 [2024-11-02 11:44:45.224039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:32:47.841 [2024-11-02 11:44:45.224062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.841 [2024-11-02 11:44:45.224080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002f p:0 m:0 dnr:0 
00:32:47.841 [2024-11-02 11:44:45.224104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:2632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.841 [2024-11-02 11:44:45.224121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:32:47.841 [2024-11-02 11:44:45.224144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:2664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.841 [2024-11-02 11:44:45.224161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:32:47.841 [2024-11-02 11:44:45.224201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.841 [2024-11-02 11:44:45.224219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:32:47.841 [2024-11-02 11:44:45.224855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:2552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.841 [2024-11-02 11:44:45.224880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:32:47.841 [2024-11-02 11:44:45.224908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:2584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.841 [2024-11-02 11:44:45.224931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:32:47.841 [2024-11-02 11:44:45.224957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:2864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.842 [2024-11-02 11:44:45.224975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:32:47.842 [2024-11-02 11:44:45.224998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.842 [2024-11-02 11:44:45.225015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:32:47.842 [2024-11-02 11:44:45.225039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:2600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.842 [2024-11-02 11:44:45.225056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:32:47.842 [2024-11-02 11:44:45.225080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:2904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.842 [2024-11-02 11:44:45.225096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:32:47.842 [2024-11-02 11:44:45.225120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:2920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.842 [2024-11-02 11:44:45.225153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:21 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:32:47.842 [2024-11-02 11:44:45.225177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:2936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.842 [2024-11-02 11:44:45.225194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:32:47.842 [2024-11-02 11:44:45.225216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:2952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.842 [2024-11-02 11:44:45.225247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:32:47.842 [2024-11-02 11:44:45.225283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:2968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.842 [2024-11-02 11:44:45.225303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:32:47.842 [2024-11-02 11:44:45.225326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:2984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.842 [2024-11-02 11:44:45.225344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:32:47.842 [2024-11-02 11:44:45.225367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:3000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.842 [2024-11-02 11:44:45.225384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:32:47.842 [2024-11-02 11:44:45.225408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:3016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.842 [2024-11-02 11:44:45.225425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:32:47.842 [2024-11-02 11:44:45.225448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:2624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.842 [2024-11-02 11:44:45.225470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:32:47.842 [2024-11-02 11:44:45.225494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:2656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.842 [2024-11-02 11:44:45.225511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:47.842 [2024-11-02 11:44:45.225535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:2688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.842 [2024-11-02 11:44:45.225573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:47.842 [2024-11-02 11:44:45.225597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:2744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.842 [2024-11-02 11:44:45.225613] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:32:47.842 [2024-11-02 11:44:45.225636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:2776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.842 [2024-11-02 11:44:45.225653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:32:47.842 [2024-11-02 11:44:45.225675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.842 [2024-11-02 11:44:45.225692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:32:47.842 [2024-11-02 11:44:45.225714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:2400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.842 [2024-11-02 11:44:45.225731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:32:47.842 [2024-11-02 11:44:45.225754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:2256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.842 [2024-11-02 11:44:45.225771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:47.842 [2024-11-02 11:44:45.225793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:2376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.842 [2024-11-02 11:44:45.225825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:32:47.842 [2024-11-02 11:44:45.225848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:2440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.842 [2024-11-02 11:44:45.225864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:32:47.842 [2024-11-02 11:44:45.225887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:2488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.842 [2024-11-02 11:44:45.225904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:47.842 [2024-11-02 11:44:45.225925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:1664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.842 [2024-11-02 11:44:45.225941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:47.842 [2024-11-02 11:44:45.225963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:1080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.842 [2024-11-02 11:44:45.225979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:47.842 [2024-11-02 11:44:45.226004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:2192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.842 [2024-11-02 11:44:45.226021] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:32:47.842 [2024-11-02 11:44:45.226043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:2216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.842 [2024-11-02 11:44:45.226059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:32:47.842 [2024-11-02 11:44:45.226082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:1944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.842 [2024-11-02 11:44:45.226098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:32:47.842 [2024-11-02 11:44:45.226120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:1856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.842 [2024-11-02 11:44:45.226136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:32:47.842 [2024-11-02 11:44:45.226157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:2104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.842 [2024-11-02 11:44:45.226174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:32:47.842 [2024-11-02 11:44:45.226212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:2544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.842 [2024-11-02 11:44:45.226229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:32:47.842 [2024-11-02 11:44:45.226795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:2736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.842 [2024-11-02 11:44:45.226820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:32:47.843 [2024-11-02 11:44:45.226847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:2768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.843 [2024-11-02 11:44:45.226866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:32:47.843 [2024-11-02 11:44:45.226890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:2800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.843 [2024-11-02 11:44:45.226908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:32:47.843 [2024-11-02 11:44:45.226931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:2392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.843 [2024-11-02 11:44:45.226949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:32:47.843 [2024-11-02 11:44:45.226972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:2424 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:32:47.843 [2024-11-02 11:44:45.226989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:32:47.843 [2024-11-02 11:44:45.227029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:2472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.843 [2024-11-02 11:44:45.227046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:32:47.843 [2024-11-02 11:44:45.227074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:2536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.843 [2024-11-02 11:44:45.227091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:32:47.843 [2024-11-02 11:44:45.227114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:2088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.843 [2024-11-02 11:44:45.227131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:32:47.843 [2024-11-02 11:44:45.227154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.843 [2024-11-02 11:44:45.227170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:47.843 [2024-11-02 11:44:45.227193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.843 [2024-11-02 11:44:45.227210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:47.843 [2024-11-02 11:44:45.227248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:2816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.843 [2024-11-02 11:44:45.227277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:32:47.843 [2024-11-02 11:44:45.227302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.843 [2024-11-02 11:44:45.227321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:32:47.843 [2024-11-02 11:44:45.227345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.843 [2024-11-02 11:44:45.227362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:32:47.843 [2024-11-02 11:44:45.227386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.843 [2024-11-02 11:44:45.227404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:47.843 [2024-11-02 11:44:45.227427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 
lba:2696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.843 [2024-11-02 11:44:45.227445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:47.843 [2024-11-02 11:44:45.228974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:3032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.843 [2024-11-02 11:44:45.228999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:47.843 [2024-11-02 11:44:45.229027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:3048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.843 [2024-11-02 11:44:45.229046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:47.843 [2024-11-02 11:44:45.229069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:3064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.843 [2024-11-02 11:44:45.229087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:32:47.843 [2024-11-02 11:44:45.229111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:1808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.843 [2024-11-02 11:44:45.229134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:32:47.843 [2024-11-02 11:44:45.229158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:2584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.843 [2024-11-02 11:44:45.229175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:32:47.843 [2024-11-02 11:44:45.229199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:2880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.843 [2024-11-02 11:44:45.229216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:32:47.843 [2024-11-02 11:44:45.229239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:2904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.843 [2024-11-02 11:44:45.229266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:32:47.843 [2024-11-02 11:44:45.229292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:2936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.843 [2024-11-02 11:44:45.229310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:47.843 [2024-11-02 11:44:45.229333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:2968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.843 [2024-11-02 11:44:45.229350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:32:47.843 [2024-11-02 11:44:45.229373] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:3000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.843 [2024-11-02 11:44:45.229390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:47.843 [2024-11-02 11:44:45.229413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:2624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.843 [2024-11-02 11:44:45.229431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:32:47.843 [2024-11-02 11:44:45.229454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:2688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.843 [2024-11-02 11:44:45.229470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:32:47.843 [2024-11-02 11:44:45.229493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:2776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.843 [2024-11-02 11:44:45.229510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:32:47.843 [2024-11-02 11:44:45.229553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:2400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.843 [2024-11-02 11:44:45.229570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:47.843 [2024-11-02 11:44:45.229593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:2376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.843 [2024-11-02 11:44:45.229624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:47.843 [2024-11-02 11:44:45.229646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:2488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.843 [2024-11-02 11:44:45.229662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:32:47.843 [2024-11-02 11:44:45.229687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:1080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.844 [2024-11-02 11:44:45.229705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:32:47.844 [2024-11-02 11:44:45.229726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.844 [2024-11-02 11:44:45.229742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:32:47.844 [2024-11-02 11:44:45.229763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:1856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.844 [2024-11-02 11:44:45.229779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 
00:32:47.844 [2024-11-02 11:44:45.229800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:2544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.844 [2024-11-02 11:44:45.229816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:32:47.844 [2024-11-02 11:44:45.229838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:2592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.844 [2024-11-02 11:44:45.229854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:32:47.844 [2024-11-02 11:44:45.229875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:2840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.844 [2024-11-02 11:44:45.229891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:32:47.844 [2024-11-02 11:44:45.229912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:2768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.844 [2024-11-02 11:44:45.229928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:32:47.844 [2024-11-02 11:44:45.229949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.844 [2024-11-02 11:44:45.229965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:32:47.844 [2024-11-02 11:44:45.229986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.844 [2024-11-02 11:44:45.230002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:47.844 [2024-11-02 11:44:45.230023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.844 [2024-11-02 11:44:45.230039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:47.844 [2024-11-02 11:44:45.230060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:2528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.844 [2024-11-02 11:44:45.230076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:47.844 [2024-11-02 11:44:45.230097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:2848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.844 [2024-11-02 11:44:45.230113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:47.844 [2024-11-02 11:44:45.230139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.844 [2024-11-02 11:44:45.230156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:32:47.844 [2024-11-02 11:44:45.232844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:2616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.844 [2024-11-02 11:44:45.232868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:47.844 [2024-11-02 11:44:45.232896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:2680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.844 [2024-11-02 11:44:45.232933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.844 [2024-11-02 11:44:45.232958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:3080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.844 [2024-11-02 11:44:45.232991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.844 [2024-11-02 11:44:45.233016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:3096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.844 [2024-11-02 11:44:45.233049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:47.844 [2024-11-02 11:44:45.233075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:3112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.844 [2024-11-02 11:44:45.233092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:32:47.844 [2024-11-02 11:44:45.233116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:3128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.844 [2024-11-02 11:44:45.233134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:32:47.844 [2024-11-02 11:44:45.233157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:3144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.844 [2024-11-02 11:44:45.233174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:32:47.844 [2024-11-02 11:44:45.233197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:3160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.844 [2024-11-02 11:44:45.233215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:32:47.844 [2024-11-02 11:44:45.233239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:3176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.844 [2024-11-02 11:44:45.233263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:32:47.844 [2024-11-02 11:44:45.233290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:3192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.844 [2024-11-02 11:44:45.233308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:32:47.844 [2024-11-02 11:44:45.233331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:3208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.844 [2024-11-02 11:44:45.233349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:32:47.844 [2024-11-02 11:44:45.233372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:2856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.844 [2024-11-02 11:44:45.233395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:32:47.844 [2024-11-02 11:44:45.233420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:2888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.844 [2024-11-02 11:44:45.233438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:32:47.844 [2024-11-02 11:44:45.233461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:2912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.844 [2024-11-02 11:44:45.233478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:32:47.844 [2024-11-02 11:44:45.233502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:2944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.844 [2024-11-02 11:44:45.233519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:32:47.844 [2024-11-02 11:44:45.233558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:2976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.844 [2024-11-02 11:44:45.233576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:32:47.844 [2024-11-02 11:44:45.233599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:3008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.844 [2024-11-02 11:44:45.233631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:32:47.844 [2024-11-02 11:44:45.233653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:2760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.844 [2024-11-02 11:44:45.233684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:32:47.844 [2024-11-02 11:44:45.233708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:2344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.844 [2024-11-02 11:44:45.233725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:32:47.844 [2024-11-02 11:44:45.233747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:2408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.844 [2024-11-02 11:44:45.233779] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:32:47.844 [2024-11-02 11:44:45.233804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.845 [2024-11-02 11:44:45.233821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:32:47.845 [2024-11-02 11:44:45.233844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.845 [2024-11-02 11:44:45.233861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:32:47.845 [2024-11-02 11:44:45.233883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:2880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.845 [2024-11-02 11:44:45.233900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:32:47.845 [2024-11-02 11:44:45.233923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.845 [2024-11-02 11:44:45.233948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:32:47.845 [2024-11-02 11:44:45.233972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.845 [2024-11-02 11:44:45.233989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:32:47.845 [2024-11-02 11:44:45.234012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.845 [2024-11-02 11:44:45.234030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:32:47.845 [2024-11-02 11:44:45.234053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:2400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.845 [2024-11-02 11:44:45.234085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:32:47.845 [2024-11-02 11:44:45.234109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:2488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.845 [2024-11-02 11:44:45.234140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:32:47.845 [2024-11-02 11:44:45.234163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:2216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.845 [2024-11-02 11:44:45.234180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:32:47.845 [2024-11-02 11:44:45.234201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:2544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:32:47.845 [2024-11-02 11:44:45.234217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:32:47.845 [2024-11-02 11:44:45.234239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:2840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.845 [2024-11-02 11:44:45.234275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:32:47.845 [2024-11-02 11:44:45.234303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.845 [2024-11-02 11:44:45.234321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:32:47.845 [2024-11-02 11:44:45.234361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:2088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.845 [2024-11-02 11:44:45.234378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:32:47.845 [2024-11-02 11:44:45.234401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:2848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.845 [2024-11-02 11:44:45.234419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:32:47.845 [2024-11-02 11:44:45.234443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:2520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.845 [2024-11-02 11:44:45.234460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:47.845 [2024-11-02 11:44:45.234483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:3024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.845 [2024-11-02 11:44:45.234499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:47.845 [2024-11-02 11:44:45.234528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:2832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.845 [2024-11-02 11:44:45.234546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:32:47.845 [2024-11-02 11:44:45.235758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:3232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.845 [2024-11-02 11:44:45.235782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:32:47.845 [2024-11-02 11:44:45.235836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:3248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.845 [2024-11-02 11:44:45.235857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:32:47.845 [2024-11-02 11:44:45.235882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 
lba:3264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.845 [2024-11-02 11:44:45.235900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:32:47.845 [2024-11-02 11:44:45.235924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:3280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.845 [2024-11-02 11:44:45.235941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:32:47.845 [2024-11-02 11:44:45.235964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:3296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.845 [2024-11-02 11:44:45.235981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:32:47.845 [2024-11-02 11:44:45.236005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:3312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.845 [2024-11-02 11:44:45.236022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:32:47.845 [2024-11-02 11:44:45.236046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:3328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.845 [2024-11-02 11:44:45.236063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:32:47.845 [2024-11-02 11:44:45.236085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:3344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.845 [2024-11-02 11:44:45.236118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:32:47.845 [2024-11-02 11:44:45.236141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:3360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.845 [2024-11-02 11:44:45.236157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:32:47.845 [2024-11-02 11:44:45.236194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:3376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.845 [2024-11-02 11:44:45.236210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:32:47.845 [2024-11-02 11:44:45.236231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:3392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.845 [2024-11-02 11:44:45.236247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:32:47.846 [2024-11-02 11:44:45.236298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:3408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.846 [2024-11-02 11:44:45.236317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:32:47.846 [2024-11-02 11:44:45.236339] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:3040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.846 [2024-11-02 11:44:45.236355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:32:47.846 [2024-11-02 11:44:45.236377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:3072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.846 [2024-11-02 11:44:45.236393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:32:47.846 [2024-11-02 11:44:45.236415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:2920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.846 [2024-11-02 11:44:45.236431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:32:47.846 [2024-11-02 11:44:45.236452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:2984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.846 [2024-11-02 11:44:45.236468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:32:47.846 [2024-11-02 11:44:45.236490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.846 [2024-11-02 11:44:45.236506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:32:47.846 [2024-11-02 11:44:45.236528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:2440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.846 [2024-11-02 11:44:45.236545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:32:47.846 [2024-11-02 11:44:45.236567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:3424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.846 [2024-11-02 11:44:45.236598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:32:47.846 [2024-11-02 11:44:45.236620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.846 [2024-11-02 11:44:45.236650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:32:47.846 [2024-11-02 11:44:45.236672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:3456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.846 [2024-11-02 11:44:45.236689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:32:47.846 [2024-11-02 11:44:45.236711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:3472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.846 [2024-11-02 11:44:45.236727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:32:47.846 
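Editorial note on the block of command/completion pairs above: every READ or WRITE print is immediately followed by its completion, and the (03/02) in each completion is the status code type / status code pair, i.e. SCT 0x3 (Path Related Status) with SC 0x02 (Asymmetric Access Inaccessible) in NVMe terms. That is the failure mode this multipath test is meant to provoke while the active path is reported ANA-inaccessible. To summarize a saved copy of this console output, a post-processing one-liner along these lines should work (hypothetical; console.log is not something the test produces):

  grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' console.log               # total failed completions
  grep -oE '\*NOTICE\*: (READ|WRITE) sqid:1' console.log | sort | uniq -c    # breakdown by opcode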
7742.91 IOPS, 30.25 MiB/s [2024-11-02T10:44:48.248Z] 7761.21 IOPS, 30.32 MiB/s [2024-11-02T10:44:48.248Z] 7778.00 IOPS, 30.38 MiB/s [2024-11-02T10:44:48.248Z] Received shutdown signal, test time was about 34.475651 seconds 00:32:47.846 00:32:47.846 Latency(us) 00:32:47.846 [2024-11-02T10:44:48.248Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:47.846 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:32:47.846 Verification LBA range: start 0x0 length 0x4000 00:32:47.846 Nvme0n1 : 34.47 7784.32 30.41 0.00 0.00 16417.39 239.69 4026531.84 00:32:47.846 [2024-11-02T10:44:48.248Z] =================================================================================================================== 00:32:47.846 [2024-11-02T10:44:48.248Z] Total : 7784.32 30.41 0.00 0.00 16417.39 239.69 4026531.84 00:32:47.846 11:44:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:48.115 11:44:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:32:48.115 11:44:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:48.115 11:44:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:32:48.115 11:44:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:48.115 11:44:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:32:48.115 11:44:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:48.115 11:44:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:32:48.115 11:44:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:48.115 11:44:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:48.115 rmmod nvme_tcp 00:32:48.115 rmmod nvme_fabrics 00:32:48.115 rmmod nvme_keyring 00:32:48.115 11:44:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:48.115 11:44:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:32:48.115 11:44:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:32:48.115 11:44:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 3947588 ']' 00:32:48.115 11:44:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 3947588 00:32:48.115 11:44:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 3947588 ']' 00:32:48.115 11:44:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 3947588 00:32:48.115 11:44:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname 00:32:48.115 11:44:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:48.115 11:44:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3947588 00:32:48.115 11:44:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # 
process_name=reactor_0 00:32:48.115 11:44:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:32:48.115 11:44:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3947588' 00:32:48.115 killing process with pid 3947588 00:32:48.115 11:44:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 3947588 00:32:48.115 11:44:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 3947588 00:32:48.373 11:44:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:48.373 11:44:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:48.373 11:44:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:48.373 11:44:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:32:48.373 11:44:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:32:48.373 11:44:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:32:48.373 11:44:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:48.373 11:44:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:48.373 11:44:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:48.373 11:44:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:48.373 11:44:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:48.373 11:44:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:50.914 11:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:50.914 00:32:50.914 real 0m43.257s 00:32:50.914 user 2m12.277s 00:32:50.914 sys 0m10.618s 00:32:50.914 11:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:50.914 11:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:50.914 ************************************ 00:32:50.914 END TEST nvmf_host_multipath_status 00:32:50.914 ************************************ 00:32:50.914 11:44:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:32:50.914 11:44:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:32:50.914 11:44:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:50.914 11:44:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.914 ************************************ 00:32:50.914 START TEST nvmf_discovery_remove_ifc 00:32:50.914 ************************************ 00:32:50.914 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:32:50.914 * Looking for test storage... 
00:32:50.914 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:50.914 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:50.914 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lcov --version 00:32:50.914 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:50.914 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:50.914 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:50.914 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:50.914 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:50.914 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:32:50.914 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:32:50.914 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:32:50.914 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:32:50.914 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:32:50.914 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:32:50.914 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:32:50.914 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:50.914 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:32:50.914 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:32:50.914 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:50.914 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:50.914 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:32:50.914 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:32:50.914 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:50.914 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:32:50.914 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:32:50.914 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:32:50.914 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:32:50.914 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:50.914 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:32:50.914 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:32:50.914 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:50.914 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:50.914 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:32:50.914 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:50.914 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:50.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:50.914 --rc genhtml_branch_coverage=1 00:32:50.914 --rc genhtml_function_coverage=1 00:32:50.914 --rc genhtml_legend=1 00:32:50.914 --rc geninfo_all_blocks=1 00:32:50.914 --rc geninfo_unexecuted_blocks=1 00:32:50.914 00:32:50.914 ' 00:32:50.914 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:50.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:50.914 --rc genhtml_branch_coverage=1 00:32:50.914 --rc genhtml_function_coverage=1 00:32:50.914 --rc genhtml_legend=1 00:32:50.914 --rc geninfo_all_blocks=1 00:32:50.914 --rc geninfo_unexecuted_blocks=1 00:32:50.914 00:32:50.914 ' 00:32:50.914 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:50.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:50.914 --rc genhtml_branch_coverage=1 00:32:50.914 --rc genhtml_function_coverage=1 00:32:50.914 --rc genhtml_legend=1 00:32:50.914 --rc geninfo_all_blocks=1 00:32:50.914 --rc geninfo_unexecuted_blocks=1 00:32:50.914 00:32:50.914 ' 00:32:50.914 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:50.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:50.915 --rc genhtml_branch_coverage=1 00:32:50.915 --rc genhtml_function_coverage=1 00:32:50.915 --rc genhtml_legend=1 00:32:50.915 --rc geninfo_all_blocks=1 00:32:50.915 --rc geninfo_unexecuted_blocks=1 00:32:50.915 00:32:50.915 ' 00:32:50.915 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:50.915 
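The cmp_versions trace just above is the harness deciding which lcov option spelling to export: lt 1.15 2 splits both version strings on dots, dashes and colons and compares the fields numerically from left to right, and because 1 < 2 the lcov 1.x style --rc lcov_*_coverage options are kept. A minimal standalone sketch of that comparison, using a hypothetical helper name rather than the real scripts/common.sh functions, looks like this:

  ver_lt() {                         # true (exit 0) when $1 sorts before $2 as a dotted version
      local IFS='.-:'
      local -a a b
      read -ra a <<< "$1"
      read -ra b <<< "$2"
      local i
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( 10#${a[i]:-0} < 10#${b[i]:-0} )) && return 0    # first lower field decides
          (( 10#${a[i]:-0} > 10#${b[i]:-0} )) && return 1
      done
      return 1                       # equal versions are not less-than
  }
  ver_lt 1.15 2 && echo 'lcov 1.15 is older than 2: keep the --rc lcov_branch_coverage=1 spelling'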
11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:32:50.915 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:50.915 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:50.915 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:50.915 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:50.915 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:50.915 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:50.915 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:50.915 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:50.915 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:50.915 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:50.915 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:50.915 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:50.915 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:50.915 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:50.915 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:50.915 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:50.915 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:50.915 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:32:50.915 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:50.915 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:50.915 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:50.915 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:50.915 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:50.915 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:50.915 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:32:50.915 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:50.915 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:32:50.915 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:50.915 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:50.915 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:50.915 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:50.915 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:50.915 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:50.915 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:50.915 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:50.915 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:50.915 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:50.915 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:32:50.915 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:32:50.915 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:32:50.915 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:32:50.915 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:32:50.915 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:32:50.915 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:32:50.915 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:50.915 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:50.915 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:50.915 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:50.915 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:50.915 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:50.915 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:50.915 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:50.915 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:50.915 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:50.915 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:32:50.915 11:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:52.837 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:52.837 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:32:52.837 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:52.837 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:52.837 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:52.837 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:52.837 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:52.837 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:32:52.837 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:52.837 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:32:52.837 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:32:52.837 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:32:52.837 11:44:52 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:32:52.837 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:32:52.837 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:32:52.837 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:52.837 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:52.837 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:52.837 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:52.837 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:52.837 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:52.837 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:52.837 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:52.837 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:52.837 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:52.837 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:52.837 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:52.838 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:52.838 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:52.838 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:52.838 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:52.838 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:52.838 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:52.838 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:52.838 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:52.838 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:52.838 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:52.838 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:52.838 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:52.838 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:52.838 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:52.838 11:44:52 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:52.838 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:52.838 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:52.838 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:52.838 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:52.838 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:52.838 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:52.838 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:52.838 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:52.838 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:52.838 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:52.838 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:52.838 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:52.838 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:52.838 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:52.838 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:52.838 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:52.838 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:52.838 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:52.838 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:52.838 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:52.838 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:52.838 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:52.838 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:52.838 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:52.838 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:52.838 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:52.838 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:52.838 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:52.838 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:52.838 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:32:52.838 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:52.838 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:32:52.838 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:52.838 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:52.838 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:52.839 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:52.839 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:52.839 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:52.839 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:52.839 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:52.839 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:52.839 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:52.839 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:52.839 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:52.839 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:52.839 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:52.839 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:52.839 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:52.839 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:52.839 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:52.839 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:52.839 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:52.839 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:52.839 11:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:52.839 11:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:52.839 11:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:52.839 11:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:52.839 
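Collected from the nvmf_tcp_init trace above: one E810 port (cvl_0_0) is moved into a private network namespace and becomes the target side at 10.0.0.2, while the other port (cvl_0_1) stays in the default namespace as the initiator side at 10.0.0.1. As a plain command sequence the setup is roughly:

  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic back in

The two pings that follow simply confirm reachability in both directions before any SPDK process is started.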
11:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:52.839 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:52.839 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:32:52.839 00:32:52.839 --- 10.0.0.2 ping statistics --- 00:32:52.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:52.839 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:32:52.839 11:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:52.839 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:52.839 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:32:52.839 00:32:52.839 --- 10.0.0.1 ping statistics --- 00:32:52.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:52.839 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:32:52.839 11:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:52.839 11:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:32:52.839 11:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:52.839 11:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:52.839 11:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:52.839 11:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:52.839 11:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:52.839 11:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:52.839 11:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:52.840 11:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:32:52.840 11:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:52.840 11:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:52.840 11:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:52.840 11:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=3954214 00:32:52.840 11:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:32:52.840 11:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 3954214 00:32:52.840 11:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 3954214 ']' 00:32:52.840 11:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:52.840 11:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:52.840 11:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:32:52.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:52.840 11:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:52.840 11:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:52.840 [2024-11-02 11:44:53.142605] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:32:52.840 [2024-11-02 11:44:53.142684] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:52.840 [2024-11-02 11:44:53.215006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:53.103 [2024-11-02 11:44:53.259458] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:53.103 [2024-11-02 11:44:53.259512] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:53.103 [2024-11-02 11:44:53.259532] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:53.103 [2024-11-02 11:44:53.259543] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:53.103 [2024-11-02 11:44:53.259552] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:53.103 [2024-11-02 11:44:53.260098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:53.103 11:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:53.103 11:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:32:53.103 11:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:53.103 11:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:53.103 11:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:53.104 11:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:53.104 11:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:32:53.104 11:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:53.104 11:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:53.104 [2024-11-02 11:44:53.409714] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:53.104 [2024-11-02 11:44:53.417939] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:32:53.104 null0 00:32:53.104 [2024-11-02 11:44:53.449877] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:53.104 11:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:53.104 11:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3954234 00:32:53.104 11:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 
--wait-for-rpc -L bdev_nvme 00:32:53.104 11:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3954234 /tmp/host.sock 00:32:53.104 11:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 3954234 ']' 00:32:53.104 11:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:32:53.104 11:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:53.104 11:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:32:53.104 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:32:53.104 11:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:53.104 11:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:53.362 [2024-11-02 11:44:53.517863] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:32:53.362 [2024-11-02 11:44:53.517929] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3954234 ] 00:32:53.362 [2024-11-02 11:44:53.588889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:53.362 [2024-11-02 11:44:53.638624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:53.621 11:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:53.621 11:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:32:53.621 11:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:53.621 11:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:32:53.621 11:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:53.621 11:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:53.621 11:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:53.621 11:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:32:53.621 11:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:53.621 11:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:53.621 11:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:53.621 11:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:32:53.621 11:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:32:53.621 11:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:55.003 [2024-11-02 11:44:54.968035] bdev_nvme.c:7291:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:55.003 [2024-11-02 11:44:54.968072] bdev_nvme.c:7377:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:55.003 [2024-11-02 11:44:54.968094] bdev_nvme.c:7254:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:55.003 [2024-11-02 11:44:55.095517] bdev_nvme.c:7220:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:32:55.003 [2024-11-02 11:44:55.197443] bdev_nvme.c:5582:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:32:55.003 [2024-11-02 11:44:55.198534] bdev_nvme.c:1963:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xa49310:1 started. 00:32:55.003 [2024-11-02 11:44:55.200302] bdev_nvme.c:8087:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:32:55.003 [2024-11-02 11:44:55.200363] bdev_nvme.c:8087:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:32:55.004 [2024-11-02 11:44:55.200396] bdev_nvme.c:8087:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:32:55.004 [2024-11-02 11:44:55.200420] bdev_nvme.c:7110:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:55.004 [2024-11-02 11:44:55.200458] bdev_nvme.c:7069:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:55.004 11:44:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:55.004 11:44:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:32:55.004 11:44:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:55.004 11:44:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:55.004 11:44:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:55.004 11:44:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:55.004 11:44:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:55.004 11:44:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:55.004 11:44:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:55.004 [2024-11-02 11:44:55.206289] bdev_nvme.c:1779:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xa49310 was disconnected and freed. delete nvme_qpair. 
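At this point the host-side app (hostpid 3954234) has attached the discovery controller and nvme0n1 exists. Stripped of the xtrace bookkeeping, the host-side bring-up traced above amounts to the following; rpc_cmd in these scripts forwards to scripts/rpc.py, so the assumed-equivalent direct calls are shown, with the flags copied verbatim from the trace:

  build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &    # host-side SPDK app
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
  scripts/rpc.py -s /tmp/host.sock framework_start_init
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
      -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
      --fast-io-fail-timeout-sec 1 --wait-for-attach
  scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs                              # lists nvme0n1 once attached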
00:32:55.004 11:44:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:55.004 11:44:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:32:55.004 11:44:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:32:55.004 11:44:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:32:55.004 11:44:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:32:55.004 11:44:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:55.004 11:44:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:55.004 11:44:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:55.004 11:44:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:55.004 11:44:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:55.004 11:44:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:55.004 11:44:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:55.004 11:44:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:55.004 11:44:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:55.004 11:44:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:56.384 11:44:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:56.384 11:44:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:56.384 11:44:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:56.384 11:44:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:56.384 11:44:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:56.384 11:44:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:56.384 11:44:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:56.384 11:44:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:56.384 11:44:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:56.384 11:44:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:57.323 11:44:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:57.323 11:44:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:57.323 11:44:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:57.323 11:44:57 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:57.323 11:44:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:57.323 11:44:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:57.323 11:44:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:57.323 11:44:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:57.323 11:44:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:57.324 11:44:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:58.262 11:44:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:58.262 11:44:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:58.262 11:44:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:58.262 11:44:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:58.262 11:44:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:58.263 11:44:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:58.263 11:44:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:58.263 11:44:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:58.263 11:44:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:58.263 11:44:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:59.203 11:44:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:59.203 11:44:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:59.203 11:44:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:59.203 11:44:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:59.203 11:44:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:59.203 11:44:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:59.203 11:44:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:59.203 11:44:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:59.203 11:44:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:59.203 11:44:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:00.139 11:45:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:00.139 11:45:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:00.139 11:45:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.139 11:45:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:00.139 11:45:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:00.139 11:45:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:00.139 11:45:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:00.398 11:45:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.398 11:45:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:00.398 11:45:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:00.398 [2024-11-02 11:45:00.641843] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:33:00.398 [2024-11-02 11:45:00.641915] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:00.398 [2024-11-02 11:45:00.641939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:00.398 [2024-11-02 11:45:00.641969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:00.398 [2024-11-02 11:45:00.641983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:00.398 [2024-11-02 11:45:00.641999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:00.398 [2024-11-02 11:45:00.642014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:00.398 [2024-11-02 11:45:00.642029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:00.398 [2024-11-02 11:45:00.642044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:00.398 [2024-11-02 11:45:00.642070] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:33:00.398 [2024-11-02 11:45:00.642086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:00.398 [2024-11-02 11:45:00.642101] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa25bc0 is same with the state(6) to be set 00:33:00.398 [2024-11-02 11:45:00.651864] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa25bc0 (9): Bad file descriptor 00:33:00.398 [2024-11-02 11:45:00.661912] bdev_nvme.c:2536:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:00.398 [2024-11-02 11:45:00.661939] bdev_nvme.c:2524:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
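The entries above are the wait_for_bdev '' loop from host/discovery_remove_ifc.sh: after the target address is removed and the interface is downed, the host's bdev list is polled once per second until nvme0n1 disappears. A minimal sketch of that polling pattern, assuming the rpc_cmd test wrapper and the /tmp/host.sock application socket used throughout this trace (the real helper presumably also enforces a timeout, which this sketch omits):

# Sketch of the get_bdev_list / wait_for_bdev pattern visible in the trace above.
# Assumes rpc_cmd wraps SPDK's scripts/rpc.py and that the host app listens on
# the -r /tmp/host.sock RPC socket, as in this run.
get_bdev_list() {
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

wait_for_bdev() {
    local expected=$1
    # Loop while the current bdev list differs from the expected value
    # ('' means "wait until no bdevs are left", nvme1n1 means "wait for it").
    while [[ "$(get_bdev_list)" != "$expected" ]]; do
        sleep 1
    done
}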
00:33:00.398 [2024-11-02 11:45:00.661951] bdev_nvme.c:2112:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:00.398 [2024-11-02 11:45:00.661962] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:00.398 [2024-11-02 11:45:00.662004] bdev_nvme.c:2508:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:01.335 11:45:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:01.335 11:45:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:01.335 11:45:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:01.335 11:45:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:01.335 11:45:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:01.335 11:45:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:01.335 11:45:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:01.335 [2024-11-02 11:45:01.695348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:33:01.335 [2024-11-02 11:45:01.695415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa25bc0 with addr=10.0.0.2, port=4420 00:33:01.335 [2024-11-02 11:45:01.695440] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa25bc0 is same with the state(6) to be set 00:33:01.335 [2024-11-02 11:45:01.695487] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa25bc0 (9): Bad file descriptor 00:33:01.335 [2024-11-02 11:45:01.695957] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:33:01.335 [2024-11-02 11:45:01.695999] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:01.335 [2024-11-02 11:45:01.696015] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:01.335 [2024-11-02 11:45:01.696031] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:01.335 [2024-11-02 11:45:01.696045] bdev_nvme.c:2498:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:01.336 [2024-11-02 11:45:01.696056] bdev_nvme.c:2305:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:01.336 [2024-11-02 11:45:01.696081] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:01.336 [2024-11-02 11:45:01.696099] bdev_nvme.c:2112:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
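While the path to 10.0.0.2:4420 is gone, bdev_nvme keeps cycling the controller through the disconnect/reset/reconnect sequence logged above. The test script itself only watches the bdev list, but the controller state can also be inspected directly over the same RPC socket; a hedged example using bdev_nvme_get_controllers, a standard SPDK RPC that this script does not call:

# Optional check while the reconnect loop above is running; not part of
# discovery_remove_ifc.sh. Assumes the same /tmp/host.sock RPC socket.
rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'
# Equivalent with plain rpc.py from the SPDK tree:
#   scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers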
00:33:01.336 [2024-11-02 11:45:01.696108] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:01.336 11:45:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:01.336 11:45:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:01.336 11:45:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:02.718 [2024-11-02 11:45:02.698603] bdev_nvme.c:2508:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:02.718 [2024-11-02 11:45:02.698656] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:02.718 [2024-11-02 11:45:02.698686] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:02.718 [2024-11-02 11:45:02.698701] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:02.718 [2024-11-02 11:45:02.698716] nvme_ctrlr.c:1071:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:33:02.718 [2024-11-02 11:45:02.698731] bdev_nvme.c:2498:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:02.718 [2024-11-02 11:45:02.698744] bdev_nvme.c:2305:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:02.718 [2024-11-02 11:45:02.698770] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:02.718 [2024-11-02 11:45:02.698817] bdev_nvme.c:7042:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:33:02.718 [2024-11-02 11:45:02.698866] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:02.718 [2024-11-02 11:45:02.698890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:02.718 [2024-11-02 11:45:02.698914] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:02.718 [2024-11-02 11:45:02.698930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:02.718 [2024-11-02 11:45:02.698947] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:02.718 [2024-11-02 11:45:02.698964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:02.718 [2024-11-02 11:45:02.698980] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:02.718 [2024-11-02 11:45:02.698996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:02.718 [2024-11-02 11:45:02.699013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:33:02.718 [2024-11-02 11:45:02.699028] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:02.718 [2024-11-02 11:45:02.699043] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:33:02.718 [2024-11-02 11:45:02.699100] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa152d0 (9): Bad file descriptor 00:33:02.718 [2024-11-02 11:45:02.700087] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:33:02.718 [2024-11-02 11:45:02.700115] nvme_ctrlr.c:1190:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:33:02.718 11:45:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:02.718 11:45:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:02.718 11:45:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:02.718 11:45:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.718 11:45:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:02.718 11:45:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:02.718 11:45:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:02.718 11:45:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.718 11:45:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:33:02.718 11:45:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:02.718 11:45:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:02.718 11:45:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:33:02.718 11:45:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:02.718 11:45:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:02.718 11:45:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:02.718 11:45:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.718 11:45:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:02.718 11:45:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:02.718 11:45:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:02.718 11:45:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.718 11:45:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:02.718 11:45:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:03.654 11:45:03 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:03.654 11:45:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:03.654 11:45:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:03.654 11:45:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:03.654 11:45:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:03.654 11:45:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:03.654 11:45:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:03.654 11:45:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:03.654 11:45:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:03.654 11:45:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:04.591 [2024-11-02 11:45:04.755452] bdev_nvme.c:7291:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:04.591 [2024-11-02 11:45:04.755482] bdev_nvme.c:7377:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:04.591 [2024-11-02 11:45:04.755509] bdev_nvme.c:7254:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:04.591 [2024-11-02 11:45:04.842802] bdev_nvme.c:7220:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:33:04.591 11:45:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:04.591 11:45:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:04.591 11:45:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:04.591 11:45:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:04.591 11:45:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:04.591 11:45:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:04.591 11:45:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:04.591 11:45:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:04.591 11:45:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:04.591 11:45:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:04.852 [2024-11-02 11:45:05.023026] bdev_nvme.c:5582:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:33:04.852 [2024-11-02 11:45:05.023887] bdev_nvme.c:1963:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0xa214e0:1 started. 
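Restoring the address and link (host/discovery_remove_ifc.sh@82-83 above) is what lets the discovery service at 10.0.0.2:8009 reattach and create the replacement nvme1 controller seen here. Condensed from the commands visible in this trace, the interface flap the test performs looks like the following (the namespace and interface names are the ones this run created):

# Take the target path away, then give it back, as done at
# discovery_remove_ifc.sh@75-76 and @82-83 in the trace above.
ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
# ... wait_for_bdev '' : poll until nvme0n1 is gone ...
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
# ... wait_for_bdev nvme1n1 : poll until the rediscovered bdev appears ...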
00:33:04.852 [2024-11-02 11:45:05.025365] bdev_nvme.c:8087:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:33:04.852 [2024-11-02 11:45:05.025407] bdev_nvme.c:8087:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:33:04.852 [2024-11-02 11:45:05.025436] bdev_nvme.c:8087:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:33:04.852 [2024-11-02 11:45:05.025458] bdev_nvme.c:7110:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:33:04.852 [2024-11-02 11:45:05.025470] bdev_nvme.c:7069:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:04.852 [2024-11-02 11:45:05.032559] bdev_nvme.c:1779:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0xa214e0 was disconnected and freed. delete nvme_qpair. 00:33:05.787 11:45:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:05.787 11:45:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:05.787 11:45:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:05.787 11:45:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:05.787 11:45:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:05.787 11:45:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:05.787 11:45:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:05.787 11:45:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:05.787 11:45:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:33:05.787 11:45:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:33:05.787 11:45:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3954234 00:33:05.787 11:45:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 3954234 ']' 00:33:05.787 11:45:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 3954234 00:33:05.787 11:45:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:33:05.787 11:45:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:05.787 11:45:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3954234 00:33:05.787 11:45:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:33:05.787 11:45:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:33:05.787 11:45:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3954234' 00:33:05.787 killing process with pid 3954234 00:33:05.787 11:45:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 3954234 00:33:05.787 11:45:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 3954234 00:33:05.787 11:45:06 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:33:05.787 11:45:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:05.787 11:45:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:33:05.787 11:45:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:05.787 11:45:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:33:05.787 11:45:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:05.787 11:45:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:05.787 rmmod nvme_tcp 00:33:06.046 rmmod nvme_fabrics 00:33:06.046 rmmod nvme_keyring 00:33:06.046 11:45:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:06.046 11:45:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:33:06.046 11:45:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:33:06.046 11:45:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 3954214 ']' 00:33:06.046 11:45:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 3954214 00:33:06.046 11:45:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 3954214 ']' 00:33:06.046 11:45:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 3954214 00:33:06.046 11:45:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:33:06.046 11:45:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:06.046 11:45:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3954214 00:33:06.046 11:45:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:33:06.046 11:45:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:33:06.046 11:45:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3954214' 00:33:06.046 killing process with pid 3954214 00:33:06.046 11:45:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 3954214 00:33:06.046 11:45:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 3954214 00:33:06.306 11:45:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:06.306 11:45:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:06.306 11:45:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:06.307 11:45:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:33:06.307 11:45:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:33:06.307 11:45:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:06.307 11:45:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:33:06.307 11:45:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # 
[[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:06.307 11:45:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:06.307 11:45:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:06.307 11:45:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:06.307 11:45:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:08.213 11:45:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:08.213 00:33:08.213 real 0m17.726s 00:33:08.213 user 0m25.922s 00:33:08.213 sys 0m2.832s 00:33:08.213 11:45:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:08.213 11:45:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:08.213 ************************************ 00:33:08.213 END TEST nvmf_discovery_remove_ifc 00:33:08.213 ************************************ 00:33:08.213 11:45:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:33:08.213 11:45:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:33:08.213 11:45:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:08.213 11:45:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.213 ************************************ 00:33:08.213 START TEST nvmf_identify_kernel_target 00:33:08.213 ************************************ 00:33:08.213 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:33:08.213 * Looking for test storage... 
00:33:08.213 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:08.213 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:08.213 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lcov --version 00:33:08.213 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:08.472 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:08.472 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:08.472 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:08.472 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:08.472 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:33:08.472 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:33:08.472 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:33:08.472 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:33:08.472 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:33:08.472 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:33:08.472 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:33:08.472 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:08.472 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:33:08.472 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:33:08.472 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:08.472 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:08.472 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:33:08.472 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:33:08.472 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:08.472 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:33:08.472 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:33:08.472 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:33:08.472 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:33:08.472 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:08.472 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:33:08.472 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:33:08.472 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:08.472 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:08.472 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:33:08.472 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:08.472 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:08.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:08.472 --rc genhtml_branch_coverage=1 00:33:08.472 --rc genhtml_function_coverage=1 00:33:08.472 --rc genhtml_legend=1 00:33:08.472 --rc geninfo_all_blocks=1 00:33:08.472 --rc geninfo_unexecuted_blocks=1 00:33:08.472 00:33:08.472 ' 00:33:08.472 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:08.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:08.472 --rc genhtml_branch_coverage=1 00:33:08.472 --rc genhtml_function_coverage=1 00:33:08.472 --rc genhtml_legend=1 00:33:08.472 --rc geninfo_all_blocks=1 00:33:08.472 --rc geninfo_unexecuted_blocks=1 00:33:08.472 00:33:08.472 ' 00:33:08.472 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:08.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:08.472 --rc genhtml_branch_coverage=1 00:33:08.472 --rc genhtml_function_coverage=1 00:33:08.472 --rc genhtml_legend=1 00:33:08.472 --rc geninfo_all_blocks=1 00:33:08.472 --rc geninfo_unexecuted_blocks=1 00:33:08.472 00:33:08.472 ' 00:33:08.473 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:08.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:08.473 --rc genhtml_branch_coverage=1 00:33:08.473 --rc genhtml_function_coverage=1 00:33:08.473 --rc genhtml_legend=1 00:33:08.473 --rc geninfo_all_blocks=1 00:33:08.473 --rc geninfo_unexecuted_blocks=1 00:33:08.473 00:33:08.473 ' 00:33:08.473 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:08.473 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:33:08.473 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:08.473 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:08.473 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:08.473 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:08.473 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:08.473 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:08.473 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:08.473 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:08.473 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:08.473 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:08.473 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:08.473 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:08.473 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:08.473 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:08.473 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:08.473 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:08.473 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:08.473 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:33:08.473 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:08.473 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:08.473 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:08.473 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.473 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.473 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.473 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:33:08.473 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.473 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:33:08.473 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:08.473 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:08.473 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:08.473 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:08.473 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:08.473 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:33:08.473 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:08.473 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:08.473 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:08.473 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:08.473 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:33:08.473 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:08.473 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:08.473 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:08.473 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:08.473 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:08.473 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:08.473 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:08.473 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:08.473 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:08.473 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:08.473 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:33:08.473 11:45:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:33:10.374 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:10.374 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:33:10.374 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:10.374 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:10.374 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:10.374 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:10.374 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:10.374 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:33:10.374 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:10.374 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:33:10.374 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:33:10.374 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:33:10.374 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:33:10.374 11:45:10 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:33:10.374 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:33:10.374 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:10.374 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:10.374 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:10.374 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:10.374 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:10.374 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:10.374 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:10.374 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:10.374 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:10.374 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:10.374 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:10.374 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:10.374 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:10.374 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:10.374 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:10.374 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:10.374 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:10.374 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:10.374 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:10.374 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:10.374 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:10.374 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:10.374 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:10.374 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:10.374 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:10.374 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:10.374 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:10.374 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:10.374 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:10.374 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:10.374 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:10.374 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:10.374 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:10.374 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:10.374 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:10.374 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:10.374 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:10.374 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:10.374 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:10.374 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:10.374 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:10.374 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:10.374 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:10.374 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:10.374 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:10.374 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:10.374 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:10.374 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:10.374 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:10.374 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:10.375 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:10.375 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:10.375 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:10.375 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:10.375 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:10.375 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:10.375 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:33:10.375 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:10.375 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:33:10.375 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:10.375 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:10.375 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:10.375 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:10.375 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:10.375 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:10.375 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:10.375 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:10.375 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:10.375 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:10.375 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:10.375 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:10.375 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:10.375 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:10.375 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:10.375 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:10.375 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:10.375 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:10.635 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:10.635 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:10.635 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:10.635 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:10.635 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:10.635 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:10.635 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:10.635 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:10.635 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:10.635 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.355 ms 00:33:10.635 00:33:10.635 --- 10.0.0.2 ping statistics --- 00:33:10.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:10.635 rtt min/avg/max/mdev = 0.355/0.355/0.355/0.000 ms 00:33:10.635 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:10.635 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:10.635 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:33:10.635 00:33:10.635 --- 10.0.0.1 ping statistics --- 00:33:10.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:10.635 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:33:10.635 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:10.635 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:33:10.635 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:10.635 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:10.635 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:10.635 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:10.635 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:10.635 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:10.635 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:10.635 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:33:10.635 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:33:10.635 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:33:10.635 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:10.635 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:10.635 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:10.635 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:10.635 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:10.635 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:10.635 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:10.635 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:10.635 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:10.635 11:45:10 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:33:10.635 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:33:10.635 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:33:10.635 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:33:10.635 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:10.635 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:10.635 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:10.635 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:33:10.636 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:33:10.636 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:33:10.636 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:10.636 11:45:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:11.575 Waiting for block devices as requested 00:33:11.832 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:33:11.832 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:12.090 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:12.090 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:12.090 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:12.090 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:12.347 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:12.347 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:12.347 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:12.347 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:12.606 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:12.606 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:12.606 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:12.606 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:12.864 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:12.865 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:12.865 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:12.865 11:45:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:33:12.865 11:45:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:12.865 11:45:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:33:12.865 11:45:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:33:13.125 11:45:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:33:13.125 11:45:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 
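configure_kernel_target builds an in-kernel nvmet target under /sys/kernel/config/nvmet using the subsystem, namespace and port paths computed above; the bare echo commands in the following trace entries write nvmet configfs attributes whose target filenames are not visible in the xtrace output. A sketch of the equivalent manual setup, where the redirection targets are the stock nvmet attribute names (an assumption, since the trace only shows the echoed values):

# Rough equivalent of configure_kernel_target as traced below.
modprobe nvmet            # nvmet_tcp may also need loading if not auto-loaded (assumption)
sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
mkdir -p "$sub/namespaces/1" /sys/kernel/config/nvmet/ports/1
# (the trace also echoes a SPDK-prefixed model/serial string for the subsystem;
#  its target attribute is not shown, so it is omitted here)
echo 1             > "$sub/attr_allow_any_host"
echo /dev/nvme0n1  > "$sub/namespaces/1/device_path"
echo 1             > "$sub/namespaces/1/enable"
echo 10.0.0.1      > /sys/kernel/config/nvmet/ports/1/addr_traddr
echo tcp           > /sys/kernel/config/nvmet/ports/1/addr_trtype
echo 4420          > /sys/kernel/config/nvmet/ports/1/addr_trsvcid
echo ipv4          > /sys/kernel/config/nvmet/ports/1/addr_adrfam
ln -s "$sub" /sys/kernel/config/nvmet/ports/1/subsystems/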
00:33:13.125 11:45:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:33:13.125 11:45:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:33:13.125 11:45:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:33:13.125 No valid GPT data, bailing 00:33:13.125 11:45:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:33:13.125 11:45:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:33:13.125 11:45:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:33:13.125 11:45:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:33:13.125 11:45:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:33:13.125 11:45:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:13.125 11:45:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:13.125 11:45:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:33:13.125 11:45:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:33:13.125 11:45:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:33:13.125 11:45:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:33:13.125 11:45:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:33:13.125 11:45:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:33:13.125 11:45:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:33:13.125 11:45:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:33:13.125 11:45:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:33:13.125 11:45:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:33:13.125 11:45:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:33:13.125 00:33:13.125 Discovery Log Number of Records 2, Generation counter 2 00:33:13.125 =====Discovery Log Entry 0====== 00:33:13.125 trtype: tcp 00:33:13.125 adrfam: ipv4 00:33:13.125 subtype: current discovery subsystem 00:33:13.125 treq: not specified, sq flow control disable supported 00:33:13.125 portid: 1 00:33:13.125 trsvcid: 4420 00:33:13.125 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:13.125 traddr: 10.0.0.1 00:33:13.125 eflags: none 00:33:13.125 sectype: none 00:33:13.125 =====Discovery Log Entry 1====== 00:33:13.125 trtype: tcp 00:33:13.125 adrfam: ipv4 00:33:13.125 subtype: nvme subsystem 00:33:13.125 treq: not specified, sq flow control disable 
supported 00:33:13.125 portid: 1 00:33:13.125 trsvcid: 4420 00:33:13.125 subnqn: nqn.2016-06.io.spdk:testnqn 00:33:13.125 traddr: 10.0.0.1 00:33:13.125 eflags: none 00:33:13.125 sectype: none 00:33:13.125 11:45:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:33:13.125 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:33:13.387 ===================================================== 00:33:13.388 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:33:13.388 ===================================================== 00:33:13.388 Controller Capabilities/Features 00:33:13.388 ================================ 00:33:13.388 Vendor ID: 0000 00:33:13.388 Subsystem Vendor ID: 0000 00:33:13.388 Serial Number: bae681387bf8757891c3 00:33:13.388 Model Number: Linux 00:33:13.388 Firmware Version: 6.8.9-20 00:33:13.388 Recommended Arb Burst: 0 00:33:13.388 IEEE OUI Identifier: 00 00 00 00:33:13.388 Multi-path I/O 00:33:13.388 May have multiple subsystem ports: No 00:33:13.388 May have multiple controllers: No 00:33:13.388 Associated with SR-IOV VF: No 00:33:13.388 Max Data Transfer Size: Unlimited 00:33:13.388 Max Number of Namespaces: 0 00:33:13.388 Max Number of I/O Queues: 1024 00:33:13.388 NVMe Specification Version (VS): 1.3 00:33:13.388 NVMe Specification Version (Identify): 1.3 00:33:13.388 Maximum Queue Entries: 1024 00:33:13.388 Contiguous Queues Required: No 00:33:13.388 Arbitration Mechanisms Supported 00:33:13.388 Weighted Round Robin: Not Supported 00:33:13.388 Vendor Specific: Not Supported 00:33:13.388 Reset Timeout: 7500 ms 00:33:13.388 Doorbell Stride: 4 bytes 00:33:13.388 NVM Subsystem Reset: Not Supported 00:33:13.388 Command Sets Supported 00:33:13.388 NVM Command Set: Supported 00:33:13.388 Boot Partition: Not Supported 00:33:13.388 Memory Page Size Minimum: 4096 bytes 00:33:13.388 Memory Page Size Maximum: 4096 bytes 00:33:13.388 Persistent Memory Region: Not Supported 00:33:13.388 Optional Asynchronous Events Supported 00:33:13.388 Namespace Attribute Notices: Not Supported 00:33:13.388 Firmware Activation Notices: Not Supported 00:33:13.388 ANA Change Notices: Not Supported 00:33:13.388 PLE Aggregate Log Change Notices: Not Supported 00:33:13.388 LBA Status Info Alert Notices: Not Supported 00:33:13.388 EGE Aggregate Log Change Notices: Not Supported 00:33:13.388 Normal NVM Subsystem Shutdown event: Not Supported 00:33:13.388 Zone Descriptor Change Notices: Not Supported 00:33:13.388 Discovery Log Change Notices: Supported 00:33:13.388 Controller Attributes 00:33:13.388 128-bit Host Identifier: Not Supported 00:33:13.388 Non-Operational Permissive Mode: Not Supported 00:33:13.388 NVM Sets: Not Supported 00:33:13.388 Read Recovery Levels: Not Supported 00:33:13.388 Endurance Groups: Not Supported 00:33:13.388 Predictable Latency Mode: Not Supported 00:33:13.388 Traffic Based Keep ALive: Not Supported 00:33:13.388 Namespace Granularity: Not Supported 00:33:13.388 SQ Associations: Not Supported 00:33:13.388 UUID List: Not Supported 00:33:13.388 Multi-Domain Subsystem: Not Supported 00:33:13.388 Fixed Capacity Management: Not Supported 00:33:13.388 Variable Capacity Management: Not Supported 00:33:13.388 Delete Endurance Group: Not Supported 00:33:13.388 Delete NVM Set: Not Supported 00:33:13.388 Extended LBA Formats Supported: Not Supported 00:33:13.388 Flexible Data Placement 
Supported: Not Supported 00:33:13.388 00:33:13.388 Controller Memory Buffer Support 00:33:13.388 ================================ 00:33:13.388 Supported: No 00:33:13.388 00:33:13.388 Persistent Memory Region Support 00:33:13.388 ================================ 00:33:13.388 Supported: No 00:33:13.388 00:33:13.388 Admin Command Set Attributes 00:33:13.388 ============================ 00:33:13.388 Security Send/Receive: Not Supported 00:33:13.388 Format NVM: Not Supported 00:33:13.388 Firmware Activate/Download: Not Supported 00:33:13.388 Namespace Management: Not Supported 00:33:13.388 Device Self-Test: Not Supported 00:33:13.388 Directives: Not Supported 00:33:13.388 NVMe-MI: Not Supported 00:33:13.388 Virtualization Management: Not Supported 00:33:13.388 Doorbell Buffer Config: Not Supported 00:33:13.388 Get LBA Status Capability: Not Supported 00:33:13.388 Command & Feature Lockdown Capability: Not Supported 00:33:13.388 Abort Command Limit: 1 00:33:13.388 Async Event Request Limit: 1 00:33:13.388 Number of Firmware Slots: N/A 00:33:13.388 Firmware Slot 1 Read-Only: N/A 00:33:13.388 Firmware Activation Without Reset: N/A 00:33:13.388 Multiple Update Detection Support: N/A 00:33:13.388 Firmware Update Granularity: No Information Provided 00:33:13.388 Per-Namespace SMART Log: No 00:33:13.388 Asymmetric Namespace Access Log Page: Not Supported 00:33:13.388 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:33:13.388 Command Effects Log Page: Not Supported 00:33:13.388 Get Log Page Extended Data: Supported 00:33:13.388 Telemetry Log Pages: Not Supported 00:33:13.388 Persistent Event Log Pages: Not Supported 00:33:13.388 Supported Log Pages Log Page: May Support 00:33:13.388 Commands Supported & Effects Log Page: Not Supported 00:33:13.388 Feature Identifiers & Effects Log Page:May Support 00:33:13.388 NVMe-MI Commands & Effects Log Page: May Support 00:33:13.388 Data Area 4 for Telemetry Log: Not Supported 00:33:13.388 Error Log Page Entries Supported: 1 00:33:13.388 Keep Alive: Not Supported 00:33:13.388 00:33:13.388 NVM Command Set Attributes 00:33:13.388 ========================== 00:33:13.388 Submission Queue Entry Size 00:33:13.388 Max: 1 00:33:13.388 Min: 1 00:33:13.388 Completion Queue Entry Size 00:33:13.388 Max: 1 00:33:13.388 Min: 1 00:33:13.388 Number of Namespaces: 0 00:33:13.388 Compare Command: Not Supported 00:33:13.388 Write Uncorrectable Command: Not Supported 00:33:13.388 Dataset Management Command: Not Supported 00:33:13.388 Write Zeroes Command: Not Supported 00:33:13.388 Set Features Save Field: Not Supported 00:33:13.388 Reservations: Not Supported 00:33:13.388 Timestamp: Not Supported 00:33:13.388 Copy: Not Supported 00:33:13.388 Volatile Write Cache: Not Present 00:33:13.388 Atomic Write Unit (Normal): 1 00:33:13.388 Atomic Write Unit (PFail): 1 00:33:13.388 Atomic Compare & Write Unit: 1 00:33:13.388 Fused Compare & Write: Not Supported 00:33:13.388 Scatter-Gather List 00:33:13.388 SGL Command Set: Supported 00:33:13.388 SGL Keyed: Not Supported 00:33:13.388 SGL Bit Bucket Descriptor: Not Supported 00:33:13.388 SGL Metadata Pointer: Not Supported 00:33:13.388 Oversized SGL: Not Supported 00:33:13.388 SGL Metadata Address: Not Supported 00:33:13.388 SGL Offset: Supported 00:33:13.388 Transport SGL Data Block: Not Supported 00:33:13.388 Replay Protected Memory Block: Not Supported 00:33:13.388 00:33:13.388 Firmware Slot Information 00:33:13.388 ========================= 00:33:13.388 Active slot: 0 00:33:13.388 00:33:13.388 00:33:13.388 Error Log 00:33:13.388 
========= 00:33:13.388 00:33:13.388 Active Namespaces 00:33:13.388 ================= 00:33:13.388 Discovery Log Page 00:33:13.388 ================== 00:33:13.388 Generation Counter: 2 00:33:13.388 Number of Records: 2 00:33:13.388 Record Format: 0 00:33:13.388 00:33:13.388 Discovery Log Entry 0 00:33:13.388 ---------------------- 00:33:13.388 Transport Type: 3 (TCP) 00:33:13.388 Address Family: 1 (IPv4) 00:33:13.388 Subsystem Type: 3 (Current Discovery Subsystem) 00:33:13.388 Entry Flags: 00:33:13.388 Duplicate Returned Information: 0 00:33:13.388 Explicit Persistent Connection Support for Discovery: 0 00:33:13.388 Transport Requirements: 00:33:13.388 Secure Channel: Not Specified 00:33:13.388 Port ID: 1 (0x0001) 00:33:13.388 Controller ID: 65535 (0xffff) 00:33:13.388 Admin Max SQ Size: 32 00:33:13.388 Transport Service Identifier: 4420 00:33:13.388 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:33:13.388 Transport Address: 10.0.0.1 00:33:13.388 Discovery Log Entry 1 00:33:13.388 ---------------------- 00:33:13.388 Transport Type: 3 (TCP) 00:33:13.388 Address Family: 1 (IPv4) 00:33:13.388 Subsystem Type: 2 (NVM Subsystem) 00:33:13.388 Entry Flags: 00:33:13.388 Duplicate Returned Information: 0 00:33:13.388 Explicit Persistent Connection Support for Discovery: 0 00:33:13.388 Transport Requirements: 00:33:13.388 Secure Channel: Not Specified 00:33:13.388 Port ID: 1 (0x0001) 00:33:13.388 Controller ID: 65535 (0xffff) 00:33:13.388 Admin Max SQ Size: 32 00:33:13.388 Transport Service Identifier: 4420 00:33:13.388 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:33:13.388 Transport Address: 10.0.0.1 00:33:13.388 11:45:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:13.388 get_feature(0x01) failed 00:33:13.388 get_feature(0x02) failed 00:33:13.388 get_feature(0x04) failed 00:33:13.388 ===================================================== 00:33:13.388 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:13.388 ===================================================== 00:33:13.389 Controller Capabilities/Features 00:33:13.389 ================================ 00:33:13.389 Vendor ID: 0000 00:33:13.389 Subsystem Vendor ID: 0000 00:33:13.389 Serial Number: 67729dcba8541fce0ed2 00:33:13.389 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:33:13.389 Firmware Version: 6.8.9-20 00:33:13.389 Recommended Arb Burst: 6 00:33:13.389 IEEE OUI Identifier: 00 00 00 00:33:13.389 Multi-path I/O 00:33:13.389 May have multiple subsystem ports: Yes 00:33:13.389 May have multiple controllers: Yes 00:33:13.389 Associated with SR-IOV VF: No 00:33:13.389 Max Data Transfer Size: Unlimited 00:33:13.389 Max Number of Namespaces: 1024 00:33:13.389 Max Number of I/O Queues: 128 00:33:13.389 NVMe Specification Version (VS): 1.3 00:33:13.389 NVMe Specification Version (Identify): 1.3 00:33:13.389 Maximum Queue Entries: 1024 00:33:13.389 Contiguous Queues Required: No 00:33:13.389 Arbitration Mechanisms Supported 00:33:13.389 Weighted Round Robin: Not Supported 00:33:13.389 Vendor Specific: Not Supported 00:33:13.389 Reset Timeout: 7500 ms 00:33:13.389 Doorbell Stride: 4 bytes 00:33:13.389 NVM Subsystem Reset: Not Supported 00:33:13.389 Command Sets Supported 00:33:13.389 NVM Command Set: Supported 00:33:13.389 Boot Partition: Not Supported 00:33:13.389 
Memory Page Size Minimum: 4096 bytes 00:33:13.389 Memory Page Size Maximum: 4096 bytes 00:33:13.389 Persistent Memory Region: Not Supported 00:33:13.389 Optional Asynchronous Events Supported 00:33:13.389 Namespace Attribute Notices: Supported 00:33:13.389 Firmware Activation Notices: Not Supported 00:33:13.389 ANA Change Notices: Supported 00:33:13.389 PLE Aggregate Log Change Notices: Not Supported 00:33:13.389 LBA Status Info Alert Notices: Not Supported 00:33:13.389 EGE Aggregate Log Change Notices: Not Supported 00:33:13.389 Normal NVM Subsystem Shutdown event: Not Supported 00:33:13.389 Zone Descriptor Change Notices: Not Supported 00:33:13.389 Discovery Log Change Notices: Not Supported 00:33:13.389 Controller Attributes 00:33:13.389 128-bit Host Identifier: Supported 00:33:13.389 Non-Operational Permissive Mode: Not Supported 00:33:13.389 NVM Sets: Not Supported 00:33:13.389 Read Recovery Levels: Not Supported 00:33:13.389 Endurance Groups: Not Supported 00:33:13.389 Predictable Latency Mode: Not Supported 00:33:13.389 Traffic Based Keep ALive: Supported 00:33:13.389 Namespace Granularity: Not Supported 00:33:13.389 SQ Associations: Not Supported 00:33:13.389 UUID List: Not Supported 00:33:13.389 Multi-Domain Subsystem: Not Supported 00:33:13.389 Fixed Capacity Management: Not Supported 00:33:13.389 Variable Capacity Management: Not Supported 00:33:13.389 Delete Endurance Group: Not Supported 00:33:13.389 Delete NVM Set: Not Supported 00:33:13.389 Extended LBA Formats Supported: Not Supported 00:33:13.389 Flexible Data Placement Supported: Not Supported 00:33:13.389 00:33:13.389 Controller Memory Buffer Support 00:33:13.389 ================================ 00:33:13.389 Supported: No 00:33:13.389 00:33:13.389 Persistent Memory Region Support 00:33:13.389 ================================ 00:33:13.389 Supported: No 00:33:13.389 00:33:13.389 Admin Command Set Attributes 00:33:13.389 ============================ 00:33:13.389 Security Send/Receive: Not Supported 00:33:13.389 Format NVM: Not Supported 00:33:13.389 Firmware Activate/Download: Not Supported 00:33:13.389 Namespace Management: Not Supported 00:33:13.389 Device Self-Test: Not Supported 00:33:13.389 Directives: Not Supported 00:33:13.389 NVMe-MI: Not Supported 00:33:13.389 Virtualization Management: Not Supported 00:33:13.389 Doorbell Buffer Config: Not Supported 00:33:13.389 Get LBA Status Capability: Not Supported 00:33:13.389 Command & Feature Lockdown Capability: Not Supported 00:33:13.389 Abort Command Limit: 4 00:33:13.389 Async Event Request Limit: 4 00:33:13.389 Number of Firmware Slots: N/A 00:33:13.389 Firmware Slot 1 Read-Only: N/A 00:33:13.389 Firmware Activation Without Reset: N/A 00:33:13.389 Multiple Update Detection Support: N/A 00:33:13.389 Firmware Update Granularity: No Information Provided 00:33:13.389 Per-Namespace SMART Log: Yes 00:33:13.389 Asymmetric Namespace Access Log Page: Supported 00:33:13.389 ANA Transition Time : 10 sec 00:33:13.389 00:33:13.389 Asymmetric Namespace Access Capabilities 00:33:13.389 ANA Optimized State : Supported 00:33:13.389 ANA Non-Optimized State : Supported 00:33:13.389 ANA Inaccessible State : Supported 00:33:13.389 ANA Persistent Loss State : Supported 00:33:13.389 ANA Change State : Supported 00:33:13.389 ANAGRPID is not changed : No 00:33:13.389 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:33:13.389 00:33:13.389 ANA Group Identifier Maximum : 128 00:33:13.389 Number of ANA Group Identifiers : 128 00:33:13.389 Max Number of Allowed Namespaces : 1024 00:33:13.389 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:33:13.389 Command Effects Log Page: Supported 00:33:13.389 Get Log Page Extended Data: Supported 00:33:13.389 Telemetry Log Pages: Not Supported 00:33:13.389 Persistent Event Log Pages: Not Supported 00:33:13.389 Supported Log Pages Log Page: May Support 00:33:13.389 Commands Supported & Effects Log Page: Not Supported 00:33:13.389 Feature Identifiers & Effects Log Page:May Support 00:33:13.389 NVMe-MI Commands & Effects Log Page: May Support 00:33:13.389 Data Area 4 for Telemetry Log: Not Supported 00:33:13.389 Error Log Page Entries Supported: 128 00:33:13.389 Keep Alive: Supported 00:33:13.389 Keep Alive Granularity: 1000 ms 00:33:13.389 00:33:13.389 NVM Command Set Attributes 00:33:13.389 ========================== 00:33:13.389 Submission Queue Entry Size 00:33:13.389 Max: 64 00:33:13.389 Min: 64 00:33:13.389 Completion Queue Entry Size 00:33:13.389 Max: 16 00:33:13.389 Min: 16 00:33:13.389 Number of Namespaces: 1024 00:33:13.389 Compare Command: Not Supported 00:33:13.389 Write Uncorrectable Command: Not Supported 00:33:13.389 Dataset Management Command: Supported 00:33:13.389 Write Zeroes Command: Supported 00:33:13.389 Set Features Save Field: Not Supported 00:33:13.389 Reservations: Not Supported 00:33:13.389 Timestamp: Not Supported 00:33:13.389 Copy: Not Supported 00:33:13.389 Volatile Write Cache: Present 00:33:13.389 Atomic Write Unit (Normal): 1 00:33:13.389 Atomic Write Unit (PFail): 1 00:33:13.389 Atomic Compare & Write Unit: 1 00:33:13.389 Fused Compare & Write: Not Supported 00:33:13.389 Scatter-Gather List 00:33:13.389 SGL Command Set: Supported 00:33:13.389 SGL Keyed: Not Supported 00:33:13.389 SGL Bit Bucket Descriptor: Not Supported 00:33:13.389 SGL Metadata Pointer: Not Supported 00:33:13.389 Oversized SGL: Not Supported 00:33:13.389 SGL Metadata Address: Not Supported 00:33:13.389 SGL Offset: Supported 00:33:13.389 Transport SGL Data Block: Not Supported 00:33:13.389 Replay Protected Memory Block: Not Supported 00:33:13.389 00:33:13.389 Firmware Slot Information 00:33:13.389 ========================= 00:33:13.389 Active slot: 0 00:33:13.389 00:33:13.389 Asymmetric Namespace Access 00:33:13.389 =========================== 00:33:13.389 Change Count : 0 00:33:13.389 Number of ANA Group Descriptors : 1 00:33:13.389 ANA Group Descriptor : 0 00:33:13.389 ANA Group ID : 1 00:33:13.389 Number of NSID Values : 1 00:33:13.389 Change Count : 0 00:33:13.389 ANA State : 1 00:33:13.389 Namespace Identifier : 1 00:33:13.389 00:33:13.389 Commands Supported and Effects 00:33:13.389 ============================== 00:33:13.389 Admin Commands 00:33:13.389 -------------- 00:33:13.389 Get Log Page (02h): Supported 00:33:13.389 Identify (06h): Supported 00:33:13.389 Abort (08h): Supported 00:33:13.389 Set Features (09h): Supported 00:33:13.389 Get Features (0Ah): Supported 00:33:13.389 Asynchronous Event Request (0Ch): Supported 00:33:13.389 Keep Alive (18h): Supported 00:33:13.389 I/O Commands 00:33:13.389 ------------ 00:33:13.389 Flush (00h): Supported 00:33:13.389 Write (01h): Supported LBA-Change 00:33:13.389 Read (02h): Supported 00:33:13.389 Write Zeroes (08h): Supported LBA-Change 00:33:13.389 Dataset Management (09h): Supported 00:33:13.389 00:33:13.389 Error Log 00:33:13.389 ========= 00:33:13.389 Entry: 0 00:33:13.389 Error Count: 0x3 00:33:13.389 Submission Queue Id: 0x0 00:33:13.389 Command Id: 0x5 00:33:13.389 Phase Bit: 0 00:33:13.389 Status Code: 0x2 00:33:13.389 Status Code Type: 0x0 00:33:13.389 Do Not Retry: 1 00:33:13.389 
Error Location: 0x28 00:33:13.389 LBA: 0x0 00:33:13.389 Namespace: 0x0 00:33:13.389 Vendor Log Page: 0x0 00:33:13.389 ----------- 00:33:13.389 Entry: 1 00:33:13.389 Error Count: 0x2 00:33:13.389 Submission Queue Id: 0x0 00:33:13.389 Command Id: 0x5 00:33:13.390 Phase Bit: 0 00:33:13.390 Status Code: 0x2 00:33:13.390 Status Code Type: 0x0 00:33:13.390 Do Not Retry: 1 00:33:13.390 Error Location: 0x28 00:33:13.390 LBA: 0x0 00:33:13.390 Namespace: 0x0 00:33:13.390 Vendor Log Page: 0x0 00:33:13.390 ----------- 00:33:13.390 Entry: 2 00:33:13.390 Error Count: 0x1 00:33:13.390 Submission Queue Id: 0x0 00:33:13.390 Command Id: 0x4 00:33:13.390 Phase Bit: 0 00:33:13.390 Status Code: 0x2 00:33:13.390 Status Code Type: 0x0 00:33:13.390 Do Not Retry: 1 00:33:13.390 Error Location: 0x28 00:33:13.390 LBA: 0x0 00:33:13.390 Namespace: 0x0 00:33:13.390 Vendor Log Page: 0x0 00:33:13.390 00:33:13.390 Number of Queues 00:33:13.390 ================ 00:33:13.390 Number of I/O Submission Queues: 128 00:33:13.390 Number of I/O Completion Queues: 128 00:33:13.390 00:33:13.390 ZNS Specific Controller Data 00:33:13.390 ============================ 00:33:13.390 Zone Append Size Limit: 0 00:33:13.390 00:33:13.390 00:33:13.390 Active Namespaces 00:33:13.390 ================= 00:33:13.390 get_feature(0x05) failed 00:33:13.390 Namespace ID:1 00:33:13.390 Command Set Identifier: NVM (00h) 00:33:13.390 Deallocate: Supported 00:33:13.390 Deallocated/Unwritten Error: Not Supported 00:33:13.390 Deallocated Read Value: Unknown 00:33:13.390 Deallocate in Write Zeroes: Not Supported 00:33:13.390 Deallocated Guard Field: 0xFFFF 00:33:13.390 Flush: Supported 00:33:13.390 Reservation: Not Supported 00:33:13.390 Namespace Sharing Capabilities: Multiple Controllers 00:33:13.390 Size (in LBAs): 1953525168 (931GiB) 00:33:13.390 Capacity (in LBAs): 1953525168 (931GiB) 00:33:13.390 Utilization (in LBAs): 1953525168 (931GiB) 00:33:13.390 UUID: f2eba57f-06a9-41f6-b823-6db2d647ffc3 00:33:13.390 Thin Provisioning: Not Supported 00:33:13.390 Per-NS Atomic Units: Yes 00:33:13.390 Atomic Boundary Size (Normal): 0 00:33:13.390 Atomic Boundary Size (PFail): 0 00:33:13.390 Atomic Boundary Offset: 0 00:33:13.390 NGUID/EUI64 Never Reused: No 00:33:13.390 ANA group ID: 1 00:33:13.390 Namespace Write Protected: No 00:33:13.390 Number of LBA Formats: 1 00:33:13.390 Current LBA Format: LBA Format #00 00:33:13.390 LBA Format #00: Data Size: 512 Metadata Size: 0 00:33:13.390 00:33:13.390 11:45:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:33:13.390 11:45:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:13.390 11:45:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:33:13.390 11:45:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:13.390 11:45:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:33:13.390 11:45:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:13.390 11:45:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:13.390 rmmod nvme_tcp 00:33:13.390 rmmod nvme_fabrics 00:33:13.390 11:45:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:13.390 11:45:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:33:13.390 11:45:13 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:33:13.390 11:45:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:33:13.390 11:45:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:13.390 11:45:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:13.390 11:45:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:13.390 11:45:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:33:13.390 11:45:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:33:13.390 11:45:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:13.390 11:45:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:33:13.390 11:45:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:13.390 11:45:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:13.390 11:45:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:13.390 11:45:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:13.390 11:45:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:15.925 11:45:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:15.925 11:45:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:33:15.925 11:45:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:33:15.925 11:45:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:33:15.925 11:45:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:15.925 11:45:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:15.926 11:45:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:15.926 11:45:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:15.926 11:45:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:33:15.926 11:45:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:33:15.926 11:45:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:16.862 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:16.862 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:16.862 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:16.862 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:16.862 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:16.862 0000:00:04.2 
(8086 0e22): ioatdma -> vfio-pci 00:33:16.862 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:16.862 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:16.862 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:16.862 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:16.862 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:16.862 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:16.862 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:16.862 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:16.862 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:16.862 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:17.802 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:33:17.802 00:33:17.802 real 0m9.572s 00:33:17.802 user 0m2.139s 00:33:17.802 sys 0m3.453s 00:33:17.802 11:45:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:17.802 11:45:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:33:17.802 ************************************ 00:33:17.802 END TEST nvmf_identify_kernel_target 00:33:17.802 ************************************ 00:33:17.802 11:45:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:33:17.802 11:45:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:33:17.802 11:45:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:17.802 11:45:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.802 ************************************ 00:33:17.802 START TEST nvmf_auth_host 00:33:17.802 ************************************ 00:33:17.802 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:33:18.060 * Looking for test storage... 
00:33:18.061 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:18.061 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:18.061 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lcov --version 00:33:18.061 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:18.061 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:18.061 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:18.061 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:18.061 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:18.061 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:33:18.061 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:33:18.061 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:33:18.061 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:33:18.061 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:33:18.061 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:33:18.061 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:33:18.061 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:18.061 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:33:18.061 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:33:18.061 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:18.061 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:18.061 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:33:18.061 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:33:18.061 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:18.061 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:33:18.061 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:33:18.061 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:33:18.061 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:33:18.061 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:18.061 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:33:18.061 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:33:18.061 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:18.061 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:18.061 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:33:18.061 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:18.061 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:18.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:18.061 --rc genhtml_branch_coverage=1 00:33:18.061 --rc genhtml_function_coverage=1 00:33:18.061 --rc genhtml_legend=1 00:33:18.061 --rc geninfo_all_blocks=1 00:33:18.061 --rc geninfo_unexecuted_blocks=1 00:33:18.061 00:33:18.061 ' 00:33:18.061 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:18.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:18.061 --rc genhtml_branch_coverage=1 00:33:18.061 --rc genhtml_function_coverage=1 00:33:18.061 --rc genhtml_legend=1 00:33:18.061 --rc geninfo_all_blocks=1 00:33:18.061 --rc geninfo_unexecuted_blocks=1 00:33:18.061 00:33:18.061 ' 00:33:18.061 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:18.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:18.061 --rc genhtml_branch_coverage=1 00:33:18.061 --rc genhtml_function_coverage=1 00:33:18.061 --rc genhtml_legend=1 00:33:18.061 --rc geninfo_all_blocks=1 00:33:18.061 --rc geninfo_unexecuted_blocks=1 00:33:18.061 00:33:18.061 ' 00:33:18.061 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:18.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:18.061 --rc genhtml_branch_coverage=1 00:33:18.061 --rc genhtml_function_coverage=1 00:33:18.061 --rc genhtml_legend=1 00:33:18.061 --rc geninfo_all_blocks=1 00:33:18.061 --rc geninfo_unexecuted_blocks=1 00:33:18.061 00:33:18.061 ' 00:33:18.061 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:18.061 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:33:18.061 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:18.061 11:45:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:18.061 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:18.061 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:18.061 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:18.061 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:18.061 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:18.061 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:18.061 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:18.061 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:18.061 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:18.061 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:18.061 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:18.061 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:18.061 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:18.061 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:18.061 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:18.061 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:33:18.061 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:18.061 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:18.061 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:18.061 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:18.061 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:18.061 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:18.061 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:33:18.061 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:18.061 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:33:18.061 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:18.061 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:18.061 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:18.061 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:18.061 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:18.061 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:18.061 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:18.061 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:18.061 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:18.061 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:18.061 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:33:18.061 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:33:18.061 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:33:18.061 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:33:18.061 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:18.061 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:33:18.062 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:33:18.062 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:33:18.062 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:33:18.062 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:18.062 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:18.062 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:18.062 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:18.062 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:18.062 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:18.062 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:18.062 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:18.062 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:18.062 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:18.062 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:33:18.062 11:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.963 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:19.963 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:33:19.963 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:19.963 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:19.963 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:19.963 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:19.963 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:19.963 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:33:19.963 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:19.963 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:33:19.963 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:33:19.963 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:33:19.963 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:33:19.963 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:33:19.963 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:33:19.963 11:45:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:19.963 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:19.963 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:19.963 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:19.963 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:19.963 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:19.963 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:19.963 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:19.963 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:19.963 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:19.963 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:19.963 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:19.963 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:19.963 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:19.963 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:19.963 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:19.963 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:19.963 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:19.963 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:19.963 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:19.963 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:19.963 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:19.963 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:19.963 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:19.963 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:19.963 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:19.963 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:19.963 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:19.963 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:19.963 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:19.963 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:19.963 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:19.963 
11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:19.963 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:19.963 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:19.963 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:19.963 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:19.963 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:19.963 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:19.963 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:19.963 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:19.963 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:19.963 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:19.963 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:19.964 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:19.964 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:19.964 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:19.964 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:19.964 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:19.964 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:19.964 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:19.964 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:19.964 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:19.964 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:19.964 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:19.964 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:19.964 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:19.964 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:19.964 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:33:19.964 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:19.964 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:19.964 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:19.964 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:19.964 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:19.964 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:19.964 11:45:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:19.964 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:19.964 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:19.964 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:19.964 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:19.964 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:19.964 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:19.964 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:19.964 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:19.964 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:19.964 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:19.964 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:19.964 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:19.964 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:19.964 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:19.964 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:20.223 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:20.223 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:20.223 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:20.223 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:20.223 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:20.223 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:33:20.223 00:33:20.223 --- 10.0.0.2 ping statistics --- 00:33:20.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:20.223 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:33:20.223 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:20.223 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:20.223 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:33:20.223 00:33:20.223 --- 10.0.0.1 ping statistics --- 00:33:20.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:20.223 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:33:20.223 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:20.223 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:33:20.223 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:20.223 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:20.223 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:20.223 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:20.223 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:20.223 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:20.223 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:20.223 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:33:20.223 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:20.223 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:20.223 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.223 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=3962058 00:33:20.223 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:33:20.223 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 3962058 00:33:20.223 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 3962058 ']' 00:33:20.223 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:20.223 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:20.223 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
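The nvmfappstart/waitforlisten sequence traced here follows a simple start-then-poll pattern. A rough sketch under the values visible in this run (rpc_addr=/var/tmp/spdk.sock, max_retries=100) is given below; it is not the actual waitforlisten implementation from autotest_common.sh, and the rpc_get_methods probe is only an assumption about how readiness is typically checked against the SPDK RPC socket:

# Start nvmf_tgt inside the target netns and wait for its RPC socket (sketch)
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
nvmfpid=$!
rpc_addr=/var/tmp/spdk.sock
max_retries=100
for ((i = 0; i < max_retries; i++)); do
    # bail out if the target process died before listening
    kill -0 "$nvmfpid" 2> /dev/null || { echo "nvmf_tgt exited early"; exit 1; }
    # consider the app ready once the RPC socket answers a basic request
    if [[ -S $rpc_addr ]] && ./scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
        break
    fi
    sleep 0.5
done

With the target up, the trace continues with the DH-HMAC-CHAP key generation (gen_dhchap_key) used by the auth test.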
00:33:20.223 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:20.223 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.481 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:20.481 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:33:20.481 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:20.481 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:20.481 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.481 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:20.481 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:33:20.481 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:33:20.481 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:33:20.481 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:20.482 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:33:20.482 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:33:20.482 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:33:20.482 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:20.482 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a2584b9dea01dc9f7a136a797527e21f 00:33:20.482 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:33:20.482 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.rNa 00:33:20.482 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a2584b9dea01dc9f7a136a797527e21f 0 00:33:20.482 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a2584b9dea01dc9f7a136a797527e21f 0 00:33:20.482 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:33:20.482 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:33:20.482 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=a2584b9dea01dc9f7a136a797527e21f 00:33:20.482 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:33:20.482 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:33:20.482 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.rNa 00:33:20.482 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.rNa 00:33:20.482 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.rNa 00:33:20.482 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:33:20.482 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:33:20.482 11:45:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:20.482 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:33:20.482 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:33:20.482 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:33:20.482 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:33:20.482 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=75a1190f05b3b81bb10173126cfd23b6284343c3526eb12b4294c956a03ba851 00:33:20.482 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:33:20.482 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.cH5 00:33:20.482 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 75a1190f05b3b81bb10173126cfd23b6284343c3526eb12b4294c956a03ba851 3 00:33:20.482 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 75a1190f05b3b81bb10173126cfd23b6284343c3526eb12b4294c956a03ba851 3 00:33:20.482 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:33:20.482 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:33:20.482 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=75a1190f05b3b81bb10173126cfd23b6284343c3526eb12b4294c956a03ba851 00:33:20.482 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:33:20.482 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:33:20.482 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.cH5 00:33:20.482 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.cH5 00:33:20.482 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.cH5 00:33:20.482 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:33:20.482 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:33:20.482 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:20.482 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:33:20.482 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:33:20.482 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:33:20.482 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:33:20.482 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=50573f985f3504ff00a86371dba3c0f39914d9e1ee37baad 00:33:20.482 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:33:20.482 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.cDV 00:33:20.482 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 50573f985f3504ff00a86371dba3c0f39914d9e1ee37baad 0 00:33:20.482 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 50573f985f3504ff00a86371dba3c0f39914d9e1ee37baad 0 
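gen_dhchap_key, traced above, is what produces each /tmp/spdk.key-* file: it pulls len/2 random bytes out of /dev/urandom with xxd, wraps the resulting hex string into a DHHC-1 secret via an inline python step, and locks the file down to mode 0600. Below is a standalone sketch of the same idea; the xxd, mktemp and chmod calls mirror the trace verbatim, while the base64-plus-CRC-32 wrapping is an assumption based on the usual NVMe-oF secret representation (the helper's python body is not shown in the trace). The two digits after "DHHC-1:" encode the digest, 00/01/02/03 for null/sha256/sha384/sha512, matching the digests map in the trace.

  # Rough standalone equivalent of "gen_dhchap_key null 48" (a sketch, not the exact helper).
  hexkey=$(xxd -p -c0 -l 24 /dev/urandom)        # 48 hex characters, as in the trace
  secret=$(python3 - "$hexkey" <<'PYEOF'
  import sys, base64, zlib
  k = sys.argv[1].encode()                       # the ASCII hex string is used as the secret bytes
  crc = zlib.crc32(k).to_bytes(4, "little")      # CRC-32 appended little-endian (assumed convention)
  print("DHHC-1:00:" + base64.b64encode(k + crc).decode() + ":")
  PYEOF
  )
  keyfile=$(mktemp -t spdk.key-null.XXX)
  echo "$secret" > "$keyfile"
  chmod 0600 "$keyfile"

The resulting DHHC-1:00:... / DHHC-1:03:... strings are exactly what the test later registers with keyring_file_add_key on the SPDK side and feeds to the in-kernel target.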
00:33:20.482 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:33:20.482 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:33:20.482 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=50573f985f3504ff00a86371dba3c0f39914d9e1ee37baad 00:33:20.482 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:33:20.482 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:33:20.740 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.cDV 00:33:20.740 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.cDV 00:33:20.740 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.cDV 00:33:20.740 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:33:20.740 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:33:20.740 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:20.740 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:33:20.741 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:33:20.741 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:33:20.741 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:33:20.741 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f0e420283d6129d8d46ae0ed9963ba09dd59614b37c712cd 00:33:20.741 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:33:20.741 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.cwY 00:33:20.741 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f0e420283d6129d8d46ae0ed9963ba09dd59614b37c712cd 2 00:33:20.741 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f0e420283d6129d8d46ae0ed9963ba09dd59614b37c712cd 2 00:33:20.741 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:33:20.741 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:33:20.741 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f0e420283d6129d8d46ae0ed9963ba09dd59614b37c712cd 00:33:20.741 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:33:20.741 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:33:20.741 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.cwY 00:33:20.741 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.cwY 00:33:20.741 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.cwY 00:33:20.741 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:33:20.741 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:33:20.741 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:20.741 11:45:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:33:20.741 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:33:20.741 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:33:20.741 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:20.741 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=617a0c476b19358ccdfd0fdd767ff1b5 00:33:20.741 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:33:20.741 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.RZq 00:33:20.741 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 617a0c476b19358ccdfd0fdd767ff1b5 1 00:33:20.741 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 617a0c476b19358ccdfd0fdd767ff1b5 1 00:33:20.741 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:33:20.741 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:33:20.741 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=617a0c476b19358ccdfd0fdd767ff1b5 00:33:20.741 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:33:20.741 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:33:20.741 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.RZq 00:33:20.741 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.RZq 00:33:20.741 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.RZq 00:33:20.741 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:33:20.741 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:33:20.741 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:20.741 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:33:20.741 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:33:20.741 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:33:20.741 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:20.741 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=bdd85976299e99ef508a1442c8d15588 00:33:20.741 11:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:33:20.741 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Pz5 00:33:20.741 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key bdd85976299e99ef508a1442c8d15588 1 00:33:20.741 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 bdd85976299e99ef508a1442c8d15588 1 00:33:20.741 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:33:20.741 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:33:20.741 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=bdd85976299e99ef508a1442c8d15588 00:33:20.741 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:33:20.741 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:33:20.741 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Pz5 00:33:20.741 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Pz5 00:33:20.741 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.Pz5 00:33:20.741 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:33:20.741 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:33:20.741 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:20.741 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:33:20.741 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:33:20.741 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:33:20.741 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:33:20.741 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=fab73b80e48c82e1699f8aeb3ac3d98aeb7609a9bdfff94e 00:33:20.741 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:33:20.741 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.oVI 00:33:20.741 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key fab73b80e48c82e1699f8aeb3ac3d98aeb7609a9bdfff94e 2 00:33:20.741 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 fab73b80e48c82e1699f8aeb3ac3d98aeb7609a9bdfff94e 2 00:33:20.741 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:33:20.741 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:33:20.741 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=fab73b80e48c82e1699f8aeb3ac3d98aeb7609a9bdfff94e 00:33:20.741 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:33:20.741 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:33:20.741 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.oVI 00:33:20.741 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.oVI 00:33:20.741 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.oVI 00:33:20.741 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:33:20.741 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:33:20.741 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:20.741 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:33:20.741 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:33:20.741 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:33:20.741 11:45:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:20.741 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=3c22f2731c9aec5b20cfdeddfc214684 00:33:20.741 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:33:20.741 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Aj5 00:33:20.741 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3c22f2731c9aec5b20cfdeddfc214684 0 00:33:20.741 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 3c22f2731c9aec5b20cfdeddfc214684 0 00:33:20.741 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:33:20.741 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:33:20.741 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=3c22f2731c9aec5b20cfdeddfc214684 00:33:20.741 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:33:20.741 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:33:21.014 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Aj5 00:33:21.014 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Aj5 00:33:21.014 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.Aj5 00:33:21.014 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:33:21.014 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:33:21.014 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:21.014 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:33:21.014 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:33:21.014 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:33:21.014 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:33:21.014 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=86f044c84e89eb33f8cf807b04e1a908c060e28acd92ae8578eb7f4cc35a4909 00:33:21.014 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:33:21.014 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.3pw 00:33:21.014 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 86f044c84e89eb33f8cf807b04e1a908c060e28acd92ae8578eb7f4cc35a4909 3 00:33:21.014 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 86f044c84e89eb33f8cf807b04e1a908c060e28acd92ae8578eb7f4cc35a4909 3 00:33:21.014 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:33:21.014 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:33:21.014 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=86f044c84e89eb33f8cf807b04e1a908c060e28acd92ae8578eb7f4cc35a4909 00:33:21.014 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:33:21.014 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:33:21.014 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.3pw 00:33:21.014 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.3pw 00:33:21.014 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.3pw 00:33:21.014 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:33:21.014 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3962058 00:33:21.014 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 3962058 ']' 00:33:21.014 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:21.014 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:21.014 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:21.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:21.014 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:21.014 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.315 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:21.315 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:33:21.315 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:21.315 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.rNa 00:33:21.315 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.315 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.315 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.315 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.cH5 ]] 00:33:21.315 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.cH5 00:33:21.315 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.315 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.315 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.315 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:21.315 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.cDV 00:33:21.315 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.315 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.315 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.315 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.cwY ]] 00:33:21.315 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.cwY 00:33:21.315 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.315 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.315 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.315 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:21.315 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.RZq 00:33:21.315 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.315 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.315 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.316 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.Pz5 ]] 00:33:21.316 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Pz5 00:33:21.316 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.316 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.316 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.316 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:21.316 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.oVI 00:33:21.316 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.316 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.316 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.316 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.Aj5 ]] 00:33:21.316 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.Aj5 00:33:21.316 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.316 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.316 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.316 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:21.316 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.3pw 00:33:21.316 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.316 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.316 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.316 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:33:21.316 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:33:21.316 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:33:21.316 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:21.316 11:45:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:21.316 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:21.316 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:21.316 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:21.316 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:21.316 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:21.316 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:21.316 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:21.316 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:21.316 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:33:21.316 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:33:21.316 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:33:21.316 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:21.316 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:33:21.316 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:21.316 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:33:21.316 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:33:21.316 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:33:21.316 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:21.316 11:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:22.691 Waiting for block devices as requested 00:33:22.691 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:33:22.691 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:22.691 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:22.691 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:22.948 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:22.948 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:22.948 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:22.948 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:23.206 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:23.206 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:23.206 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:23.206 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:23.465 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:23.465 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:23.465 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:23.465 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:23.724 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:23.984 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:33:23.984 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:23.984 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:33:23.984 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:33:23.984 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:33:23.984 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:33:23.984 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:33:23.984 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:33:23.984 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:33:24.243 No valid GPT data, bailing 00:33:24.243 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:33:24.243 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:33:24.243 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:33:24.243 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:33:24.243 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:33:24.243 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:24.243 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:33:24.243 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:33:24.243 11:45:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:33:24.243 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:33:24.243 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:33:24.243 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:33:24.243 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:33:24.243 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:33:24.243 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:33:24.243 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:33:24.243 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:33:24.243 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:33:24.243 00:33:24.243 Discovery Log Number of Records 2, Generation counter 2 00:33:24.243 =====Discovery Log Entry 0====== 00:33:24.243 trtype: tcp 00:33:24.243 adrfam: ipv4 00:33:24.243 subtype: current discovery subsystem 00:33:24.243 treq: not specified, sq flow control disable supported 00:33:24.243 portid: 1 00:33:24.243 trsvcid: 4420 00:33:24.243 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:24.243 traddr: 10.0.0.1 00:33:24.243 eflags: none 00:33:24.243 sectype: none 00:33:24.243 =====Discovery Log Entry 1====== 00:33:24.243 trtype: tcp 00:33:24.243 adrfam: ipv4 00:33:24.243 subtype: nvme subsystem 00:33:24.243 treq: not specified, sq flow control disable supported 00:33:24.243 portid: 1 00:33:24.243 trsvcid: 4420 00:33:24.243 subnqn: nqn.2024-02.io.spdk:cnode0 00:33:24.243 traddr: 10.0.0.1 00:33:24.243 eflags: none 00:33:24.243 sectype: none 00:33:24.243 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:33:24.243 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:33:24.243 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:33:24.243 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:33:24.243 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:24.243 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:24.243 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:24.243 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:24.243 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA1NzNmOTg1ZjM1MDRmZjAwYTg2MzcxZGJhM2MwZjM5OTE0ZDllMWVlMzdiYWFkdMxfqA==: 00:33:24.243 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjBlNDIwMjgzZDYxMjlkOGQ0NmFlMGVkOTk2M2JhMDlkZDU5NjE0YjM3YzcxMmNkZaofZw==: 00:33:24.243 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:24.243 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:33:24.243 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA1NzNmOTg1ZjM1MDRmZjAwYTg2MzcxZGJhM2MwZjM5OTE0ZDllMWVlMzdiYWFkdMxfqA==: 00:33:24.243 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjBlNDIwMjgzZDYxMjlkOGQ0NmFlMGVkOTk2M2JhMDlkZDU5NjE0YjM3YzcxMmNkZaofZw==: ]] 00:33:24.243 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjBlNDIwMjgzZDYxMjlkOGQ0NmFlMGVkOTk2M2JhMDlkZDU5NjE0YjM3YzcxMmNkZaofZw==: 00:33:24.243 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:33:24.243 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:33:24.243 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:33:24.243 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:33:24.243 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:33:24.243 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:24.243 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:33:24.243 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:33:24.243 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:24.243 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:24.243 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:33:24.243 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:24.243 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.243 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:24.243 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:24.243 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:24.243 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:24.243 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:24.243 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:24.243 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:24.243 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:24.244 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:24.244 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:24.244 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:24.244 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:24.244 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:24.244 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:24.244 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.502 nvme0n1 00:33:24.502 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:24.502 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:24.502 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:24.502 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:24.502 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.502 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:24.502 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:24.502 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:24.502 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:24.502 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.502 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:24.502 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:33:24.502 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:24.502 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:24.502 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:33:24.502 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:24.502 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:24.502 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:24.502 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:24.502 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTI1ODRiOWRlYTAxZGM5ZjdhMTM2YTc5NzUyN2UyMWZPGVg5: 00:33:24.502 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzVhMTE5MGYwNWIzYjgxYmIxMDE3MzEyNmNmZDIzYjYyODQzNDNjMzUyNmViMTJiNDI5NGM5NTZhMDNiYTg1MQfNTuQ=: 00:33:24.502 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:24.502 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:24.502 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTI1ODRiOWRlYTAxZGM5ZjdhMTM2YTc5NzUyN2UyMWZPGVg5: 00:33:24.502 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzVhMTE5MGYwNWIzYjgxYmIxMDE3MzEyNmNmZDIzYjYyODQzNDNjMzUyNmViMTJiNDI5NGM5NTZhMDNiYTg1MQfNTuQ=: ]] 00:33:24.502 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzVhMTE5MGYwNWIzYjgxYmIxMDE3MzEyNmNmZDIzYjYyODQzNDNjMzUyNmViMTJiNDI5NGM5NTZhMDNiYTg1MQfNTuQ=: 00:33:24.502 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
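From here the test settles into one pattern per digest/dhgroup/keyid combination: nvmet_auth_set_key hands the chosen hash, DH group and the two DHHC-1 secrets to the in-kernel target (the trace only shows the echoed values, not the redirections, so the configfs attribute names in the sketch are assumptions), and connect_authenticate then drives the SPDK initiator over RPC with the matching --dhchap options. Stripped of the rpc_cmd/xtrace wrapping, one iteration looks roughly like the sketch below; the rpc.py path is assumed, while the RPC names, flags, key names, NQNs and address are lifted from the trace.

  # --- target side: per-host DH-HMAC-CHAP material for the in-kernel nvmet target
  #     (attribute file names under the host directory are an assumption)
  HOST=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  KEY=$(cat /tmp/spdk.key-null.rNa)        # keys[0], the host secret
  CKEY=$(cat /tmp/spdk.key-sha512.cH5)     # ckeys[0], the controller secret
  echo 'hmac(sha256)' > "$HOST/dhchap_hash"
  echo 'ffdhe2048'    > "$HOST/dhchap_dhgroup"
  echo "$KEY"         > "$HOST/dhchap_key"
  echo "$CKEY"        > "$HOST/dhchap_ctrl_key"

  # --- host side: SPDK initiator, driven through the RPC socket of the nvmf_tgt
  #     started earlier inside the namespace (rpc.py location assumed)
  RPC="./scripts/rpc.py"
  $RPC keyring_file_add_key key0  /tmp/spdk.key-null.rNa    # registered once, earlier in the trace
  $RPC keyring_file_add_key ckey0 /tmp/spdk.key-sha512.cH5
  $RPC bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  $RPC bdev_nvme_get_controllers           # expect "nvme0" in the output
  $RPC bdev_nvme_detach_controller nvme0

A successful CHAP exchange is what makes nvme0 appear in bdev_nvme_get_controllers; the detach at the end clears the state so the next digest/dhgroup/key combination can be tried, which is the loop the remainder of the log works through.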
00:33:24.502 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:24.502 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:24.502 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:24.502 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:24.502 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:24.502 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:24.502 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:24.502 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.502 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:24.502 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:24.502 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:24.502 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:24.502 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:24.502 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:24.502 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:24.502 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:24.502 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:24.502 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:24.502 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:24.502 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:24.502 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:24.502 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:24.502 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.760 nvme0n1 00:33:24.760 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:24.760 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:24.760 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:24.760 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:24.760 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.760 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:24.760 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:24.761 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:24.761 11:45:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:24.761 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.761 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:24.761 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:24.761 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:33:24.761 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:24.761 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:24.761 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:24.761 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:24.761 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA1NzNmOTg1ZjM1MDRmZjAwYTg2MzcxZGJhM2MwZjM5OTE0ZDllMWVlMzdiYWFkdMxfqA==: 00:33:24.761 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjBlNDIwMjgzZDYxMjlkOGQ0NmFlMGVkOTk2M2JhMDlkZDU5NjE0YjM3YzcxMmNkZaofZw==: 00:33:24.761 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:24.761 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:24.761 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA1NzNmOTg1ZjM1MDRmZjAwYTg2MzcxZGJhM2MwZjM5OTE0ZDllMWVlMzdiYWFkdMxfqA==: 00:33:24.761 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjBlNDIwMjgzZDYxMjlkOGQ0NmFlMGVkOTk2M2JhMDlkZDU5NjE0YjM3YzcxMmNkZaofZw==: ]] 00:33:24.761 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjBlNDIwMjgzZDYxMjlkOGQ0NmFlMGVkOTk2M2JhMDlkZDU5NjE0YjM3YzcxMmNkZaofZw==: 00:33:24.761 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:33:24.761 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:24.761 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:24.761 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:24.761 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:24.761 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:24.761 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:24.761 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:24.761 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.761 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:24.761 11:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:24.761 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:24.761 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:24.761 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:24.761 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:24.761 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:24.761 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:24.761 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:24.761 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:24.761 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:24.761 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:24.761 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:24.761 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:24.761 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.761 nvme0n1 00:33:24.761 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:24.761 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:24.761 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:24.761 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:24.761 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.019 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:25.019 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:25.019 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:25.019 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:25.019 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.019 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:25.019 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:25.019 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:33:25.019 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:25.019 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:25.019 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:25.019 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:25.019 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjE3YTBjNDc2YjE5MzU4Y2NkZmQwZmRkNzY3ZmYxYjVdFqkp: 00:33:25.019 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmRkODU5NzYyOTllOTllZjUwOGExNDQyYzhkMTU1ODjy9hNE: 00:33:25.019 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:25.019 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:25.019 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:NjE3YTBjNDc2YjE5MzU4Y2NkZmQwZmRkNzY3ZmYxYjVdFqkp: 00:33:25.019 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmRkODU5NzYyOTllOTllZjUwOGExNDQyYzhkMTU1ODjy9hNE: ]] 00:33:25.019 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmRkODU5NzYyOTllOTllZjUwOGExNDQyYzhkMTU1ODjy9hNE: 00:33:25.019 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:33:25.019 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:25.019 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:25.019 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:25.019 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:25.019 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:25.019 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:25.019 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:25.019 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.019 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:25.019 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:25.019 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:25.019 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:25.019 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:25.019 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:25.019 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:25.019 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:25.019 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:25.019 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:25.019 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:25.019 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:25.019 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:25.019 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:25.019 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.019 nvme0n1 00:33:25.019 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:25.019 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:25.019 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:25.019 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:33:25.019 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.019 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:25.019 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:25.019 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:25.019 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:25.019 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.279 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:25.279 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:25.279 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:33:25.279 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:25.279 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:25.279 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:25.279 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:25.279 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmFiNzNiODBlNDhjODJlMTY5OWY4YWViM2FjM2Q5OGFlYjc2MDlhOWJkZmZmOTRlTPB8Fw==: 00:33:25.279 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2MyMmYyNzMxYzlhZWM1YjIwY2ZkZWRkZmMyMTQ2ODRk+oUL: 00:33:25.279 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:25.279 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:25.279 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmFiNzNiODBlNDhjODJlMTY5OWY4YWViM2FjM2Q5OGFlYjc2MDlhOWJkZmZmOTRlTPB8Fw==: 00:33:25.279 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2MyMmYyNzMxYzlhZWM1YjIwY2ZkZWRkZmMyMTQ2ODRk+oUL: ]] 00:33:25.279 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2MyMmYyNzMxYzlhZWM1YjIwY2ZkZWRkZmMyMTQ2ODRk+oUL: 00:33:25.279 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:33:25.279 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:25.279 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:25.279 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:25.279 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:25.279 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:25.279 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:25.279 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:25.279 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.279 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:25.279 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:33:25.279 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:25.279 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:25.279 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:25.279 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:25.279 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:25.279 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:25.279 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:25.279 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:25.279 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:25.279 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:25.279 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:25.279 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:25.279 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.279 nvme0n1 00:33:25.279 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:25.279 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:25.279 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:25.279 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.279 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:25.279 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:25.279 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:25.279 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:25.279 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:25.279 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.279 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:25.279 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:25.279 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:33:25.279 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:25.279 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:25.279 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:25.279 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:25.279 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ODZmMDQ0Yzg0ZTg5ZWIzM2Y4Y2Y4MDdiMDRlMWE5MDhjMDYwZTI4YWNkOTJhZTg1NzhlYjdmNGNjMzVhNDkwOd2VR+Q=: 00:33:25.279 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:25.279 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:25.279 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:25.279 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODZmMDQ0Yzg0ZTg5ZWIzM2Y4Y2Y4MDdiMDRlMWE5MDhjMDYwZTI4YWNkOTJhZTg1NzhlYjdmNGNjMzVhNDkwOd2VR+Q=: 00:33:25.279 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:25.279 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:33:25.279 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:25.279 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:25.279 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:25.279 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:25.279 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:25.279 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:25.279 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:25.279 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.279 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:25.279 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:25.279 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:25.279 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:25.279 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:25.279 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:25.279 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:25.279 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:25.279 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:25.280 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:25.280 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:25.280 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:25.280 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:25.280 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:25.280 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.539 nvme0n1 00:33:25.539 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:25.539 11:45:25 
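For orientation, the trace above is one pass of the test's per-key cycle: host/auth.sh programs a DHHC-1 secret on the target side via nvmet_auth_set_key, then connect_authenticate re-configures the SPDK host and attaches a controller with the matching key. A condensed sketch of one iteration, using only the commands visible in the trace; nvmet_auth_set_key and rpc_cmd are the suite's own helpers, and rpc_cmd is assumed to wrap the SPDK JSON-RPC client:

    # one iteration of the cycle traced above (sha256 digest, ffdhe2048 group)
    dhgroup=ffdhe2048; keyid=2                      # key ids 0-4 are exercised for each group
    nvmet_auth_set_key sha256 "$dhgroup" "$keyid"   # push 'hmac(sha256)', the dhgroup and the DHHC-1 key/ckey to the target
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"    # the ctrlr-key flag is dropped when no ckey is configured (key id 4)
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]   # attach only shows up if DH-HMAC-CHAP auth passed
    rpc_cmd bdev_nvme_detach_controller nvme0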
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:25.539 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:25.539 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.539 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:25.539 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:25.539 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:25.539 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:25.539 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:25.539 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.539 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:25.539 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:25.539 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:25.539 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:33:25.539 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:25.539 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:25.539 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:25.539 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:25.539 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTI1ODRiOWRlYTAxZGM5ZjdhMTM2YTc5NzUyN2UyMWZPGVg5: 00:33:25.539 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzVhMTE5MGYwNWIzYjgxYmIxMDE3MzEyNmNmZDIzYjYyODQzNDNjMzUyNmViMTJiNDI5NGM5NTZhMDNiYTg1MQfNTuQ=: 00:33:25.539 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:25.539 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:25.539 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTI1ODRiOWRlYTAxZGM5ZjdhMTM2YTc5NzUyN2UyMWZPGVg5: 00:33:25.539 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzVhMTE5MGYwNWIzYjgxYmIxMDE3MzEyNmNmZDIzYjYyODQzNDNjMzUyNmViMTJiNDI5NGM5NTZhMDNiYTg1MQfNTuQ=: ]] 00:33:25.539 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzVhMTE5MGYwNWIzYjgxYmIxMDE3MzEyNmNmZDIzYjYyODQzNDNjMzUyNmViMTJiNDI5NGM5NTZhMDNiYTg1MQfNTuQ=: 00:33:25.539 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:33:25.539 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:25.539 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:25.539 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:25.539 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:25.539 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:25.539 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:25.539 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:25.539 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.539 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:25.539 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:25.539 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:25.539 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:25.539 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:25.539 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:25.539 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:25.539 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:25.539 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:25.539 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:25.539 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:25.539 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:25.539 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:25.539 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:25.539 11:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.797 nvme0n1 00:33:25.797 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:25.797 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:25.797 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:25.797 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:25.797 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.797 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:25.797 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:25.797 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:25.797 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:25.797 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.797 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:25.797 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:25.797 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:33:25.797 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:33:25.797 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:25.797 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:25.797 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:25.797 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA1NzNmOTg1ZjM1MDRmZjAwYTg2MzcxZGJhM2MwZjM5OTE0ZDllMWVlMzdiYWFkdMxfqA==: 00:33:25.797 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjBlNDIwMjgzZDYxMjlkOGQ0NmFlMGVkOTk2M2JhMDlkZDU5NjE0YjM3YzcxMmNkZaofZw==: 00:33:25.797 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:25.797 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:25.797 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA1NzNmOTg1ZjM1MDRmZjAwYTg2MzcxZGJhM2MwZjM5OTE0ZDllMWVlMzdiYWFkdMxfqA==: 00:33:25.797 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjBlNDIwMjgzZDYxMjlkOGQ0NmFlMGVkOTk2M2JhMDlkZDU5NjE0YjM3YzcxMmNkZaofZw==: ]] 00:33:25.797 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjBlNDIwMjgzZDYxMjlkOGQ0NmFlMGVkOTk2M2JhMDlkZDU5NjE0YjM3YzcxMmNkZaofZw==: 00:33:25.797 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:33:25.797 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:25.797 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:25.797 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:25.797 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:25.797 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:25.797 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:25.797 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:25.797 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.797 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:25.797 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:25.797 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:25.797 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:25.797 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:25.797 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:25.797 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:25.797 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:25.797 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:25.797 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:25.797 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:25.797 
11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:25.797 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:25.797 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:25.797 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.066 nvme0n1 00:33:26.066 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:26.067 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:26.067 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:26.067 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.067 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:26.067 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:26.067 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:26.067 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:26.067 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:26.067 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.067 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:26.067 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:26.067 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:33:26.067 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:26.067 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:26.067 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:26.067 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:26.067 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjE3YTBjNDc2YjE5MzU4Y2NkZmQwZmRkNzY3ZmYxYjVdFqkp: 00:33:26.067 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmRkODU5NzYyOTllOTllZjUwOGExNDQyYzhkMTU1ODjy9hNE: 00:33:26.067 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:26.067 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:26.067 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjE3YTBjNDc2YjE5MzU4Y2NkZmQwZmRkNzY3ZmYxYjVdFqkp: 00:33:26.067 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmRkODU5NzYyOTllOTllZjUwOGExNDQyYzhkMTU1ODjy9hNE: ]] 00:33:26.067 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmRkODU5NzYyOTllOTllZjUwOGExNDQyYzhkMTU1ODjy9hNE: 00:33:26.067 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:33:26.067 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:26.067 11:45:26 
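The get_main_ns_ip expansion that repeats before every attach in this trace only resolves which address variable applies to the active transport and prints its value (10.0.0.1 on this run). A minimal reconstruction from the expansions shown; the transport variable, written here as TEST_TRANSPORT, and the ${!ip} indirection are assumptions about the helper's internals:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            [rdma]=NVMF_FIRST_TARGET_IP   # names of environment variables, not addresses
            [tcp]=NVMF_INITIATOR_IP
        )
        ip=${ip_candidates[$TEST_TRANSPORT]}   # the trace shows the transport already expanded to "tcp"
        [[ -n ${!ip} ]] && echo "${!ip}"       # -> 10.0.0.1 in this run
    }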
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:26.067 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:26.067 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:26.067 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:26.067 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:26.067 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:26.067 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.067 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:26.067 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:26.067 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:26.067 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:26.067 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:26.067 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:26.067 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:26.067 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:26.067 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:26.067 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:26.067 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:26.067 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:26.067 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:26.067 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:26.067 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.325 nvme0n1 00:33:26.325 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:26.325 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:26.325 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:26.325 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:26.325 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.325 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:26.325 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:26.325 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:26.325 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:26.325 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:33:26.325 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:26.325 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:26.325 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:33:26.325 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:26.325 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:26.325 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:26.325 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:26.325 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmFiNzNiODBlNDhjODJlMTY5OWY4YWViM2FjM2Q5OGFlYjc2MDlhOWJkZmZmOTRlTPB8Fw==: 00:33:26.325 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2MyMmYyNzMxYzlhZWM1YjIwY2ZkZWRkZmMyMTQ2ODRk+oUL: 00:33:26.325 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:26.325 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:26.325 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmFiNzNiODBlNDhjODJlMTY5OWY4YWViM2FjM2Q5OGFlYjc2MDlhOWJkZmZmOTRlTPB8Fw==: 00:33:26.325 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2MyMmYyNzMxYzlhZWM1YjIwY2ZkZWRkZmMyMTQ2ODRk+oUL: ]] 00:33:26.325 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2MyMmYyNzMxYzlhZWM1YjIwY2ZkZWRkZmMyMTQ2ODRk+oUL: 00:33:26.325 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:33:26.325 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:26.325 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:26.325 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:26.325 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:26.325 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:26.325 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:26.325 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:26.325 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.325 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:26.325 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:26.325 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:26.325 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:26.325 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:26.325 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:26.325 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:26.325 11:45:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:26.325 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:26.325 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:26.325 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:26.325 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:26.325 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:26.325 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:26.325 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.583 nvme0n1 00:33:26.583 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:26.583 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:26.583 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:26.583 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:26.583 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.583 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:26.583 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:26.583 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:26.583 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:26.583 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.583 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:26.583 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:26.583 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:33:26.583 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:26.583 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:26.583 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:26.583 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:26.583 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODZmMDQ0Yzg0ZTg5ZWIzM2Y4Y2Y4MDdiMDRlMWE5MDhjMDYwZTI4YWNkOTJhZTg1NzhlYjdmNGNjMzVhNDkwOd2VR+Q=: 00:33:26.583 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:26.583 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:26.583 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:26.583 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODZmMDQ0Yzg0ZTg5ZWIzM2Y4Y2Y4MDdiMDRlMWE5MDhjMDYwZTI4YWNkOTJhZTg1NzhlYjdmNGNjMzVhNDkwOd2VR+Q=: 00:33:26.584 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:26.584 11:45:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:33:26.584 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:26.584 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:26.584 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:26.584 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:26.584 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:26.584 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:26.584 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:26.584 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.584 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:26.584 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:26.584 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:26.584 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:26.584 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:26.584 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:26.584 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:26.584 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:26.584 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:26.584 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:26.584 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:26.584 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:26.584 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:26.584 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:26.584 11:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.842 nvme0n1 00:33:26.842 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:26.842 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:26.842 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:26.842 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:26.842 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.842 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:26.842 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:26.842 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:33:26.842 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:26.842 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.842 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:26.842 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:26.842 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:26.842 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:33:26.842 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:26.842 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:26.842 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:26.842 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:26.842 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTI1ODRiOWRlYTAxZGM5ZjdhMTM2YTc5NzUyN2UyMWZPGVg5: 00:33:26.842 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzVhMTE5MGYwNWIzYjgxYmIxMDE3MzEyNmNmZDIzYjYyODQzNDNjMzUyNmViMTJiNDI5NGM5NTZhMDNiYTg1MQfNTuQ=: 00:33:26.842 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:26.842 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:26.842 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTI1ODRiOWRlYTAxZGM5ZjdhMTM2YTc5NzUyN2UyMWZPGVg5: 00:33:26.842 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzVhMTE5MGYwNWIzYjgxYmIxMDE3MzEyNmNmZDIzYjYyODQzNDNjMzUyNmViMTJiNDI5NGM5NTZhMDNiYTg1MQfNTuQ=: ]] 00:33:26.842 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzVhMTE5MGYwNWIzYjgxYmIxMDE3MzEyNmNmZDIzYjYyODQzNDNjMzUyNmViMTJiNDI5NGM5NTZhMDNiYTg1MQfNTuQ=: 00:33:26.842 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:33:26.842 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:26.842 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:26.842 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:26.842 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:26.842 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:26.842 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:26.842 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:26.842 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.842 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:26.842 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:26.842 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:26.842 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:33:26.842 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:26.842 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:26.842 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:26.842 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:26.842 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:26.842 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:26.842 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:26.842 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:26.842 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:26.842 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:26.842 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.100 nvme0n1 00:33:27.100 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:27.100 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:27.100 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:27.100 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:27.100 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.360 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:27.360 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:27.360 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:27.360 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:27.360 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.360 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:27.360 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:27.360 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:33:27.360 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:27.360 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:27.360 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:27.360 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:27.360 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA1NzNmOTg1ZjM1MDRmZjAwYTg2MzcxZGJhM2MwZjM5OTE0ZDllMWVlMzdiYWFkdMxfqA==: 00:33:27.360 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjBlNDIwMjgzZDYxMjlkOGQ0NmFlMGVkOTk2M2JhMDlkZDU5NjE0YjM3YzcxMmNkZaofZw==: 00:33:27.360 11:45:27 
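The cycle keeps repeating because host/auth.sh sweeps every key id under every DH group; the stretch of the log shown here covers ffdhe2048, ffdhe3072 and now ffdhe4096, all under the sha256 digest. The driving loops, as echoed in the trace (sha256 stands in for the digest loop variable, since it is the only digest exercised in this part of the run):

    for dhgroup in "${dhgroups[@]}"; do      # ffdhe2048, ffdhe3072, ffdhe4096 in this part of the log
        for keyid in "${!keys[@]}"; do       # key ids 0-4
            nvmet_auth_set_key sha256 "$dhgroup" "$keyid"
            connect_authenticate sha256 "$dhgroup" "$keyid"
        done
    done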
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:27.360 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:27.360 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA1NzNmOTg1ZjM1MDRmZjAwYTg2MzcxZGJhM2MwZjM5OTE0ZDllMWVlMzdiYWFkdMxfqA==: 00:33:27.360 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjBlNDIwMjgzZDYxMjlkOGQ0NmFlMGVkOTk2M2JhMDlkZDU5NjE0YjM3YzcxMmNkZaofZw==: ]] 00:33:27.360 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjBlNDIwMjgzZDYxMjlkOGQ0NmFlMGVkOTk2M2JhMDlkZDU5NjE0YjM3YzcxMmNkZaofZw==: 00:33:27.360 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:33:27.360 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:27.360 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:27.360 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:27.360 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:27.360 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:27.360 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:27.360 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:27.360 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.360 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:27.360 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:27.360 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:27.360 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:27.360 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:27.360 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:27.360 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:27.360 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:27.360 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:27.360 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:27.360 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:27.360 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:27.360 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:27.360 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:27.360 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.620 nvme0n1 00:33:27.620 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:33:27.620 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:27.620 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:27.620 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.620 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:27.620 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:27.620 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:27.620 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:27.620 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:27.620 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.620 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:27.620 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:27.620 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:33:27.620 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:27.620 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:27.620 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:27.620 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:27.621 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjE3YTBjNDc2YjE5MzU4Y2NkZmQwZmRkNzY3ZmYxYjVdFqkp: 00:33:27.621 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmRkODU5NzYyOTllOTllZjUwOGExNDQyYzhkMTU1ODjy9hNE: 00:33:27.621 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:27.621 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:27.621 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjE3YTBjNDc2YjE5MzU4Y2NkZmQwZmRkNzY3ZmYxYjVdFqkp: 00:33:27.621 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmRkODU5NzYyOTllOTllZjUwOGExNDQyYzhkMTU1ODjy9hNE: ]] 00:33:27.621 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmRkODU5NzYyOTllOTllZjUwOGExNDQyYzhkMTU1ODjy9hNE: 00:33:27.621 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:33:27.621 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:27.621 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:27.621 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:27.621 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:27.621 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:27.621 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:27.621 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:33:27.621 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.621 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:27.621 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:27.621 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:27.621 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:27.621 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:27.621 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:27.621 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:27.621 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:27.621 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:27.621 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:27.621 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:27.621 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:27.621 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:27.621 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:27.621 11:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.880 nvme0n1 00:33:27.880 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:27.880 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:27.880 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:27.880 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:27.880 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.880 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:27.880 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:27.880 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:27.880 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:27.880 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.880 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:27.880 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:27.880 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:33:27.880 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:27.880 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:27.880 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:33:27.880 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:27.880 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmFiNzNiODBlNDhjODJlMTY5OWY4YWViM2FjM2Q5OGFlYjc2MDlhOWJkZmZmOTRlTPB8Fw==: 00:33:27.880 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2MyMmYyNzMxYzlhZWM1YjIwY2ZkZWRkZmMyMTQ2ODRk+oUL: 00:33:27.880 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:27.880 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:27.880 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmFiNzNiODBlNDhjODJlMTY5OWY4YWViM2FjM2Q5OGFlYjc2MDlhOWJkZmZmOTRlTPB8Fw==: 00:33:27.880 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2MyMmYyNzMxYzlhZWM1YjIwY2ZkZWRkZmMyMTQ2ODRk+oUL: ]] 00:33:27.880 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2MyMmYyNzMxYzlhZWM1YjIwY2ZkZWRkZmMyMTQ2ODRk+oUL: 00:33:27.880 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:33:27.880 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:27.880 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:27.880 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:27.880 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:27.880 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:27.880 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:27.880 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:27.880 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:28.140 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:28.140 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:28.140 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:28.140 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:28.140 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:28.140 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:28.140 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:28.140 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:28.140 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:28.140 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:28.140 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:28.140 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:28.140 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:28.140 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:28.140 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:28.398 nvme0n1 00:33:28.398 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:28.398 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:28.398 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:28.399 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:28.399 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:28.399 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:28.399 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:28.399 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:28.399 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:28.399 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:28.399 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:28.399 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:28.399 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:33:28.399 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:28.399 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:28.399 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:28.399 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:28.399 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODZmMDQ0Yzg0ZTg5ZWIzM2Y4Y2Y4MDdiMDRlMWE5MDhjMDYwZTI4YWNkOTJhZTg1NzhlYjdmNGNjMzVhNDkwOd2VR+Q=: 00:33:28.399 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:28.399 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:28.399 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:28.399 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODZmMDQ0Yzg0ZTg5ZWIzM2Y4Y2Y4MDdiMDRlMWE5MDhjMDYwZTI4YWNkOTJhZTg1NzhlYjdmNGNjMzVhNDkwOd2VR+Q=: 00:33:28.399 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:28.399 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:33:28.399 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:28.399 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:28.399 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:28.399 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:28.399 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:28.399 11:45:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:28.399 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:28.399 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:28.399 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:28.399 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:28.399 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:28.399 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:28.399 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:28.399 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:28.399 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:28.399 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:28.399 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:28.399 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:28.399 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:28.399 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:28.399 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:28.399 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:28.399 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:28.657 nvme0n1 00:33:28.657 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:28.657 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:28.657 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:28.657 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:28.657 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:28.657 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:28.657 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:28.657 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:28.657 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:28.657 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:28.657 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:28.657 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:28.657 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:28.657 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:33:28.657 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:28.657 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:28.657 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:28.657 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:28.657 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTI1ODRiOWRlYTAxZGM5ZjdhMTM2YTc5NzUyN2UyMWZPGVg5: 00:33:28.657 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzVhMTE5MGYwNWIzYjgxYmIxMDE3MzEyNmNmZDIzYjYyODQzNDNjMzUyNmViMTJiNDI5NGM5NTZhMDNiYTg1MQfNTuQ=: 00:33:28.657 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:28.657 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:28.657 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTI1ODRiOWRlYTAxZGM5ZjdhMTM2YTc5NzUyN2UyMWZPGVg5: 00:33:28.657 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzVhMTE5MGYwNWIzYjgxYmIxMDE3MzEyNmNmZDIzYjYyODQzNDNjMzUyNmViMTJiNDI5NGM5NTZhMDNiYTg1MQfNTuQ=: ]] 00:33:28.657 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzVhMTE5MGYwNWIzYjgxYmIxMDE3MzEyNmNmZDIzYjYyODQzNDNjMzUyNmViMTJiNDI5NGM5NTZhMDNiYTg1MQfNTuQ=: 00:33:28.657 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:33:28.657 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:28.657 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:28.657 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:28.657 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:28.657 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:28.657 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:28.657 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:28.657 11:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:28.657 11:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:28.657 11:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:28.657 11:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:28.657 11:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:28.657 11:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:28.657 11:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:28.657 11:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:28.657 11:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:28.658 11:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:28.658 11:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:33:28.658 11:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:28.658 11:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:28.658 11:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:28.658 11:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:28.658 11:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:29.224 nvme0n1 00:33:29.224 11:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:29.224 11:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:29.224 11:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:29.224 11:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:29.224 11:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:29.224 11:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:29.224 11:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:29.224 11:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:29.224 11:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:29.224 11:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:29.224 11:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:29.224 11:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:29.224 11:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:33:29.224 11:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:29.224 11:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:29.224 11:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:29.224 11:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:29.224 11:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA1NzNmOTg1ZjM1MDRmZjAwYTg2MzcxZGJhM2MwZjM5OTE0ZDllMWVlMzdiYWFkdMxfqA==: 00:33:29.224 11:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjBlNDIwMjgzZDYxMjlkOGQ0NmFlMGVkOTk2M2JhMDlkZDU5NjE0YjM3YzcxMmNkZaofZw==: 00:33:29.224 11:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:29.224 11:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:29.224 11:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA1NzNmOTg1ZjM1MDRmZjAwYTg2MzcxZGJhM2MwZjM5OTE0ZDllMWVlMzdiYWFkdMxfqA==: 00:33:29.224 11:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjBlNDIwMjgzZDYxMjlkOGQ0NmFlMGVkOTk2M2JhMDlkZDU5NjE0YjM3YzcxMmNkZaofZw==: ]] 00:33:29.224 11:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjBlNDIwMjgzZDYxMjlkOGQ0NmFlMGVkOTk2M2JhMDlkZDU5NjE0YjM3YzcxMmNkZaofZw==: 
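Note: the trace above (and below) repeats one connect/verify/detach cycle per digest, DH group and key index. The following is a condensed, standalone sketch of a single iteration; the RPC names, flags, address 10.0.0.1:4420 and NQNs are taken verbatim from this trace, while the rpc.py path and the assumption that keyN/ckeyN were registered earlier in the test are not shown here and are only assumed.

#!/usr/bin/env bash
# Sketch of one connect_authenticate pass as seen in the surrounding trace.
# Assumptions: SPDK's scripts/rpc.py is reachable at $rpc, the target already
# listens on 10.0.0.1:4420 as nqn.2024-02.io.spdk:cnode0, and key${keyid} /
# ckey${keyid} name DH-CHAP keys registered earlier in the test.
rpc=./scripts/rpc.py
digest=sha256
dhgroup=ffdhe6144
keyid=1

# Restrict the host to the digest/dhgroup combination under test.
$rpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Attach with the host key; the controller key is passed only when a ckey
# exists for this keyid (the trace shows ckey empty for keyid 4).
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

# Verify the authenticated controller came up, then tear it down
# before the next digest/dhgroup/keyid combination.
[[ "$($rpc bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]]
$rpc bdev_nvme_detach_controller nvme0

The remainder of the trace runs this same cycle for ffdhe6144 and ffdhe8192 with sha256, and then starts over with sha384 and ffdhe2048.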
00:33:29.224 11:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:33:29.224 11:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:29.224 11:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:29.224 11:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:29.224 11:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:29.224 11:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:29.224 11:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:29.224 11:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:29.224 11:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:29.224 11:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:29.224 11:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:29.224 11:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:29.224 11:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:29.224 11:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:29.224 11:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:29.224 11:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:29.224 11:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:29.224 11:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:29.224 11:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:29.224 11:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:29.224 11:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:29.224 11:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:29.224 11:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:29.224 11:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:29.791 nvme0n1 00:33:29.791 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:29.791 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:29.791 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:29.791 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:29.791 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:29.791 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:30.049 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:30.049 11:45:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:30.049 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:30.049 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:30.049 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:30.049 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:30.049 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:33:30.049 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:30.049 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:30.049 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:30.049 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:30.049 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjE3YTBjNDc2YjE5MzU4Y2NkZmQwZmRkNzY3ZmYxYjVdFqkp: 00:33:30.049 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmRkODU5NzYyOTllOTllZjUwOGExNDQyYzhkMTU1ODjy9hNE: 00:33:30.049 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:30.049 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:30.049 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjE3YTBjNDc2YjE5MzU4Y2NkZmQwZmRkNzY3ZmYxYjVdFqkp: 00:33:30.049 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmRkODU5NzYyOTllOTllZjUwOGExNDQyYzhkMTU1ODjy9hNE: ]] 00:33:30.049 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmRkODU5NzYyOTllOTllZjUwOGExNDQyYzhkMTU1ODjy9hNE: 00:33:30.049 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:33:30.049 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:30.049 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:30.049 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:30.049 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:30.049 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:30.049 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:30.049 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:30.049 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:30.050 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:30.050 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:30.050 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:30.050 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:30.050 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:30.050 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:30.050 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:30.050 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:30.050 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:30.050 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:30.050 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:30.050 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:30.050 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:30.050 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:30.050 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:30.620 nvme0n1 00:33:30.620 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:30.620 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:30.620 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:30.620 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:30.620 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:30.620 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:30.620 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:30.620 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:30.620 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:30.620 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:30.620 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:30.620 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:30.620 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:33:30.620 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:30.620 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:30.620 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:30.620 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:30.620 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmFiNzNiODBlNDhjODJlMTY5OWY4YWViM2FjM2Q5OGFlYjc2MDlhOWJkZmZmOTRlTPB8Fw==: 00:33:30.620 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2MyMmYyNzMxYzlhZWM1YjIwY2ZkZWRkZmMyMTQ2ODRk+oUL: 00:33:30.620 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:30.620 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:30.620 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:ZmFiNzNiODBlNDhjODJlMTY5OWY4YWViM2FjM2Q5OGFlYjc2MDlhOWJkZmZmOTRlTPB8Fw==: 00:33:30.620 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2MyMmYyNzMxYzlhZWM1YjIwY2ZkZWRkZmMyMTQ2ODRk+oUL: ]] 00:33:30.620 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2MyMmYyNzMxYzlhZWM1YjIwY2ZkZWRkZmMyMTQ2ODRk+oUL: 00:33:30.620 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:33:30.620 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:30.620 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:30.620 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:30.620 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:30.620 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:30.620 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:30.620 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:30.620 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:30.620 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:30.620 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:30.620 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:30.620 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:30.620 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:30.620 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:30.620 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:30.620 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:30.620 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:30.620 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:30.620 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:30.620 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:30.620 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:30.620 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:30.620 11:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.189 nvme0n1 00:33:31.189 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.189 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:31.189 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:31.189 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.189 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.189 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.189 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:31.189 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:31.189 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.189 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.189 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.189 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:31.189 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:33:31.189 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:31.189 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:31.189 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:31.189 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:31.189 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODZmMDQ0Yzg0ZTg5ZWIzM2Y4Y2Y4MDdiMDRlMWE5MDhjMDYwZTI4YWNkOTJhZTg1NzhlYjdmNGNjMzVhNDkwOd2VR+Q=: 00:33:31.189 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:31.189 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:31.189 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:31.189 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODZmMDQ0Yzg0ZTg5ZWIzM2Y4Y2Y4MDdiMDRlMWE5MDhjMDYwZTI4YWNkOTJhZTg1NzhlYjdmNGNjMzVhNDkwOd2VR+Q=: 00:33:31.189 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:31.189 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:33:31.189 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:31.189 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:31.189 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:31.189 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:31.189 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:31.189 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:31.189 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.189 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.189 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.189 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:31.189 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:31.189 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:33:31.189 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:31.189 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:31.189 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:31.189 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:31.189 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:31.189 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:31.189 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:31.189 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:31.189 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:31.189 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.189 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.757 nvme0n1 00:33:31.757 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.757 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:31.757 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.757 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.757 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:31.757 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.757 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:31.757 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:31.757 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.757 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.757 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.757 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:31.757 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:31.757 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:33:31.757 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:31.757 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:31.757 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:31.757 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:31.757 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTI1ODRiOWRlYTAxZGM5ZjdhMTM2YTc5NzUyN2UyMWZPGVg5: 00:33:31.757 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NzVhMTE5MGYwNWIzYjgxYmIxMDE3MzEyNmNmZDIzYjYyODQzNDNjMzUyNmViMTJiNDI5NGM5NTZhMDNiYTg1MQfNTuQ=: 00:33:31.757 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:31.757 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:31.757 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTI1ODRiOWRlYTAxZGM5ZjdhMTM2YTc5NzUyN2UyMWZPGVg5: 00:33:31.757 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzVhMTE5MGYwNWIzYjgxYmIxMDE3MzEyNmNmZDIzYjYyODQzNDNjMzUyNmViMTJiNDI5NGM5NTZhMDNiYTg1MQfNTuQ=: ]] 00:33:31.757 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzVhMTE5MGYwNWIzYjgxYmIxMDE3MzEyNmNmZDIzYjYyODQzNDNjMzUyNmViMTJiNDI5NGM5NTZhMDNiYTg1MQfNTuQ=: 00:33:31.757 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:33:31.757 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:31.757 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:31.757 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:31.757 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:31.757 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:31.757 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:31.757 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.757 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.757 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.757 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:31.757 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:31.757 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:31.757 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:31.757 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:31.757 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:31.757 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:31.757 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:31.757 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:31.757 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:31.757 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:31.757 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:31.757 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.757 11:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:33:32.698 nvme0n1 00:33:32.698 11:45:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:32.698 11:45:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:32.698 11:45:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:32.698 11:45:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:32.698 11:45:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.698 11:45:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:32.698 11:45:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:32.698 11:45:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:32.698 11:45:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:32.698 11:45:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.698 11:45:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:32.698 11:45:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:32.698 11:45:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:33:32.698 11:45:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:32.698 11:45:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:32.698 11:45:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:32.698 11:45:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:32.698 11:45:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA1NzNmOTg1ZjM1MDRmZjAwYTg2MzcxZGJhM2MwZjM5OTE0ZDllMWVlMzdiYWFkdMxfqA==: 00:33:32.698 11:45:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjBlNDIwMjgzZDYxMjlkOGQ0NmFlMGVkOTk2M2JhMDlkZDU5NjE0YjM3YzcxMmNkZaofZw==: 00:33:32.698 11:45:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:32.698 11:45:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:32.698 11:45:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA1NzNmOTg1ZjM1MDRmZjAwYTg2MzcxZGJhM2MwZjM5OTE0ZDllMWVlMzdiYWFkdMxfqA==: 00:33:32.698 11:45:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjBlNDIwMjgzZDYxMjlkOGQ0NmFlMGVkOTk2M2JhMDlkZDU5NjE0YjM3YzcxMmNkZaofZw==: ]] 00:33:32.698 11:45:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjBlNDIwMjgzZDYxMjlkOGQ0NmFlMGVkOTk2M2JhMDlkZDU5NjE0YjM3YzcxMmNkZaofZw==: 00:33:32.698 11:45:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:33:32.698 11:45:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:32.698 11:45:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:32.698 11:45:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:32.698 11:45:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:32.698 11:45:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:33:32.698 11:45:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:32.698 11:45:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:32.698 11:45:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.698 11:45:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:32.698 11:45:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:32.698 11:45:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:32.698 11:45:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:32.698 11:45:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:32.698 11:45:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:32.698 11:45:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:32.698 11:45:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:32.698 11:45:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:32.698 11:45:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:32.698 11:45:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:32.698 11:45:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:32.698 11:45:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:32.698 11:45:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:32.698 11:45:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:33.632 nvme0n1 00:33:33.632 11:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:33.632 11:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:33.632 11:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:33.632 11:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:33.632 11:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:33.632 11:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:33.892 11:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:33.892 11:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:33.892 11:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:33.892 11:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:33.892 11:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:33.892 11:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:33.892 11:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:33:33.892 
11:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:33.892 11:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:33.892 11:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:33.892 11:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:33.892 11:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjE3YTBjNDc2YjE5MzU4Y2NkZmQwZmRkNzY3ZmYxYjVdFqkp: 00:33:33.892 11:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmRkODU5NzYyOTllOTllZjUwOGExNDQyYzhkMTU1ODjy9hNE: 00:33:33.892 11:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:33.892 11:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:33.892 11:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjE3YTBjNDc2YjE5MzU4Y2NkZmQwZmRkNzY3ZmYxYjVdFqkp: 00:33:33.892 11:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmRkODU5NzYyOTllOTllZjUwOGExNDQyYzhkMTU1ODjy9hNE: ]] 00:33:33.892 11:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmRkODU5NzYyOTllOTllZjUwOGExNDQyYzhkMTU1ODjy9hNE: 00:33:33.892 11:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:33:33.892 11:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:33.892 11:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:33.892 11:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:33.892 11:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:33.892 11:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:33.892 11:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:33.892 11:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:33.892 11:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:33.892 11:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:33.892 11:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:33.892 11:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:33.892 11:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:33.892 11:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:33.892 11:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:33.892 11:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:33.892 11:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:33.892 11:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:33.892 11:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:33.892 11:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:33.892 11:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:33.892 11:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:33.892 11:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:33.892 11:45:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:34.833 nvme0n1 00:33:34.833 11:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:34.833 11:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:34.833 11:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:34.833 11:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:34.833 11:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:34.833 11:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:34.833 11:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:34.833 11:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:34.833 11:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:34.833 11:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:34.833 11:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:34.833 11:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:34.833 11:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:33:34.833 11:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:34.833 11:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:34.833 11:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:34.833 11:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:34.833 11:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmFiNzNiODBlNDhjODJlMTY5OWY4YWViM2FjM2Q5OGFlYjc2MDlhOWJkZmZmOTRlTPB8Fw==: 00:33:34.833 11:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2MyMmYyNzMxYzlhZWM1YjIwY2ZkZWRkZmMyMTQ2ODRk+oUL: 00:33:34.833 11:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:34.833 11:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:34.833 11:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmFiNzNiODBlNDhjODJlMTY5OWY4YWViM2FjM2Q5OGFlYjc2MDlhOWJkZmZmOTRlTPB8Fw==: 00:33:34.833 11:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2MyMmYyNzMxYzlhZWM1YjIwY2ZkZWRkZmMyMTQ2ODRk+oUL: ]] 00:33:34.833 11:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2MyMmYyNzMxYzlhZWM1YjIwY2ZkZWRkZmMyMTQ2ODRk+oUL: 00:33:34.833 11:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:33:34.833 11:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:34.833 
11:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:34.833 11:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:34.833 11:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:34.833 11:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:34.833 11:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:34.833 11:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:34.833 11:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:34.833 11:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:34.833 11:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:34.833 11:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:34.833 11:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:34.833 11:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:34.833 11:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:34.833 11:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:34.833 11:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:34.833 11:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:34.833 11:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:34.833 11:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:34.833 11:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:34.833 11:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:34.833 11:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:34.833 11:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:35.772 nvme0n1 00:33:35.772 11:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:35.772 11:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:35.772 11:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:35.772 11:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:35.772 11:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:35.772 11:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:35.772 11:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:35.772 11:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:35.772 11:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:35.772 11:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:33:35.772 11:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:35.772 11:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:35.772 11:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:33:35.772 11:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:35.772 11:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:35.772 11:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:35.772 11:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:35.772 11:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODZmMDQ0Yzg0ZTg5ZWIzM2Y4Y2Y4MDdiMDRlMWE5MDhjMDYwZTI4YWNkOTJhZTg1NzhlYjdmNGNjMzVhNDkwOd2VR+Q=: 00:33:35.772 11:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:35.772 11:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:35.772 11:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:35.772 11:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODZmMDQ0Yzg0ZTg5ZWIzM2Y4Y2Y4MDdiMDRlMWE5MDhjMDYwZTI4YWNkOTJhZTg1NzhlYjdmNGNjMzVhNDkwOd2VR+Q=: 00:33:35.772 11:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:35.772 11:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:33:35.772 11:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:35.772 11:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:35.772 11:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:35.772 11:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:35.772 11:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:35.772 11:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:35.772 11:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:35.772 11:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:35.772 11:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:35.772 11:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:35.772 11:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:35.772 11:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:35.772 11:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:35.772 11:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:35.772 11:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:35.772 11:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:35.772 11:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:35.772 11:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:35.772 11:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:35.772 11:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:35.772 11:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:35.772 11:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:35.772 11:45:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:36.712 nvme0n1 00:33:36.712 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:36.712 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:36.712 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:36.712 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:36.712 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:36.712 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:36.712 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:36.712 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:36.712 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:36.713 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:36.713 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:36.713 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:33:36.713 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:36.713 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:36.713 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:33:36.713 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:36.713 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:36.713 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:36.713 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:36.713 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTI1ODRiOWRlYTAxZGM5ZjdhMTM2YTc5NzUyN2UyMWZPGVg5: 00:33:36.713 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzVhMTE5MGYwNWIzYjgxYmIxMDE3MzEyNmNmZDIzYjYyODQzNDNjMzUyNmViMTJiNDI5NGM5NTZhMDNiYTg1MQfNTuQ=: 00:33:36.713 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:36.713 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:36.713 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTI1ODRiOWRlYTAxZGM5ZjdhMTM2YTc5NzUyN2UyMWZPGVg5: 00:33:36.713 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:NzVhMTE5MGYwNWIzYjgxYmIxMDE3MzEyNmNmZDIzYjYyODQzNDNjMzUyNmViMTJiNDI5NGM5NTZhMDNiYTg1MQfNTuQ=: ]] 00:33:36.713 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzVhMTE5MGYwNWIzYjgxYmIxMDE3MzEyNmNmZDIzYjYyODQzNDNjMzUyNmViMTJiNDI5NGM5NTZhMDNiYTg1MQfNTuQ=: 00:33:36.713 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:33:36.713 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:36.713 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:36.713 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:36.713 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:36.713 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:36.713 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:36.713 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:36.713 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:36.713 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:36.713 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:36.713 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:36.713 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:36.713 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:36.713 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:36.713 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:36.713 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:36.713 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:36.713 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:36.713 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:36.713 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:36.713 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:36.713 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:36.713 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:36.972 nvme0n1 00:33:36.972 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:36.972 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:36.972 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:36.972 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:36.972 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:33:36.972 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:36.972 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:36.972 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:36.972 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:36.972 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:36.972 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:36.972 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:36.972 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:33:36.972 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:36.972 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:36.972 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:36.972 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:36.972 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA1NzNmOTg1ZjM1MDRmZjAwYTg2MzcxZGJhM2MwZjM5OTE0ZDllMWVlMzdiYWFkdMxfqA==: 00:33:36.972 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjBlNDIwMjgzZDYxMjlkOGQ0NmFlMGVkOTk2M2JhMDlkZDU5NjE0YjM3YzcxMmNkZaofZw==: 00:33:36.972 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:36.972 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:36.972 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA1NzNmOTg1ZjM1MDRmZjAwYTg2MzcxZGJhM2MwZjM5OTE0ZDllMWVlMzdiYWFkdMxfqA==: 00:33:36.972 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjBlNDIwMjgzZDYxMjlkOGQ0NmFlMGVkOTk2M2JhMDlkZDU5NjE0YjM3YzcxMmNkZaofZw==: ]] 00:33:36.972 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjBlNDIwMjgzZDYxMjlkOGQ0NmFlMGVkOTk2M2JhMDlkZDU5NjE0YjM3YzcxMmNkZaofZw==: 00:33:36.972 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:33:36.972 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:36.972 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:36.972 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:36.972 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:36.972 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:36.972 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:36.972 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:36.972 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:36.972 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:36.972 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:33:36.972 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:36.972 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:36.972 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:36.972 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:36.972 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:36.972 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:36.972 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:36.972 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:36.972 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:36.972 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:36.972 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:36.972 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:36.972 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.231 nvme0n1 00:33:37.231 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:37.231 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:37.231 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:37.231 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:37.231 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.231 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:37.231 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:37.231 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:37.231 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:37.231 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.231 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:37.231 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:37.231 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:33:37.231 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:37.231 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:37.231 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:37.231 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:37.231 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjE3YTBjNDc2YjE5MzU4Y2NkZmQwZmRkNzY3ZmYxYjVdFqkp: 00:33:37.231 11:45:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmRkODU5NzYyOTllOTllZjUwOGExNDQyYzhkMTU1ODjy9hNE: 00:33:37.231 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:37.231 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:37.231 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjE3YTBjNDc2YjE5MzU4Y2NkZmQwZmRkNzY3ZmYxYjVdFqkp: 00:33:37.231 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmRkODU5NzYyOTllOTllZjUwOGExNDQyYzhkMTU1ODjy9hNE: ]] 00:33:37.231 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmRkODU5NzYyOTllOTllZjUwOGExNDQyYzhkMTU1ODjy9hNE: 00:33:37.231 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:33:37.231 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:37.231 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:37.231 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:37.231 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:37.231 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:37.231 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:37.231 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:37.231 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.231 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:37.231 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:37.231 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:37.231 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:37.231 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:37.231 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:37.231 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:37.231 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:37.231 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:37.231 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:37.231 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:37.231 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:37.231 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:37.231 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:37.231 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.489 nvme0n1 00:33:37.489 11:45:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:37.489 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:37.489 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:37.489 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:37.489 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.489 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:37.489 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:37.489 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:37.489 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:37.489 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.489 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:37.489 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:37.489 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:33:37.489 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:37.489 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:37.489 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:37.489 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:37.489 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmFiNzNiODBlNDhjODJlMTY5OWY4YWViM2FjM2Q5OGFlYjc2MDlhOWJkZmZmOTRlTPB8Fw==: 00:33:37.489 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2MyMmYyNzMxYzlhZWM1YjIwY2ZkZWRkZmMyMTQ2ODRk+oUL: 00:33:37.489 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:37.489 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:37.489 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmFiNzNiODBlNDhjODJlMTY5OWY4YWViM2FjM2Q5OGFlYjc2MDlhOWJkZmZmOTRlTPB8Fw==: 00:33:37.489 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2MyMmYyNzMxYzlhZWM1YjIwY2ZkZWRkZmMyMTQ2ODRk+oUL: ]] 00:33:37.489 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2MyMmYyNzMxYzlhZWM1YjIwY2ZkZWRkZmMyMTQ2ODRk+oUL: 00:33:37.489 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:33:37.489 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:37.489 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:37.489 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:37.489 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:37.489 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:37.489 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:33:37.489 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:37.489 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.489 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:37.489 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:37.489 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:37.489 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:37.489 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:37.489 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:37.489 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:37.489 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:37.489 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:37.489 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:37.489 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:37.489 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:37.489 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:37.489 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:37.489 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.748 nvme0n1 00:33:37.748 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:37.748 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:37.748 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:37.748 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:37.748 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.748 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:37.748 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:37.748 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:37.748 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:37.748 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.748 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:37.748 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:37.748 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:33:37.748 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:37.748 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:33:37.748 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:37.748 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:37.748 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODZmMDQ0Yzg0ZTg5ZWIzM2Y4Y2Y4MDdiMDRlMWE5MDhjMDYwZTI4YWNkOTJhZTg1NzhlYjdmNGNjMzVhNDkwOd2VR+Q=: 00:33:37.748 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:37.748 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:37.748 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:37.748 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODZmMDQ0Yzg0ZTg5ZWIzM2Y4Y2Y4MDdiMDRlMWE5MDhjMDYwZTI4YWNkOTJhZTg1NzhlYjdmNGNjMzVhNDkwOd2VR+Q=: 00:33:37.748 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:37.748 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:33:37.748 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:37.748 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:37.748 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:37.748 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:37.748 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:37.748 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:37.748 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:37.748 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.748 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:37.748 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:37.748 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:37.748 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:37.748 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:37.748 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:37.748 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:37.748 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:37.748 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:37.748 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:37.748 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:37.748 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:37.748 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:37.748 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:33:37.748 11:45:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.748 nvme0n1 00:33:37.748 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:37.748 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:37.748 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:37.748 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:37.748 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.007 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.007 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:38.007 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:38.007 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.007 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.007 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.007 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:38.007 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:38.007 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:33:38.007 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:38.007 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:38.007 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:38.007 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:38.007 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTI1ODRiOWRlYTAxZGM5ZjdhMTM2YTc5NzUyN2UyMWZPGVg5: 00:33:38.007 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzVhMTE5MGYwNWIzYjgxYmIxMDE3MzEyNmNmZDIzYjYyODQzNDNjMzUyNmViMTJiNDI5NGM5NTZhMDNiYTg1MQfNTuQ=: 00:33:38.007 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:38.007 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:38.007 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTI1ODRiOWRlYTAxZGM5ZjdhMTM2YTc5NzUyN2UyMWZPGVg5: 00:33:38.007 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzVhMTE5MGYwNWIzYjgxYmIxMDE3MzEyNmNmZDIzYjYyODQzNDNjMzUyNmViMTJiNDI5NGM5NTZhMDNiYTg1MQfNTuQ=: ]] 00:33:38.007 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzVhMTE5MGYwNWIzYjgxYmIxMDE3MzEyNmNmZDIzYjYyODQzNDNjMzUyNmViMTJiNDI5NGM5NTZhMDNiYTg1MQfNTuQ=: 00:33:38.007 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:33:38.007 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:38.007 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:38.007 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:33:38.007 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:38.007 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:38.007 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:38.007 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.007 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.007 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.007 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:38.007 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:38.007 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:38.007 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:38.007 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:38.007 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:38.007 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:38.007 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:38.007 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:38.007 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:38.008 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:38.008 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:38.008 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.008 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.008 nvme0n1 00:33:38.008 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.008 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:38.008 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.008 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.008 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:38.266 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.266 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:38.266 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:38.266 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.266 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.267 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.267 
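[editor's note] Each keyid iteration in the trace above follows the same cycle: connect_authenticate() sets the initiator-side DH-HMAC-CHAP options, attaches the controller with the host key (and the controller key, when that keyid defines one), checks that a controller named nvme0 appears, and detaches it. A minimal stand-alone sketch of that cycle, assuming rpc_cmd forwards to SPDK's scripts/rpc.py (the harness wrapper itself is not shown in this excerpt) and reusing the sha384/ffdhe3072 keyid-0 values from the trace; key0/ckey0 are key names the script registered earlier, outside this excerpt:

  # Sketch only - reconstructed from the xtrace above; the rpc.py path is an assumption.
  rpc="scripts/rpc.py"
  $rpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  [[ $($rpc bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]   # authentication succeeded
  $rpc bdev_nvme_detach_controller nvme0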
11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:38.267 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:33:38.267 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:38.267 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:38.267 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:38.267 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:38.267 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA1NzNmOTg1ZjM1MDRmZjAwYTg2MzcxZGJhM2MwZjM5OTE0ZDllMWVlMzdiYWFkdMxfqA==: 00:33:38.267 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjBlNDIwMjgzZDYxMjlkOGQ0NmFlMGVkOTk2M2JhMDlkZDU5NjE0YjM3YzcxMmNkZaofZw==: 00:33:38.267 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:38.267 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:38.267 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA1NzNmOTg1ZjM1MDRmZjAwYTg2MzcxZGJhM2MwZjM5OTE0ZDllMWVlMzdiYWFkdMxfqA==: 00:33:38.267 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjBlNDIwMjgzZDYxMjlkOGQ0NmFlMGVkOTk2M2JhMDlkZDU5NjE0YjM3YzcxMmNkZaofZw==: ]] 00:33:38.267 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjBlNDIwMjgzZDYxMjlkOGQ0NmFlMGVkOTk2M2JhMDlkZDU5NjE0YjM3YzcxMmNkZaofZw==: 00:33:38.267 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:33:38.267 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:38.267 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:38.267 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:38.267 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:38.267 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:38.267 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:38.267 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.267 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.267 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.267 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:38.267 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:38.267 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:38.267 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:38.267 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:38.267 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:38.267 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:38.267 11:45:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:38.267 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:38.267 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:38.267 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:38.267 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:38.267 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.267 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.267 nvme0n1 00:33:38.267 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.267 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:38.267 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.267 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.267 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:38.525 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.525 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:38.525 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:38.525 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.525 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.525 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.525 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:38.525 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:33:38.525 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:38.525 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:38.525 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:38.525 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:38.525 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjE3YTBjNDc2YjE5MzU4Y2NkZmQwZmRkNzY3ZmYxYjVdFqkp: 00:33:38.525 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmRkODU5NzYyOTllOTllZjUwOGExNDQyYzhkMTU1ODjy9hNE: 00:33:38.525 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:38.525 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:38.525 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjE3YTBjNDc2YjE5MzU4Y2NkZmQwZmRkNzY3ZmYxYjVdFqkp: 00:33:38.525 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmRkODU5NzYyOTllOTllZjUwOGExNDQyYzhkMTU1ODjy9hNE: ]] 00:33:38.525 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:YmRkODU5NzYyOTllOTllZjUwOGExNDQyYzhkMTU1ODjy9hNE: 00:33:38.525 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:33:38.525 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:38.525 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:38.525 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:38.525 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:38.525 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:38.525 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:38.525 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.525 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.525 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.525 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:38.525 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:38.525 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:38.525 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:38.525 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:38.525 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:38.525 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:38.525 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:38.525 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:38.525 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:38.525 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:38.525 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:38.525 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.525 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.784 nvme0n1 00:33:38.784 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.784 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:38.784 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:38.784 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.784 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.784 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.784 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:33:38.784 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:38.784 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.784 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.784 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.784 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:38.784 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:33:38.784 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:38.784 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:38.784 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:38.784 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:38.784 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmFiNzNiODBlNDhjODJlMTY5OWY4YWViM2FjM2Q5OGFlYjc2MDlhOWJkZmZmOTRlTPB8Fw==: 00:33:38.784 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2MyMmYyNzMxYzlhZWM1YjIwY2ZkZWRkZmMyMTQ2ODRk+oUL: 00:33:38.784 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:38.784 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:38.784 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmFiNzNiODBlNDhjODJlMTY5OWY4YWViM2FjM2Q5OGFlYjc2MDlhOWJkZmZmOTRlTPB8Fw==: 00:33:38.784 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2MyMmYyNzMxYzlhZWM1YjIwY2ZkZWRkZmMyMTQ2ODRk+oUL: ]] 00:33:38.784 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2MyMmYyNzMxYzlhZWM1YjIwY2ZkZWRkZmMyMTQ2ODRk+oUL: 00:33:38.784 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:33:38.784 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:38.785 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:38.785 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:38.785 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:38.785 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:38.785 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:38.785 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.785 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.785 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.785 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:38.785 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:38.785 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:38.785 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:33:38.785 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:38.785 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:38.785 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:38.785 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:38.785 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:38.785 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:38.785 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:38.785 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:38.785 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.785 11:45:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.044 nvme0n1 00:33:39.044 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.044 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:39.044 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:39.044 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.044 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.044 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.044 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:39.044 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:39.044 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.044 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.044 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.044 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:39.044 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:33:39.044 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:39.044 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:39.044 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:39.044 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:39.044 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODZmMDQ0Yzg0ZTg5ZWIzM2Y4Y2Y4MDdiMDRlMWE5MDhjMDYwZTI4YWNkOTJhZTg1NzhlYjdmNGNjMzVhNDkwOd2VR+Q=: 00:33:39.044 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:39.044 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:39.044 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:39.044 
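[editor's note] The nvmet_auth_set_key calls (host/auth.sh@42-51 in the trace) stage the matching secret on the target side before each connection attempt: they echo the digest as 'hmac(shaXXX)', the DH group, the DHHC-1 host key and, when the keyid has one, the DHHC-1 controller key. The leading 00/01/02/03 field of a DHHC-1 secret conventionally records which hash the secret was generated for (00 meaning an unhashed secret), with the key material carried in the base64 payload. Where those echoes land is not visible in this excerpt; on a Linux soft target they would typically go to the per-host nvmet configfs attributes, roughly as below (the configfs paths and host directory name are assumptions, the echoed values are copied from the sha384/ffdhe3072 keyid-4 step above):

  # Sketch, assuming a Linux kernel nvmet target; attribute paths are an assumption.
  host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha384)' > "$host_dir/dhchap_hash"
  echo ffdhe3072      > "$host_dir/dhchap_dhgroup"
  echo 'DHHC-1:03:ODZmMDQ0Yzg0ZTg5ZWIzM2Y4Y2Y4MDdiMDRlMWE5MDhjMDYwZTI4YWNkOTJhZTg1NzhlYjdmNGNjMzVhNDkwOd2VR+Q=:' > "$host_dir/dhchap_key"
  # keyid 4 has no controller key (ckey is empty in the trace), so dhchap_ctrl_key is left unset.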
11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODZmMDQ0Yzg0ZTg5ZWIzM2Y4Y2Y4MDdiMDRlMWE5MDhjMDYwZTI4YWNkOTJhZTg1NzhlYjdmNGNjMzVhNDkwOd2VR+Q=: 00:33:39.044 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:39.044 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:33:39.044 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:39.044 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:39.044 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:39.044 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:39.044 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:39.044 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:39.044 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.044 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.044 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.044 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:39.044 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:39.044 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:39.044 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:39.044 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:39.044 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:39.044 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:39.044 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:39.044 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:39.044 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:39.044 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:39.044 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:39.044 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.044 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.044 nvme0n1 00:33:39.045 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.303 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:39.303 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:39.303 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.303 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.303 
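[editor's note] The host/auth.sh@100-104 markers show the overall shape of this part of the test: three nested loops over digests, DH groups, and key indices, with a target-side key update followed by a full connect/verify/detach cycle for every combination. A minimal sketch of that structure, limited to the values this excerpt actually exercises (the script's real arrays may contain more entries):

  # Sketch of the loop implied by host/auth.sh@100-104 in the trace.
  digests=(sha256 sha384)                          # only these digests appear in this excerpt
  dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe8192)
  for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
          for keyid in "${!keys[@]}"; do           # keys[] holds the DHHC-1 secrets, keyids 0..4 here
              nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
              connect_authenticate "$digest" "$dhgroup" "$keyid"
          done
      done
  done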
11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.303 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:39.303 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:39.303 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.303 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.303 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.303 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:39.303 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:39.303 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:33:39.303 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:39.303 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:39.303 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:39.303 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:39.303 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTI1ODRiOWRlYTAxZGM5ZjdhMTM2YTc5NzUyN2UyMWZPGVg5: 00:33:39.303 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzVhMTE5MGYwNWIzYjgxYmIxMDE3MzEyNmNmZDIzYjYyODQzNDNjMzUyNmViMTJiNDI5NGM5NTZhMDNiYTg1MQfNTuQ=: 00:33:39.303 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:39.303 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:39.303 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTI1ODRiOWRlYTAxZGM5ZjdhMTM2YTc5NzUyN2UyMWZPGVg5: 00:33:39.303 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzVhMTE5MGYwNWIzYjgxYmIxMDE3MzEyNmNmZDIzYjYyODQzNDNjMzUyNmViMTJiNDI5NGM5NTZhMDNiYTg1MQfNTuQ=: ]] 00:33:39.303 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzVhMTE5MGYwNWIzYjgxYmIxMDE3MzEyNmNmZDIzYjYyODQzNDNjMzUyNmViMTJiNDI5NGM5NTZhMDNiYTg1MQfNTuQ=: 00:33:39.303 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:33:39.303 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:39.303 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:39.303 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:39.303 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:39.303 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:39.303 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:39.303 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.303 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.303 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:33:39.303 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:39.303 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:39.303 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:39.303 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:39.303 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:39.303 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:39.303 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:39.303 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:39.303 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:39.303 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:39.303 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:39.303 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:39.303 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.303 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.563 nvme0n1 00:33:39.563 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.563 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:39.563 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:39.563 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.563 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.563 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.563 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:39.563 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:39.563 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.563 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.563 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.563 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:39.563 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:33:39.563 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:39.563 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:39.563 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:39.563 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:39.563 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NTA1NzNmOTg1ZjM1MDRmZjAwYTg2MzcxZGJhM2MwZjM5OTE0ZDllMWVlMzdiYWFkdMxfqA==: 00:33:39.563 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjBlNDIwMjgzZDYxMjlkOGQ0NmFlMGVkOTk2M2JhMDlkZDU5NjE0YjM3YzcxMmNkZaofZw==: 00:33:39.563 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:39.563 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:39.563 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA1NzNmOTg1ZjM1MDRmZjAwYTg2MzcxZGJhM2MwZjM5OTE0ZDllMWVlMzdiYWFkdMxfqA==: 00:33:39.563 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjBlNDIwMjgzZDYxMjlkOGQ0NmFlMGVkOTk2M2JhMDlkZDU5NjE0YjM3YzcxMmNkZaofZw==: ]] 00:33:39.563 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjBlNDIwMjgzZDYxMjlkOGQ0NmFlMGVkOTk2M2JhMDlkZDU5NjE0YjM3YzcxMmNkZaofZw==: 00:33:39.563 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:33:39.563 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:39.563 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:39.563 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:39.563 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:39.563 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:39.563 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:39.563 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.563 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.563 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.563 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:39.563 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:39.563 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:39.563 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:39.563 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:39.563 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:39.563 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:39.563 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:39.563 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:39.563 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:39.563 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:39.563 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:39.563 11:45:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.563 11:45:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.824 nvme0n1 00:33:39.824 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.824 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:39.824 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:39.824 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.824 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.824 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.824 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:39.824 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:39.824 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.824 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.824 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.824 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:39.824 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:33:39.824 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:39.824 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:39.824 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:39.824 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:39.824 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjE3YTBjNDc2YjE5MzU4Y2NkZmQwZmRkNzY3ZmYxYjVdFqkp: 00:33:39.824 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmRkODU5NzYyOTllOTllZjUwOGExNDQyYzhkMTU1ODjy9hNE: 00:33:39.824 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:39.824 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:39.824 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjE3YTBjNDc2YjE5MzU4Y2NkZmQwZmRkNzY3ZmYxYjVdFqkp: 00:33:39.824 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmRkODU5NzYyOTllOTllZjUwOGExNDQyYzhkMTU1ODjy9hNE: ]] 00:33:39.824 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmRkODU5NzYyOTllOTllZjUwOGExNDQyYzhkMTU1ODjy9hNE: 00:33:39.824 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:33:39.824 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:39.824 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:39.824 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:39.824 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:39.824 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:39.824 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:39.824 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.824 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.824 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.824 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:39.824 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:39.824 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:39.824 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:39.824 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:39.824 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:39.824 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:39.824 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:39.824 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:39.824 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:39.824 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:39.824 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:39.824 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.825 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.082 nvme0n1 00:33:40.082 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:40.082 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:40.082 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:40.082 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.082 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:40.342 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:40.342 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:40.342 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:40.342 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:40.342 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.342 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:40.342 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:40.342 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:33:40.342 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:40.342 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:40.342 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:40.342 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:40.342 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmFiNzNiODBlNDhjODJlMTY5OWY4YWViM2FjM2Q5OGFlYjc2MDlhOWJkZmZmOTRlTPB8Fw==: 00:33:40.342 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2MyMmYyNzMxYzlhZWM1YjIwY2ZkZWRkZmMyMTQ2ODRk+oUL: 00:33:40.342 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:40.342 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:40.342 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmFiNzNiODBlNDhjODJlMTY5OWY4YWViM2FjM2Q5OGFlYjc2MDlhOWJkZmZmOTRlTPB8Fw==: 00:33:40.342 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2MyMmYyNzMxYzlhZWM1YjIwY2ZkZWRkZmMyMTQ2ODRk+oUL: ]] 00:33:40.342 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2MyMmYyNzMxYzlhZWM1YjIwY2ZkZWRkZmMyMTQ2ODRk+oUL: 00:33:40.342 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:33:40.342 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:40.342 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:40.342 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:40.342 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:40.342 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:40.342 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:40.342 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:40.342 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.342 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:40.342 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:40.342 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:40.342 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:40.342 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:40.342 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:40.342 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:40.342 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:40.342 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:40.342 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:40.342 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:40.342 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:40.342 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:40.342 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:40.342 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.602 nvme0n1 00:33:40.603 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:40.603 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:40.603 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:40.603 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.603 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:40.603 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:40.603 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:40.603 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:40.603 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:40.603 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.603 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:40.603 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:40.603 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:33:40.603 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:40.603 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:40.603 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:40.603 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:40.603 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODZmMDQ0Yzg0ZTg5ZWIzM2Y4Y2Y4MDdiMDRlMWE5MDhjMDYwZTI4YWNkOTJhZTg1NzhlYjdmNGNjMzVhNDkwOd2VR+Q=: 00:33:40.603 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:40.603 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:40.603 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:40.603 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODZmMDQ0Yzg0ZTg5ZWIzM2Y4Y2Y4MDdiMDRlMWE5MDhjMDYwZTI4YWNkOTJhZTg1NzhlYjdmNGNjMzVhNDkwOd2VR+Q=: 00:33:40.603 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:40.603 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:33:40.603 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:40.603 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:40.603 11:45:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:40.603 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:40.603 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:40.603 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:40.603 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:40.603 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.603 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:40.603 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:40.603 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:40.603 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:40.603 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:40.603 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:40.603 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:40.603 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:40.603 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:40.603 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:40.603 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:40.603 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:40.603 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:40.603 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:40.603 11:45:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.862 nvme0n1 00:33:40.862 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:40.862 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:40.862 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:40.862 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:40.862 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.862 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:40.862 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:40.862 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:40.862 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:40.862 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.862 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:40.862 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:40.862 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:40.862 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:33:40.862 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:40.862 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:40.862 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:40.862 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:40.862 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTI1ODRiOWRlYTAxZGM5ZjdhMTM2YTc5NzUyN2UyMWZPGVg5: 00:33:40.862 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzVhMTE5MGYwNWIzYjgxYmIxMDE3MzEyNmNmZDIzYjYyODQzNDNjMzUyNmViMTJiNDI5NGM5NTZhMDNiYTg1MQfNTuQ=: 00:33:40.862 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:40.862 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:40.862 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTI1ODRiOWRlYTAxZGM5ZjdhMTM2YTc5NzUyN2UyMWZPGVg5: 00:33:40.862 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzVhMTE5MGYwNWIzYjgxYmIxMDE3MzEyNmNmZDIzYjYyODQzNDNjMzUyNmViMTJiNDI5NGM5NTZhMDNiYTg1MQfNTuQ=: ]] 00:33:40.863 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzVhMTE5MGYwNWIzYjgxYmIxMDE3MzEyNmNmZDIzYjYyODQzNDNjMzUyNmViMTJiNDI5NGM5NTZhMDNiYTg1MQfNTuQ=: 00:33:40.863 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:33:40.863 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:40.863 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:40.863 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:40.863 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:40.863 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:40.863 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:40.863 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:40.863 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:41.120 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:41.120 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:41.120 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:41.120 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:41.120 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:41.120 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:41.120 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:41.120 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:41.120 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:41.120 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:41.120 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:41.120 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:41.120 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:41.120 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:41.120 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:41.686 nvme0n1 00:33:41.686 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:41.686 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:41.686 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:41.686 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:41.686 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:41.686 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:41.686 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:41.686 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:41.686 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:41.686 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:41.686 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:41.686 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:41.686 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:33:41.686 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:41.686 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:41.686 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:41.686 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:41.686 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA1NzNmOTg1ZjM1MDRmZjAwYTg2MzcxZGJhM2MwZjM5OTE0ZDllMWVlMzdiYWFkdMxfqA==: 00:33:41.686 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjBlNDIwMjgzZDYxMjlkOGQ0NmFlMGVkOTk2M2JhMDlkZDU5NjE0YjM3YzcxMmNkZaofZw==: 00:33:41.686 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:41.686 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:41.686 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NTA1NzNmOTg1ZjM1MDRmZjAwYTg2MzcxZGJhM2MwZjM5OTE0ZDllMWVlMzdiYWFkdMxfqA==: 00:33:41.686 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjBlNDIwMjgzZDYxMjlkOGQ0NmFlMGVkOTk2M2JhMDlkZDU5NjE0YjM3YzcxMmNkZaofZw==: ]] 00:33:41.686 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjBlNDIwMjgzZDYxMjlkOGQ0NmFlMGVkOTk2M2JhMDlkZDU5NjE0YjM3YzcxMmNkZaofZw==: 00:33:41.686 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:33:41.686 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:41.686 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:41.686 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:41.686 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:41.686 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:41.686 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:41.687 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:41.687 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:41.687 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:41.687 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:41.687 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:41.687 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:41.687 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:41.687 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:41.687 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:41.687 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:41.687 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:41.687 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:41.687 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:41.687 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:41.687 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:41.687 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:41.687 11:45:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:42.255 nvme0n1 00:33:42.255 11:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:42.255 11:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:42.255 11:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:42.255 11:45:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:42.255 11:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:42.255 11:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:42.255 11:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:42.255 11:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:42.255 11:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:42.255 11:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:42.255 11:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:42.255 11:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:42.255 11:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:33:42.255 11:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:42.255 11:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:42.255 11:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:42.255 11:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:42.255 11:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjE3YTBjNDc2YjE5MzU4Y2NkZmQwZmRkNzY3ZmYxYjVdFqkp: 00:33:42.255 11:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmRkODU5NzYyOTllOTllZjUwOGExNDQyYzhkMTU1ODjy9hNE: 00:33:42.255 11:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:42.255 11:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:42.255 11:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjE3YTBjNDc2YjE5MzU4Y2NkZmQwZmRkNzY3ZmYxYjVdFqkp: 00:33:42.255 11:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmRkODU5NzYyOTllOTllZjUwOGExNDQyYzhkMTU1ODjy9hNE: ]] 00:33:42.255 11:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmRkODU5NzYyOTllOTllZjUwOGExNDQyYzhkMTU1ODjy9hNE: 00:33:42.255 11:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:33:42.255 11:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:42.255 11:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:42.255 11:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:42.255 11:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:42.255 11:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:42.255 11:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:42.255 11:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:42.255 11:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:42.255 11:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:42.255 11:45:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:42.255 11:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:42.255 11:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:42.255 11:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:42.255 11:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:42.255 11:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:42.255 11:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:42.255 11:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:42.255 11:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:42.255 11:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:42.255 11:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:42.255 11:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:42.255 11:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:42.255 11:45:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:42.824 nvme0n1 00:33:42.824 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:42.824 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:42.824 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:42.824 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:42.824 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:42.824 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:42.824 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:42.824 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:42.824 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:42.824 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:42.824 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:42.824 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:42.824 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:33:42.824 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:42.824 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:42.824 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:42.824 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:42.824 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZmFiNzNiODBlNDhjODJlMTY5OWY4YWViM2FjM2Q5OGFlYjc2MDlhOWJkZmZmOTRlTPB8Fw==: 00:33:42.824 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2MyMmYyNzMxYzlhZWM1YjIwY2ZkZWRkZmMyMTQ2ODRk+oUL: 00:33:42.824 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:42.824 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:42.824 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmFiNzNiODBlNDhjODJlMTY5OWY4YWViM2FjM2Q5OGFlYjc2MDlhOWJkZmZmOTRlTPB8Fw==: 00:33:42.824 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2MyMmYyNzMxYzlhZWM1YjIwY2ZkZWRkZmMyMTQ2ODRk+oUL: ]] 00:33:42.824 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2MyMmYyNzMxYzlhZWM1YjIwY2ZkZWRkZmMyMTQ2ODRk+oUL: 00:33:42.824 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:33:42.824 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:42.824 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:42.824 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:42.824 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:42.824 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:42.824 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:42.824 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:42.824 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:42.824 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:42.824 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:42.824 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:42.824 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:42.824 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:42.824 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:42.824 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:42.824 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:42.824 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:42.824 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:42.824 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:42.824 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:42.824 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:42.824 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:42.824 
11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:43.392 nvme0n1 00:33:43.392 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:43.392 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:43.392 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.392 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:43.392 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:43.392 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:43.392 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:43.392 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:43.392 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.392 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:43.392 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:43.392 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:43.392 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:33:43.392 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:43.392 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:43.392 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:43.392 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:43.392 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODZmMDQ0Yzg0ZTg5ZWIzM2Y4Y2Y4MDdiMDRlMWE5MDhjMDYwZTI4YWNkOTJhZTg1NzhlYjdmNGNjMzVhNDkwOd2VR+Q=: 00:33:43.392 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:43.392 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:43.392 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:43.392 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODZmMDQ0Yzg0ZTg5ZWIzM2Y4Y2Y4MDdiMDRlMWE5MDhjMDYwZTI4YWNkOTJhZTg1NzhlYjdmNGNjMzVhNDkwOd2VR+Q=: 00:33:43.392 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:43.392 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:33:43.392 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:43.392 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:43.392 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:43.392 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:43.392 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:43.392 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:43.392 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.392 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:43.392 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:43.392 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:43.392 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:43.392 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:43.392 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:43.392 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:43.392 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:43.392 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:43.392 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:43.392 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:43.392 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:43.392 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:43.392 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:43.392 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.392 11:45:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:43.958 nvme0n1 00:33:43.958 11:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:43.958 11:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:43.958 11:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.958 11:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:43.958 11:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:43.958 11:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:43.958 11:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:43.958 11:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:43.959 11:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.959 11:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:43.959 11:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:43.959 11:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:43.959 11:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:43.959 11:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:33:43.959 11:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:43.959 11:45:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:43.959 11:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:43.959 11:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:43.959 11:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTI1ODRiOWRlYTAxZGM5ZjdhMTM2YTc5NzUyN2UyMWZPGVg5: 00:33:43.959 11:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzVhMTE5MGYwNWIzYjgxYmIxMDE3MzEyNmNmZDIzYjYyODQzNDNjMzUyNmViMTJiNDI5NGM5NTZhMDNiYTg1MQfNTuQ=: 00:33:43.959 11:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:43.959 11:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:43.959 11:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTI1ODRiOWRlYTAxZGM5ZjdhMTM2YTc5NzUyN2UyMWZPGVg5: 00:33:43.959 11:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzVhMTE5MGYwNWIzYjgxYmIxMDE3MzEyNmNmZDIzYjYyODQzNDNjMzUyNmViMTJiNDI5NGM5NTZhMDNiYTg1MQfNTuQ=: ]] 00:33:43.959 11:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzVhMTE5MGYwNWIzYjgxYmIxMDE3MzEyNmNmZDIzYjYyODQzNDNjMzUyNmViMTJiNDI5NGM5NTZhMDNiYTg1MQfNTuQ=: 00:33:43.959 11:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:33:43.959 11:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:43.959 11:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:43.959 11:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:43.959 11:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:43.959 11:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:43.959 11:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:43.959 11:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.959 11:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:43.959 11:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:43.959 11:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:43.959 11:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:43.959 11:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:43.959 11:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:43.959 11:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:43.959 11:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:43.959 11:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:43.959 11:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:43.959 11:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:43.959 11:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:43.959 11:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:43.959 11:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:43.959 11:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.959 11:45:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.896 nvme0n1 00:33:44.896 11:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:44.896 11:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:44.896 11:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:44.896 11:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.896 11:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:44.896 11:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:45.154 11:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:45.154 11:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:45.154 11:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:45.154 11:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.155 11:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:45.155 11:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:45.155 11:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:33:45.155 11:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:45.155 11:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:45.155 11:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:45.155 11:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:45.155 11:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA1NzNmOTg1ZjM1MDRmZjAwYTg2MzcxZGJhM2MwZjM5OTE0ZDllMWVlMzdiYWFkdMxfqA==: 00:33:45.155 11:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjBlNDIwMjgzZDYxMjlkOGQ0NmFlMGVkOTk2M2JhMDlkZDU5NjE0YjM3YzcxMmNkZaofZw==: 00:33:45.155 11:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:45.155 11:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:45.155 11:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA1NzNmOTg1ZjM1MDRmZjAwYTg2MzcxZGJhM2MwZjM5OTE0ZDllMWVlMzdiYWFkdMxfqA==: 00:33:45.155 11:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjBlNDIwMjgzZDYxMjlkOGQ0NmFlMGVkOTk2M2JhMDlkZDU5NjE0YjM3YzcxMmNkZaofZw==: ]] 00:33:45.155 11:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjBlNDIwMjgzZDYxMjlkOGQ0NmFlMGVkOTk2M2JhMDlkZDU5NjE0YjM3YzcxMmNkZaofZw==: 00:33:45.155 11:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:33:45.155 11:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:45.155 11:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:45.155 11:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:45.155 11:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:45.155 11:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:45.155 11:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:45.155 11:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:45.155 11:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.155 11:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:45.155 11:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:45.155 11:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:45.155 11:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:45.155 11:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:45.155 11:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:45.155 11:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:45.155 11:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:45.155 11:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:45.155 11:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:45.155 11:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:45.155 11:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:45.155 11:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:45.155 11:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:45.155 11:45:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.089 nvme0n1 00:33:46.089 11:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.089 11:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:46.089 11:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:46.089 11:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.089 11:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.089 11:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.089 11:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:46.089 11:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:46.089 11:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:33:46.089 11:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.089 11:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.089 11:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:46.089 11:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:33:46.089 11:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:46.089 11:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:46.089 11:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:46.089 11:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:46.089 11:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjE3YTBjNDc2YjE5MzU4Y2NkZmQwZmRkNzY3ZmYxYjVdFqkp: 00:33:46.089 11:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmRkODU5NzYyOTllOTllZjUwOGExNDQyYzhkMTU1ODjy9hNE: 00:33:46.089 11:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:46.089 11:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:46.089 11:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjE3YTBjNDc2YjE5MzU4Y2NkZmQwZmRkNzY3ZmYxYjVdFqkp: 00:33:46.089 11:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmRkODU5NzYyOTllOTllZjUwOGExNDQyYzhkMTU1ODjy9hNE: ]] 00:33:46.089 11:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmRkODU5NzYyOTllOTllZjUwOGExNDQyYzhkMTU1ODjy9hNE: 00:33:46.089 11:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:33:46.089 11:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:46.089 11:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:46.089 11:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:46.089 11:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:46.089 11:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:46.089 11:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:46.089 11:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.089 11:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.089 11:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.089 11:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:46.089 11:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:46.089 11:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:46.089 11:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:46.089 11:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:46.089 11:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:46.089 
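[editor's note] The nvmet_auth_set_key helper traced above provisions the kernel nvmet target with the DH-HMAC-CHAP material for one key index before each connect attempt. The trace only shows the values being echoed (the 'hmac(shaXXX)' string, the FFDHE group, and the DHHC-1 secrets), not their destinations, so the following is a minimal sketch under the assumption that they land in the standard Linux nvmet configfs host attributes; $hostnqn and the keys/ckeys arrays are defined elsewhere in auth.sh and are not shown in this excerpt.

    # Sketch only -- destination paths are an assumption, not visible in the trace.
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]}
        local host=/sys/kernel/config/nvmet/hosts/$hostnqn   # assumed configfs location

        echo "hmac($digest)" > "$host/dhchap_hash"      # e.g. hmac(sha384)
        echo "$dhgroup"      > "$host/dhchap_dhgroup"   # e.g. ffdhe8192
        echo "$key"          > "$host/dhchap_key"       # DHHC-1:xx:... host secret
        # Controller secret is set only when bidirectional auth is being tested.
        [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"
    }
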
11:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:46.089 11:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:46.089 11:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:46.089 11:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:46.089 11:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:46.089 11:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:46.089 11:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.089 11:45:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.029 nvme0n1 00:33:47.029 11:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.029 11:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:47.029 11:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.029 11:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.029 11:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:47.029 11:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.029 11:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:47.029 11:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:47.029 11:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.029 11:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.029 11:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.029 11:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:47.029 11:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:33:47.029 11:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:47.029 11:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:47.029 11:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:47.029 11:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:47.029 11:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmFiNzNiODBlNDhjODJlMTY5OWY4YWViM2FjM2Q5OGFlYjc2MDlhOWJkZmZmOTRlTPB8Fw==: 00:33:47.029 11:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2MyMmYyNzMxYzlhZWM1YjIwY2ZkZWRkZmMyMTQ2ODRk+oUL: 00:33:47.029 11:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:47.029 11:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:47.029 11:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmFiNzNiODBlNDhjODJlMTY5OWY4YWViM2FjM2Q5OGFlYjc2MDlhOWJkZmZmOTRlTPB8Fw==: 00:33:47.029 11:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:M2MyMmYyNzMxYzlhZWM1YjIwY2ZkZWRkZmMyMTQ2ODRk+oUL: ]] 00:33:47.029 11:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2MyMmYyNzMxYzlhZWM1YjIwY2ZkZWRkZmMyMTQ2ODRk+oUL: 00:33:47.029 11:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:33:47.029 11:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:47.029 11:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:47.029 11:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:47.029 11:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:47.029 11:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:47.029 11:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:47.029 11:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.029 11:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.029 11:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.029 11:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:47.029 11:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:47.029 11:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:47.029 11:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:47.029 11:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:47.029 11:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:47.029 11:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:47.029 11:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:47.029 11:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:47.029 11:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:47.029 11:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:47.029 11:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:47.029 11:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.029 11:45:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.965 nvme0n1 00:33:47.965 11:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.965 11:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:47.965 11:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.965 11:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:47.965 11:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.965 11:45:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.965 11:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:47.965 11:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:47.965 11:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.965 11:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.965 11:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.965 11:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:47.965 11:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:33:47.965 11:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:47.965 11:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:47.965 11:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:47.965 11:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:47.965 11:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODZmMDQ0Yzg0ZTg5ZWIzM2Y4Y2Y4MDdiMDRlMWE5MDhjMDYwZTI4YWNkOTJhZTg1NzhlYjdmNGNjMzVhNDkwOd2VR+Q=: 00:33:47.965 11:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:47.965 11:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:47.965 11:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:47.965 11:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODZmMDQ0Yzg0ZTg5ZWIzM2Y4Y2Y4MDdiMDRlMWE5MDhjMDYwZTI4YWNkOTJhZTg1NzhlYjdmNGNjMzVhNDkwOd2VR+Q=: 00:33:47.965 11:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:47.965 11:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:33:47.965 11:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:47.965 11:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:47.965 11:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:47.965 11:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:47.965 11:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:47.965 11:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:47.965 11:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.965 11:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.965 11:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.965 11:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:47.965 11:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:47.965 11:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:47.965 11:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:47.965 11:45:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:47.965 11:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:47.965 11:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:47.965 11:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:47.965 11:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:47.965 11:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:47.965 11:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:47.965 11:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:47.965 11:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.965 11:45:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:48.900 nvme0n1 00:33:48.900 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:48.900 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:48.900 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:48.900 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:48.900 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:48.900 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:49.159 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:49.159 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:49.159 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:49.159 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.159 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:49.159 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:33:49.159 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:49.159 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:49.159 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:33:49.159 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:49.159 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:49.159 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:49.159 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:49.159 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTI1ODRiOWRlYTAxZGM5ZjdhMTM2YTc5NzUyN2UyMWZPGVg5: 00:33:49.159 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NzVhMTE5MGYwNWIzYjgxYmIxMDE3MzEyNmNmZDIzYjYyODQzNDNjMzUyNmViMTJiNDI5NGM5NTZhMDNiYTg1MQfNTuQ=: 00:33:49.159 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:49.159 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:49.159 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTI1ODRiOWRlYTAxZGM5ZjdhMTM2YTc5NzUyN2UyMWZPGVg5: 00:33:49.159 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzVhMTE5MGYwNWIzYjgxYmIxMDE3MzEyNmNmZDIzYjYyODQzNDNjMzUyNmViMTJiNDI5NGM5NTZhMDNiYTg1MQfNTuQ=: ]] 00:33:49.159 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzVhMTE5MGYwNWIzYjgxYmIxMDE3MzEyNmNmZDIzYjYyODQzNDNjMzUyNmViMTJiNDI5NGM5NTZhMDNiYTg1MQfNTuQ=: 00:33:49.159 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:33:49.159 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:49.159 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:49.159 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:49.159 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:49.159 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:49.159 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:49.159 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:49.159 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.159 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:49.159 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:49.159 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:49.159 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:49.159 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:49.159 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:49.159 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:49.159 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:49.159 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:49.159 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:49.159 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:49.159 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:49.159 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:49.159 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:49.159 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:33:49.159 nvme0n1 00:33:49.159 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:49.159 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:49.159 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:49.159 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.159 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:49.159 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:49.159 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:49.159 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:49.159 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:49.159 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.419 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:49.419 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:49.419 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:33:49.419 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:49.419 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:49.419 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:49.419 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:49.419 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA1NzNmOTg1ZjM1MDRmZjAwYTg2MzcxZGJhM2MwZjM5OTE0ZDllMWVlMzdiYWFkdMxfqA==: 00:33:49.419 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjBlNDIwMjgzZDYxMjlkOGQ0NmFlMGVkOTk2M2JhMDlkZDU5NjE0YjM3YzcxMmNkZaofZw==: 00:33:49.419 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:49.419 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:49.419 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA1NzNmOTg1ZjM1MDRmZjAwYTg2MzcxZGJhM2MwZjM5OTE0ZDllMWVlMzdiYWFkdMxfqA==: 00:33:49.419 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjBlNDIwMjgzZDYxMjlkOGQ0NmFlMGVkOTk2M2JhMDlkZDU5NjE0YjM3YzcxMmNkZaofZw==: ]] 00:33:49.419 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjBlNDIwMjgzZDYxMjlkOGQ0NmFlMGVkOTk2M2JhMDlkZDU5NjE0YjM3YzcxMmNkZaofZw==: 00:33:49.419 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:33:49.420 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:49.420 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:49.420 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:49.420 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:49.420 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:33:49.420 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:49.420 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:49.420 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.420 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:49.420 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:49.420 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:49.420 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:49.420 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:49.420 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:49.420 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:49.420 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:49.420 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:49.420 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:49.420 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:49.420 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:49.420 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:49.420 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:49.420 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.420 nvme0n1 00:33:49.420 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:49.420 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:49.420 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:49.420 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.420 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:49.420 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:49.420 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:49.420 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:49.420 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:49.420 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.420 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:49.420 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:49.420 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:33:49.420 
11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:49.420 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:49.420 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:49.420 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:49.420 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjE3YTBjNDc2YjE5MzU4Y2NkZmQwZmRkNzY3ZmYxYjVdFqkp: 00:33:49.420 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmRkODU5NzYyOTllOTllZjUwOGExNDQyYzhkMTU1ODjy9hNE: 00:33:49.420 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:49.420 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:49.420 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjE3YTBjNDc2YjE5MzU4Y2NkZmQwZmRkNzY3ZmYxYjVdFqkp: 00:33:49.420 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmRkODU5NzYyOTllOTllZjUwOGExNDQyYzhkMTU1ODjy9hNE: ]] 00:33:49.420 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmRkODU5NzYyOTllOTllZjUwOGExNDQyYzhkMTU1ODjy9hNE: 00:33:49.420 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:33:49.420 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:49.420 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:49.420 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:49.420 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:49.420 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:49.420 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:49.420 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:49.420 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.420 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:49.420 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:49.420 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:49.420 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:49.420 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:49.420 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:49.420 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:49.420 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:49.420 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:49.420 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:49.420 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:49.420 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:49.420 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:49.420 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:49.420 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.679 nvme0n1 00:33:49.679 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:49.679 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:49.679 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:49.679 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:49.679 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.679 11:45:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:49.679 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:49.679 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:49.679 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:49.679 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.679 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:49.679 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:49.679 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:33:49.679 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:49.679 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:49.679 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:49.679 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:49.679 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmFiNzNiODBlNDhjODJlMTY5OWY4YWViM2FjM2Q5OGFlYjc2MDlhOWJkZmZmOTRlTPB8Fw==: 00:33:49.679 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2MyMmYyNzMxYzlhZWM1YjIwY2ZkZWRkZmMyMTQ2ODRk+oUL: 00:33:49.679 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:49.679 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:49.679 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmFiNzNiODBlNDhjODJlMTY5OWY4YWViM2FjM2Q5OGFlYjc2MDlhOWJkZmZmOTRlTPB8Fw==: 00:33:49.679 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2MyMmYyNzMxYzlhZWM1YjIwY2ZkZWRkZmMyMTQ2ODRk+oUL: ]] 00:33:49.679 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2MyMmYyNzMxYzlhZWM1YjIwY2ZkZWRkZmMyMTQ2ODRk+oUL: 00:33:49.680 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:33:49.680 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:49.680 
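[editor's note] Each connect_authenticate pass in this trace reconfigures SPDK's bdev_nvme module for the digest/DH-group under test, attaches a controller over TCP with the matching DH-HMAC-CHAP key names, checks that it registered as nvme0, and detaches it again. A condensed sketch of one such pass (sha512/ffdhe2048, key index 2, which completes just above) using only the RPCs visible in the trace; rpc_cmd is assumed to be the suite's usual wrapper around SPDK's rpc.py, and the NQNs, address, and key names are taken verbatim from the log.

    digest=sha512 dhgroup=ffdhe2048 keyid=2

    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Attach over TCP; the controller key is passed only when one exists,
    # mirroring the ${ckeys[keyid]:+--dhchap-ctrlr-key ...} expansion in the trace.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

    # Authentication succeeded if the controller shows up under the expected name.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

    rpc_cmd bdev_nvme_detach_controller nvme0
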
11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:49.680 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:49.680 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:49.680 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:49.680 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:49.680 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:49.680 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.680 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:49.680 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:49.680 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:49.680 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:49.680 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:49.680 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:49.680 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:49.680 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:49.680 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:49.680 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:49.680 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:49.680 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:49.680 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:49.680 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:49.680 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.939 nvme0n1 00:33:49.939 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:49.939 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:49.939 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:49.939 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:49.939 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.939 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:49.939 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:49.939 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:49.939 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:49.939 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:33:49.939 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:49.939 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:49.939 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:33:49.939 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:49.939 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:49.939 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:49.939 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:49.939 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODZmMDQ0Yzg0ZTg5ZWIzM2Y4Y2Y4MDdiMDRlMWE5MDhjMDYwZTI4YWNkOTJhZTg1NzhlYjdmNGNjMzVhNDkwOd2VR+Q=: 00:33:49.939 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:49.939 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:49.939 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:49.939 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODZmMDQ0Yzg0ZTg5ZWIzM2Y4Y2Y4MDdiMDRlMWE5MDhjMDYwZTI4YWNkOTJhZTg1NzhlYjdmNGNjMzVhNDkwOd2VR+Q=: 00:33:49.939 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:49.939 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:33:49.939 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:49.939 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:49.939 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:49.939 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:49.939 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:49.939 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:49.939 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:49.939 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.939 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:49.939 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:49.939 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:49.939 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:49.939 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:49.939 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:49.939 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:49.939 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:49.939 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:49.939 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:49.939 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:49.939 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:49.939 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:49.939 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:49.939 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.199 nvme0n1 00:33:50.199 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:50.199 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:50.199 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:50.199 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:50.199 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.199 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:50.199 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:50.199 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:50.199 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:50.199 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.199 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:50.199 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:50.199 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:50.199 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:33:50.199 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:50.199 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:50.199 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:50.199 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:50.199 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTI1ODRiOWRlYTAxZGM5ZjdhMTM2YTc5NzUyN2UyMWZPGVg5: 00:33:50.199 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzVhMTE5MGYwNWIzYjgxYmIxMDE3MzEyNmNmZDIzYjYyODQzNDNjMzUyNmViMTJiNDI5NGM5NTZhMDNiYTg1MQfNTuQ=: 00:33:50.199 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:50.199 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:50.199 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTI1ODRiOWRlYTAxZGM5ZjdhMTM2YTc5NzUyN2UyMWZPGVg5: 00:33:50.199 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzVhMTE5MGYwNWIzYjgxYmIxMDE3MzEyNmNmZDIzYjYyODQzNDNjMzUyNmViMTJiNDI5NGM5NTZhMDNiYTg1MQfNTuQ=: ]] 00:33:50.199 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NzVhMTE5MGYwNWIzYjgxYmIxMDE3MzEyNmNmZDIzYjYyODQzNDNjMzUyNmViMTJiNDI5NGM5NTZhMDNiYTg1MQfNTuQ=: 00:33:50.199 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:33:50.199 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:50.199 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:50.199 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:50.199 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:50.199 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:50.199 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:50.199 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:50.199 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.199 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:50.199 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:50.199 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:50.199 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:50.199 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:50.199 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:50.199 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:50.199 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:50.199 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:50.199 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:50.199 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:50.199 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:50.199 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:50.199 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:50.199 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.477 nvme0n1 00:33:50.477 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:50.477 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:50.477 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:50.477 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:50.477 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.477 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:50.477 
11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:50.477 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:50.477 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:50.477 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.477 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:50.477 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:50.477 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:33:50.477 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:50.477 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:50.477 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:50.477 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:50.477 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA1NzNmOTg1ZjM1MDRmZjAwYTg2MzcxZGJhM2MwZjM5OTE0ZDllMWVlMzdiYWFkdMxfqA==: 00:33:50.477 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjBlNDIwMjgzZDYxMjlkOGQ0NmFlMGVkOTk2M2JhMDlkZDU5NjE0YjM3YzcxMmNkZaofZw==: 00:33:50.477 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:50.477 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:50.477 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA1NzNmOTg1ZjM1MDRmZjAwYTg2MzcxZGJhM2MwZjM5OTE0ZDllMWVlMzdiYWFkdMxfqA==: 00:33:50.477 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjBlNDIwMjgzZDYxMjlkOGQ0NmFlMGVkOTk2M2JhMDlkZDU5NjE0YjM3YzcxMmNkZaofZw==: ]] 00:33:50.477 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjBlNDIwMjgzZDYxMjlkOGQ0NmFlMGVkOTk2M2JhMDlkZDU5NjE0YjM3YzcxMmNkZaofZw==: 00:33:50.477 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:33:50.477 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:50.477 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:50.477 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:50.477 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:50.477 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:50.477 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:50.477 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:50.477 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.477 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:50.477 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:50.477 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:50.477 11:45:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:50.477 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:50.477 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:50.477 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:50.477 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:50.477 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:50.477 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:50.477 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:50.477 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:50.477 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:50.477 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:50.477 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.756 nvme0n1 00:33:50.756 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:50.756 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:50.756 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:50.756 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:50.756 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.756 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:50.756 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:50.756 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:50.756 11:45:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:50.756 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.756 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:50.756 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:50.756 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:33:50.756 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:50.756 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:50.756 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:50.756 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:50.756 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjE3YTBjNDc2YjE5MzU4Y2NkZmQwZmRkNzY3ZmYxYjVdFqkp: 00:33:50.756 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmRkODU5NzYyOTllOTllZjUwOGExNDQyYzhkMTU1ODjy9hNE: 00:33:50.756 11:45:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:50.756 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:50.756 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjE3YTBjNDc2YjE5MzU4Y2NkZmQwZmRkNzY3ZmYxYjVdFqkp: 00:33:50.756 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmRkODU5NzYyOTllOTllZjUwOGExNDQyYzhkMTU1ODjy9hNE: ]] 00:33:50.756 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmRkODU5NzYyOTllOTllZjUwOGExNDQyYzhkMTU1ODjy9hNE: 00:33:50.756 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:33:50.756 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:50.756 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:50.756 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:50.756 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:50.756 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:50.756 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:50.756 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:50.756 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.756 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:50.756 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:50.756 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:50.756 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:50.756 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:50.756 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:50.756 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:50.756 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:50.756 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:50.756 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:50.756 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:50.756 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:50.756 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:50.756 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:50.756 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.021 nvme0n1 00:33:51.021 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:51.021 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:51.021 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:51.021 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:51.021 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.021 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:51.021 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:51.021 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:51.022 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:51.022 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.022 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:51.022 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:51.022 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:33:51.022 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:51.022 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:51.022 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:51.022 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:51.022 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmFiNzNiODBlNDhjODJlMTY5OWY4YWViM2FjM2Q5OGFlYjc2MDlhOWJkZmZmOTRlTPB8Fw==: 00:33:51.022 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2MyMmYyNzMxYzlhZWM1YjIwY2ZkZWRkZmMyMTQ2ODRk+oUL: 00:33:51.022 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:51.022 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:51.022 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmFiNzNiODBlNDhjODJlMTY5OWY4YWViM2FjM2Q5OGFlYjc2MDlhOWJkZmZmOTRlTPB8Fw==: 00:33:51.022 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2MyMmYyNzMxYzlhZWM1YjIwY2ZkZWRkZmMyMTQ2ODRk+oUL: ]] 00:33:51.022 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2MyMmYyNzMxYzlhZWM1YjIwY2ZkZWRkZmMyMTQ2ODRk+oUL: 00:33:51.022 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:33:51.022 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:51.022 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:51.022 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:51.022 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:51.022 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:51.022 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:51.022 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:51.022 11:45:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.022 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:51.022 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:51.022 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:51.022 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:51.022 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:51.022 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:51.022 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:51.022 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:51.022 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:51.022 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:51.022 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:51.022 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:51.022 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:51.022 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:51.022 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.282 nvme0n1 00:33:51.282 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:51.282 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:51.282 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:51.282 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:51.282 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.282 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:51.282 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:51.282 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:51.282 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:51.282 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.282 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:51.282 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:51.282 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:33:51.282 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:51.282 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:51.282 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:51.282 
11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:51.282 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODZmMDQ0Yzg0ZTg5ZWIzM2Y4Y2Y4MDdiMDRlMWE5MDhjMDYwZTI4YWNkOTJhZTg1NzhlYjdmNGNjMzVhNDkwOd2VR+Q=: 00:33:51.282 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:51.282 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:51.282 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:51.282 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODZmMDQ0Yzg0ZTg5ZWIzM2Y4Y2Y4MDdiMDRlMWE5MDhjMDYwZTI4YWNkOTJhZTg1NzhlYjdmNGNjMzVhNDkwOd2VR+Q=: 00:33:51.282 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:51.282 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:33:51.282 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:51.282 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:51.282 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:51.282 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:51.282 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:51.282 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:51.282 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:51.282 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.283 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:51.283 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:51.283 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:51.283 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:51.283 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:51.283 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:51.283 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:51.283 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:51.283 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:51.283 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:51.283 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:51.283 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:51.283 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:51.283 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:51.283 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
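The sha512/ffdhe3072 pass above exercises the same pair of helpers from host/auth.sh for every key slot: nvmet_auth_set_key installs the DHHC-1 secret (and, when one exists, the bidirectional controller secret) on the target side, while connect_authenticate restricts the SPDK initiator to the digest and DH group under test, attaches to 10.0.0.1:4420 with the matching --dhchap-key/--dhchap-ctrlr-key, checks that a controller named nvme0 appears, and detaches it again. A minimal sketch of that inner loop follows; the RPC names and flags are the ones visible in the trace, while keys, ckeys and rpc_cmd stand in for fixtures the test script sets up earlier and are assumptions here, not part of this log.

# Sketch of the per-key loop traced above (sha512 digest, one DH group).
# keys, ckeys and rpc_cmd are assumed to be provided by the test's own setup.
digest=sha512
dhgroup=ffdhe3072

for keyid in "${!keys[@]}"; do
    # Target side: install the secret (and optional controller secret) for this slot.
    nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"

    # Initiator side: limit SPDK to the digest/DH group under test ...
    rpc_cmd bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # ... then connect with the matching key; ckeyN is only passed when a
    # controller secret was configured for this slot (slot 4 has none).
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"

    # The attach only succeeds if DH-HMAC-CHAP completed, so the controller
    # list must now contain nvme0; tear it down before the next key.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
done

The ffdhe4096 pass that starts below reruns this identical loop with only dhgroup changed, and the ffdhe6144 pass after it does the same.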
00:33:51.543 nvme0n1 00:33:51.543 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:51.543 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:51.543 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:51.543 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.543 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:51.543 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:51.543 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:51.543 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:51.543 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:51.543 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.543 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:51.543 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:51.543 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:51.543 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:33:51.543 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:51.543 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:51.543 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:51.543 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:51.543 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTI1ODRiOWRlYTAxZGM5ZjdhMTM2YTc5NzUyN2UyMWZPGVg5: 00:33:51.543 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzVhMTE5MGYwNWIzYjgxYmIxMDE3MzEyNmNmZDIzYjYyODQzNDNjMzUyNmViMTJiNDI5NGM5NTZhMDNiYTg1MQfNTuQ=: 00:33:51.543 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:51.543 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:51.543 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTI1ODRiOWRlYTAxZGM5ZjdhMTM2YTc5NzUyN2UyMWZPGVg5: 00:33:51.543 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzVhMTE5MGYwNWIzYjgxYmIxMDE3MzEyNmNmZDIzYjYyODQzNDNjMzUyNmViMTJiNDI5NGM5NTZhMDNiYTg1MQfNTuQ=: ]] 00:33:51.543 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzVhMTE5MGYwNWIzYjgxYmIxMDE3MzEyNmNmZDIzYjYyODQzNDNjMzUyNmViMTJiNDI5NGM5NTZhMDNiYTg1MQfNTuQ=: 00:33:51.543 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:33:51.543 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:51.543 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:51.543 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:51.543 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:51.543 11:45:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:51.543 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:51.543 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:51.543 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.544 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:51.544 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:51.544 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:51.544 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:51.544 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:51.544 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:51.544 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:51.544 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:51.544 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:51.544 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:51.544 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:51.544 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:51.544 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:51.544 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:51.544 11:45:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.803 nvme0n1 00:33:51.803 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:51.803 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:51.803 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:51.803 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.803 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:51.803 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:51.803 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:51.803 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:51.803 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:51.803 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.803 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:51.803 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:51.803 11:45:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:33:51.803 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:51.803 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:51.803 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:51.803 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:51.803 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA1NzNmOTg1ZjM1MDRmZjAwYTg2MzcxZGJhM2MwZjM5OTE0ZDllMWVlMzdiYWFkdMxfqA==: 00:33:51.803 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjBlNDIwMjgzZDYxMjlkOGQ0NmFlMGVkOTk2M2JhMDlkZDU5NjE0YjM3YzcxMmNkZaofZw==: 00:33:51.803 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:51.803 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:51.803 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA1NzNmOTg1ZjM1MDRmZjAwYTg2MzcxZGJhM2MwZjM5OTE0ZDllMWVlMzdiYWFkdMxfqA==: 00:33:51.803 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjBlNDIwMjgzZDYxMjlkOGQ0NmFlMGVkOTk2M2JhMDlkZDU5NjE0YjM3YzcxMmNkZaofZw==: ]] 00:33:51.803 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjBlNDIwMjgzZDYxMjlkOGQ0NmFlMGVkOTk2M2JhMDlkZDU5NjE0YjM3YzcxMmNkZaofZw==: 00:33:51.803 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:33:51.803 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:51.803 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:51.803 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:51.803 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:51.803 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:51.803 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:51.803 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:51.803 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.803 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:51.803 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:51.803 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:51.803 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:51.803 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:51.803 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:51.803 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:51.803 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:51.803 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:51.803 11:45:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:51.803 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:51.803 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:51.803 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:51.803 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:51.803 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:52.369 nvme0n1 00:33:52.369 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:52.369 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:52.369 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:52.369 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:52.369 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:52.369 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:52.369 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:52.369 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:52.369 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:52.369 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:52.369 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:52.369 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:52.369 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:33:52.369 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:52.369 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:52.369 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:52.369 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:52.369 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjE3YTBjNDc2YjE5MzU4Y2NkZmQwZmRkNzY3ZmYxYjVdFqkp: 00:33:52.369 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmRkODU5NzYyOTllOTllZjUwOGExNDQyYzhkMTU1ODjy9hNE: 00:33:52.369 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:52.369 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:52.369 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjE3YTBjNDc2YjE5MzU4Y2NkZmQwZmRkNzY3ZmYxYjVdFqkp: 00:33:52.369 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmRkODU5NzYyOTllOTllZjUwOGExNDQyYzhkMTU1ODjy9hNE: ]] 00:33:52.369 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmRkODU5NzYyOTllOTllZjUwOGExNDQyYzhkMTU1ODjy9hNE: 00:33:52.369 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:33:52.369 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:52.369 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:52.369 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:52.369 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:52.369 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:52.370 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:52.370 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:52.370 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:52.370 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:52.370 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:52.370 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:52.370 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:52.370 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:52.370 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:52.370 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:52.370 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:52.370 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:52.370 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:52.370 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:52.370 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:52.370 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:52.370 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:52.370 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:52.628 nvme0n1 00:33:52.628 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:52.628 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:52.628 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:52.628 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:52.628 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:52.628 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:52.628 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:52.628 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:33:52.628 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:52.628 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:52.628 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:52.628 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:52.628 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:33:52.628 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:52.628 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:52.628 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:52.628 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:52.628 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmFiNzNiODBlNDhjODJlMTY5OWY4YWViM2FjM2Q5OGFlYjc2MDlhOWJkZmZmOTRlTPB8Fw==: 00:33:52.628 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2MyMmYyNzMxYzlhZWM1YjIwY2ZkZWRkZmMyMTQ2ODRk+oUL: 00:33:52.628 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:52.628 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:52.628 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmFiNzNiODBlNDhjODJlMTY5OWY4YWViM2FjM2Q5OGFlYjc2MDlhOWJkZmZmOTRlTPB8Fw==: 00:33:52.628 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2MyMmYyNzMxYzlhZWM1YjIwY2ZkZWRkZmMyMTQ2ODRk+oUL: ]] 00:33:52.628 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2MyMmYyNzMxYzlhZWM1YjIwY2ZkZWRkZmMyMTQ2ODRk+oUL: 00:33:52.628 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:33:52.628 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:52.628 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:52.628 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:52.628 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:52.628 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:52.628 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:52.628 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:52.628 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:52.628 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:52.628 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:52.628 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:52.628 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:52.628 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:52.628 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:52.628 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:52.628 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:52.628 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:52.628 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:52.628 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:52.628 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:52.628 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:52.628 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:52.628 11:45:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:52.888 nvme0n1 00:33:52.888 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:52.888 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:52.888 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:52.888 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:52.888 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:52.888 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:52.888 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:52.888 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:52.888 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:52.888 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:52.888 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:52.888 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:52.888 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:33:52.888 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:52.888 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:52.888 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:52.888 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:52.888 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODZmMDQ0Yzg0ZTg5ZWIzM2Y4Y2Y4MDdiMDRlMWE5MDhjMDYwZTI4YWNkOTJhZTg1NzhlYjdmNGNjMzVhNDkwOd2VR+Q=: 00:33:52.888 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:52.888 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:52.888 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:52.888 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ODZmMDQ0Yzg0ZTg5ZWIzM2Y4Y2Y4MDdiMDRlMWE5MDhjMDYwZTI4YWNkOTJhZTg1NzhlYjdmNGNjMzVhNDkwOd2VR+Q=: 00:33:52.888 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:52.888 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:33:52.888 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:52.888 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:52.888 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:52.888 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:52.888 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:52.888 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:52.888 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:52.888 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:52.888 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:52.888 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:52.888 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:52.888 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:52.888 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:52.888 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:52.888 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:52.888 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:52.888 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:52.888 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:52.888 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:52.888 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:52.888 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:52.888 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:52.888 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.456 nvme0n1 00:33:53.456 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:53.456 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:53.456 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:53.456 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.456 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:53.456 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:53.456 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:53.456 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:53.456 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:53.456 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.456 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:53.456 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:53.456 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:53.456 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:33:53.456 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:53.456 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:53.456 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:53.456 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:53.456 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTI1ODRiOWRlYTAxZGM5ZjdhMTM2YTc5NzUyN2UyMWZPGVg5: 00:33:53.457 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzVhMTE5MGYwNWIzYjgxYmIxMDE3MzEyNmNmZDIzYjYyODQzNDNjMzUyNmViMTJiNDI5NGM5NTZhMDNiYTg1MQfNTuQ=: 00:33:53.457 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:53.457 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:53.457 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTI1ODRiOWRlYTAxZGM5ZjdhMTM2YTc5NzUyN2UyMWZPGVg5: 00:33:53.457 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzVhMTE5MGYwNWIzYjgxYmIxMDE3MzEyNmNmZDIzYjYyODQzNDNjMzUyNmViMTJiNDI5NGM5NTZhMDNiYTg1MQfNTuQ=: ]] 00:33:53.457 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzVhMTE5MGYwNWIzYjgxYmIxMDE3MzEyNmNmZDIzYjYyODQzNDNjMzUyNmViMTJiNDI5NGM5NTZhMDNiYTg1MQfNTuQ=: 00:33:53.457 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:33:53.457 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:53.457 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:53.457 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:53.457 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:53.457 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:53.457 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:53.457 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:53.457 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.457 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:53.457 11:45:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:53.457 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:53.457 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:53.457 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:53.457 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:53.457 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:53.457 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:53.457 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:53.457 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:53.457 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:53.457 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:53.457 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:53.457 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:53.457 11:45:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.025 nvme0n1 00:33:54.025 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.025 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:54.025 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:54.025 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.025 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.025 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.025 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:54.025 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:54.025 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.025 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.025 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.025 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:54.025 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:33:54.025 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:54.025 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:54.025 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:54.025 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:54.025 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NTA1NzNmOTg1ZjM1MDRmZjAwYTg2MzcxZGJhM2MwZjM5OTE0ZDllMWVlMzdiYWFkdMxfqA==: 00:33:54.025 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjBlNDIwMjgzZDYxMjlkOGQ0NmFlMGVkOTk2M2JhMDlkZDU5NjE0YjM3YzcxMmNkZaofZw==: 00:33:54.025 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:54.025 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:54.025 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA1NzNmOTg1ZjM1MDRmZjAwYTg2MzcxZGJhM2MwZjM5OTE0ZDllMWVlMzdiYWFkdMxfqA==: 00:33:54.025 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjBlNDIwMjgzZDYxMjlkOGQ0NmFlMGVkOTk2M2JhMDlkZDU5NjE0YjM3YzcxMmNkZaofZw==: ]] 00:33:54.025 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjBlNDIwMjgzZDYxMjlkOGQ0NmFlMGVkOTk2M2JhMDlkZDU5NjE0YjM3YzcxMmNkZaofZw==: 00:33:54.025 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:33:54.025 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:54.025 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:54.025 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:54.025 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:54.025 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:54.025 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:54.025 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.025 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.025 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.025 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:54.025 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:54.025 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:54.025 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:54.025 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:54.025 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:54.025 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:54.025 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:54.025 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:54.025 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:54.025 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:54.025 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:54.025 11:45:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.025 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.592 nvme0n1 00:33:54.592 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.592 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:54.592 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.592 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:54.592 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.592 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.592 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:54.592 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:54.592 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.592 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.592 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.592 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:54.592 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:33:54.592 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:54.592 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:54.592 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:54.592 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:54.592 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjE3YTBjNDc2YjE5MzU4Y2NkZmQwZmRkNzY3ZmYxYjVdFqkp: 00:33:54.592 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmRkODU5NzYyOTllOTllZjUwOGExNDQyYzhkMTU1ODjy9hNE: 00:33:54.592 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:54.592 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:54.592 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjE3YTBjNDc2YjE5MzU4Y2NkZmQwZmRkNzY3ZmYxYjVdFqkp: 00:33:54.592 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmRkODU5NzYyOTllOTllZjUwOGExNDQyYzhkMTU1ODjy9hNE: ]] 00:33:54.592 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmRkODU5NzYyOTllOTllZjUwOGExNDQyYzhkMTU1ODjy9hNE: 00:33:54.592 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:33:54.592 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:54.592 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:54.592 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:54.592 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:54.592 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:54.592 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:54.592 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.592 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.592 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.592 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:54.592 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:54.592 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:54.592 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:54.592 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:54.592 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:54.592 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:54.592 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:54.592 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:54.592 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:54.592 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:54.592 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:54.592 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.592 11:45:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.162 nvme0n1 00:33:55.162 11:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.162 11:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:55.162 11:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:55.162 11:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.162 11:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.162 11:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.162 11:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:55.162 11:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:55.162 11:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.162 11:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.162 11:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.162 11:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:55.162 11:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:33:55.162 11:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:55.162 11:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:55.162 11:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:55.162 11:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:55.162 11:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmFiNzNiODBlNDhjODJlMTY5OWY4YWViM2FjM2Q5OGFlYjc2MDlhOWJkZmZmOTRlTPB8Fw==: 00:33:55.162 11:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2MyMmYyNzMxYzlhZWM1YjIwY2ZkZWRkZmMyMTQ2ODRk+oUL: 00:33:55.162 11:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:55.162 11:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:55.162 11:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmFiNzNiODBlNDhjODJlMTY5OWY4YWViM2FjM2Q5OGFlYjc2MDlhOWJkZmZmOTRlTPB8Fw==: 00:33:55.162 11:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2MyMmYyNzMxYzlhZWM1YjIwY2ZkZWRkZmMyMTQ2ODRk+oUL: ]] 00:33:55.162 11:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2MyMmYyNzMxYzlhZWM1YjIwY2ZkZWRkZmMyMTQ2ODRk+oUL: 00:33:55.162 11:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:33:55.162 11:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:55.162 11:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:55.162 11:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:55.162 11:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:55.162 11:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:55.162 11:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:55.162 11:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.162 11:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.162 11:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.162 11:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:55.162 11:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:55.162 11:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:55.162 11:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:55.162 11:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:55.162 11:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:55.162 11:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:55.163 11:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:55.163 11:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:55.163 11:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:55.163 11:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:55.163 11:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:55.163 11:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.163 11:45:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.731 nvme0n1 00:33:55.731 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.731 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:55.732 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.732 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.732 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:55.732 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.732 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:55.732 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:55.732 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.732 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.732 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.732 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:55.732 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:33:55.732 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:55.732 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:55.732 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:55.732 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:55.732 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODZmMDQ0Yzg0ZTg5ZWIzM2Y4Y2Y4MDdiMDRlMWE5MDhjMDYwZTI4YWNkOTJhZTg1NzhlYjdmNGNjMzVhNDkwOd2VR+Q=: 00:33:55.732 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:55.732 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:55.732 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:55.732 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODZmMDQ0Yzg0ZTg5ZWIzM2Y4Y2Y4MDdiMDRlMWE5MDhjMDYwZTI4YWNkOTJhZTg1NzhlYjdmNGNjMzVhNDkwOd2VR+Q=: 00:33:55.732 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:55.732 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:33:55.732 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:55.732 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:55.732 11:45:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:55.732 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:55.732 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:55.732 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:55.732 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.732 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.732 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.732 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:55.732 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:55.732 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:55.732 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:55.732 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:55.732 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:55.732 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:55.732 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:55.732 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:55.732 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:55.732 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:55.732 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:55.732 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.732 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.303 nvme0n1 00:33:56.303 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.303 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:56.303 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:56.303 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.303 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.303 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.303 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:56.303 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:56.303 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.303 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.303 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.303 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:56.303 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:56.303 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:33:56.303 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:56.303 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:56.303 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:56.303 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:56.303 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTI1ODRiOWRlYTAxZGM5ZjdhMTM2YTc5NzUyN2UyMWZPGVg5: 00:33:56.303 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzVhMTE5MGYwNWIzYjgxYmIxMDE3MzEyNmNmZDIzYjYyODQzNDNjMzUyNmViMTJiNDI5NGM5NTZhMDNiYTg1MQfNTuQ=: 00:33:56.303 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:56.303 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:56.303 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTI1ODRiOWRlYTAxZGM5ZjdhMTM2YTc5NzUyN2UyMWZPGVg5: 00:33:56.303 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzVhMTE5MGYwNWIzYjgxYmIxMDE3MzEyNmNmZDIzYjYyODQzNDNjMzUyNmViMTJiNDI5NGM5NTZhMDNiYTg1MQfNTuQ=: ]] 00:33:56.303 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzVhMTE5MGYwNWIzYjgxYmIxMDE3MzEyNmNmZDIzYjYyODQzNDNjMzUyNmViMTJiNDI5NGM5NTZhMDNiYTg1MQfNTuQ=: 00:33:56.303 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:33:56.303 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:56.303 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:56.303 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:56.303 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:56.303 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:56.303 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:56.303 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.303 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.303 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.303 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:56.303 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:56.303 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:56.303 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:56.303 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:56.303 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:56.303 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:56.303 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:56.303 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:56.303 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:56.303 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:56.303 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:56.303 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.303 11:45:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.240 nvme0n1 00:33:57.240 11:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.240 11:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:57.240 11:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.240 11:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.240 11:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:57.240 11:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.240 11:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:57.240 11:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:57.240 11:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.240 11:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.240 11:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.240 11:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:57.240 11:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:33:57.240 11:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:57.240 11:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:57.240 11:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:57.240 11:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:57.240 11:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA1NzNmOTg1ZjM1MDRmZjAwYTg2MzcxZGJhM2MwZjM5OTE0ZDllMWVlMzdiYWFkdMxfqA==: 00:33:57.240 11:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjBlNDIwMjgzZDYxMjlkOGQ0NmFlMGVkOTk2M2JhMDlkZDU5NjE0YjM3YzcxMmNkZaofZw==: 00:33:57.240 11:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:57.240 11:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:57.240 11:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NTA1NzNmOTg1ZjM1MDRmZjAwYTg2MzcxZGJhM2MwZjM5OTE0ZDllMWVlMzdiYWFkdMxfqA==: 00:33:57.240 11:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjBlNDIwMjgzZDYxMjlkOGQ0NmFlMGVkOTk2M2JhMDlkZDU5NjE0YjM3YzcxMmNkZaofZw==: ]] 00:33:57.240 11:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjBlNDIwMjgzZDYxMjlkOGQ0NmFlMGVkOTk2M2JhMDlkZDU5NjE0YjM3YzcxMmNkZaofZw==: 00:33:57.240 11:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:33:57.240 11:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:57.240 11:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:57.240 11:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:57.240 11:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:57.240 11:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:57.240 11:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:57.240 11:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.240 11:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.500 11:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.500 11:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:57.500 11:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:57.500 11:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:57.500 11:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:57.500 11:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:57.500 11:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:57.500 11:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:57.500 11:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:57.500 11:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:57.500 11:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:57.500 11:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:57.500 11:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:57.500 11:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.500 11:45:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.437 nvme0n1 00:33:58.437 11:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.437 11:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:58.437 11:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.437 11:45:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:58.437 11:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.437 11:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.437 11:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:58.437 11:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:58.437 11:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.437 11:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.437 11:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.437 11:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:58.437 11:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:33:58.437 11:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:58.437 11:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:58.437 11:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:58.437 11:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:58.437 11:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjE3YTBjNDc2YjE5MzU4Y2NkZmQwZmRkNzY3ZmYxYjVdFqkp: 00:33:58.437 11:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmRkODU5NzYyOTllOTllZjUwOGExNDQyYzhkMTU1ODjy9hNE: 00:33:58.437 11:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:58.437 11:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:58.437 11:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjE3YTBjNDc2YjE5MzU4Y2NkZmQwZmRkNzY3ZmYxYjVdFqkp: 00:33:58.437 11:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmRkODU5NzYyOTllOTllZjUwOGExNDQyYzhkMTU1ODjy9hNE: ]] 00:33:58.437 11:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmRkODU5NzYyOTllOTllZjUwOGExNDQyYzhkMTU1ODjy9hNE: 00:33:58.437 11:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:33:58.437 11:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:58.437 11:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:58.437 11:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:58.437 11:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:58.437 11:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:58.437 11:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:58.437 11:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.438 11:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.438 11:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.438 11:45:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:58.438 11:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:58.438 11:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:58.438 11:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:58.438 11:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:58.438 11:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:58.438 11:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:58.438 11:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:58.438 11:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:58.438 11:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:58.438 11:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:58.438 11:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:58.438 11:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.438 11:45:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.373 nvme0n1 00:33:59.373 11:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.373 11:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:59.373 11:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.373 11:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.373 11:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:59.374 11:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.374 11:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:59.374 11:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:59.374 11:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.374 11:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.374 11:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.374 11:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:59.374 11:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:33:59.374 11:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:59.374 11:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:59.374 11:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:59.374 11:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:59.374 11:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZmFiNzNiODBlNDhjODJlMTY5OWY4YWViM2FjM2Q5OGFlYjc2MDlhOWJkZmZmOTRlTPB8Fw==: 00:33:59.374 11:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2MyMmYyNzMxYzlhZWM1YjIwY2ZkZWRkZmMyMTQ2ODRk+oUL: 00:33:59.374 11:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:59.374 11:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:59.374 11:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmFiNzNiODBlNDhjODJlMTY5OWY4YWViM2FjM2Q5OGFlYjc2MDlhOWJkZmZmOTRlTPB8Fw==: 00:33:59.374 11:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2MyMmYyNzMxYzlhZWM1YjIwY2ZkZWRkZmMyMTQ2ODRk+oUL: ]] 00:33:59.374 11:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2MyMmYyNzMxYzlhZWM1YjIwY2ZkZWRkZmMyMTQ2ODRk+oUL: 00:33:59.374 11:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:33:59.374 11:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:59.374 11:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:59.374 11:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:59.374 11:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:59.374 11:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:59.374 11:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:59.374 11:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.374 11:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.374 11:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.374 11:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:59.374 11:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:59.374 11:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:59.374 11:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:59.374 11:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:59.374 11:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:59.374 11:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:59.374 11:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:59.374 11:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:59.374 11:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:59.374 11:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:59.374 11:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:59.374 11:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.374 
11:45:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.313 nvme0n1 00:34:00.313 11:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:00.313 11:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:00.313 11:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:00.313 11:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:00.313 11:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.313 11:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:00.573 11:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:00.573 11:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:00.573 11:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:00.573 11:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.573 11:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:00.573 11:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:00.573 11:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:34:00.573 11:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:00.573 11:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:00.573 11:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:00.573 11:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:00.573 11:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODZmMDQ0Yzg0ZTg5ZWIzM2Y4Y2Y4MDdiMDRlMWE5MDhjMDYwZTI4YWNkOTJhZTg1NzhlYjdmNGNjMzVhNDkwOd2VR+Q=: 00:34:00.573 11:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:00.573 11:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:00.573 11:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:00.573 11:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODZmMDQ0Yzg0ZTg5ZWIzM2Y4Y2Y4MDdiMDRlMWE5MDhjMDYwZTI4YWNkOTJhZTg1NzhlYjdmNGNjMzVhNDkwOd2VR+Q=: 00:34:00.573 11:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:00.573 11:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:34:00.573 11:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:00.573 11:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:00.573 11:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:00.573 11:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:00.573 11:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:00.573 11:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:00.573 11:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:34:00.573 11:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.573 11:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:00.573 11:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:00.573 11:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:00.573 11:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:00.573 11:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:00.573 11:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:00.573 11:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:00.573 11:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:00.573 11:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:00.573 11:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:00.573 11:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:00.573 11:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:00.573 11:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:00.573 11:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:00.573 11:46:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.509 nvme0n1 00:34:01.509 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.509 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:01.509 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.509 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:01.509 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.509 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.509 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:01.509 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:01.509 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.509 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.509 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.509 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:01.509 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:01.509 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:01.509 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:01.509 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:34:01.509 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA1NzNmOTg1ZjM1MDRmZjAwYTg2MzcxZGJhM2MwZjM5OTE0ZDllMWVlMzdiYWFkdMxfqA==: 00:34:01.509 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjBlNDIwMjgzZDYxMjlkOGQ0NmFlMGVkOTk2M2JhMDlkZDU5NjE0YjM3YzcxMmNkZaofZw==: 00:34:01.509 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:01.509 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:01.509 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA1NzNmOTg1ZjM1MDRmZjAwYTg2MzcxZGJhM2MwZjM5OTE0ZDllMWVlMzdiYWFkdMxfqA==: 00:34:01.509 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjBlNDIwMjgzZDYxMjlkOGQ0NmFlMGVkOTk2M2JhMDlkZDU5NjE0YjM3YzcxMmNkZaofZw==: ]] 00:34:01.509 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjBlNDIwMjgzZDYxMjlkOGQ0NmFlMGVkOTk2M2JhMDlkZDU5NjE0YjM3YzcxMmNkZaofZw==: 00:34:01.509 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:01.509 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.509 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.509 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.509 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:34:01.509 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:01.509 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:01.509 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:01.509 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:01.509 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:01.509 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:01.509 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:01.509 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:01.509 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:01.509 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:01.509 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:01.509 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:34:01.510 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:01.510 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:01.510 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:01.510 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:01.510 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:01.510 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:01.510 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.510 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.510 request: 00:34:01.510 { 00:34:01.510 "name": "nvme0", 00:34:01.510 "trtype": "tcp", 00:34:01.510 "traddr": "10.0.0.1", 00:34:01.510 "adrfam": "ipv4", 00:34:01.510 "trsvcid": "4420", 00:34:01.510 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:01.510 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:01.510 "prchk_reftag": false, 00:34:01.510 "prchk_guard": false, 00:34:01.510 "hdgst": false, 00:34:01.510 "ddgst": false, 00:34:01.510 "allow_unrecognized_csi": false, 00:34:01.510 "method": "bdev_nvme_attach_controller", 00:34:01.510 "req_id": 1 00:34:01.510 } 00:34:01.510 Got JSON-RPC error response 00:34:01.510 response: 00:34:01.510 { 00:34:01.510 "code": -5, 00:34:01.510 "message": "Input/output error" 00:34:01.510 } 00:34:01.510 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:01.510 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:34:01.510 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:01.510 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:01.510 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:01.510 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:34:01.510 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.510 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.510 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:34:01.510 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.510 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:34:01.510 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:34:01.510 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:01.510 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:01.510 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:01.510 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:01.510 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:01.510 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:01.510 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:01.510 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:01.510 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
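The host/auth.sh@112 check that completes above leans on the harness's NOT wrapper: the attach is expected to fail because no --dhchap-key was supplied while the target subsystem has DH-HMAC-CHAP keys installed, so a non-zero exit is the passing outcome. A minimal sketch of that pattern follows; the wrapper name NOT and the rpc arguments are taken from the trace, but the body is simplified and the rpc.py invocation is a placeholder for the harness's rpc_cmd wrapper.

NOT() {
  # Run the wrapped command and succeed only if it failed. The real helper in
  # common/autotest_common.sh also special-cases exit codes > 128 (signals),
  # which is where the "(( es > 128 ))" lines in the trace come from.
  local es=0
  "$@" || es=$?
  (( es != 0 ))
}

# Mirrors the @112 step above: with no --dhchap-key the attach must be
# rejected (the trace records JSON-RPC error -5, "Input/output error").
rpc_py=scripts/rpc.py   # placeholder path, assumption for this sketch
NOT "$rpc_py" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
  -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0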
00:34:01.510 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:01.510 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:01.510 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:34:01.510 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:01.510 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:01.510 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:01.510 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:01.510 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:01.510 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:01.510 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.510 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.510 request: 00:34:01.510 { 00:34:01.510 "name": "nvme0", 00:34:01.510 "trtype": "tcp", 00:34:01.510 "traddr": "10.0.0.1", 00:34:01.510 "adrfam": "ipv4", 00:34:01.510 "trsvcid": "4420", 00:34:01.510 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:01.510 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:01.510 "prchk_reftag": false, 00:34:01.510 "prchk_guard": false, 00:34:01.510 "hdgst": false, 00:34:01.510 "ddgst": false, 00:34:01.510 "dhchap_key": "key2", 00:34:01.510 "allow_unrecognized_csi": false, 00:34:01.510 "method": "bdev_nvme_attach_controller", 00:34:01.510 "req_id": 1 00:34:01.510 } 00:34:01.510 Got JSON-RPC error response 00:34:01.510 response: 00:34:01.510 { 00:34:01.510 "code": -5, 00:34:01.510 "message": "Input/output error" 00:34:01.510 } 00:34:01.510 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:01.510 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:34:01.510 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:01.510 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:01.510 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:01.510 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:34:01.510 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:34:01.510 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.510 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.510 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.770 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
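For contrast with the failures at @112 and @117 above and the key1/ckey2 mismatch exercised at @123 just below, the positive path that the earlier sha512 iterations keep repeating looks roughly like this: pin the host to one digest and DH group, attach with the key/controller-key pair that nvmet_auth_set_key installed on the target, then detach. This is a sketch under the assumptions that key1/ckey1 are already registered key names and that rpc.py is called directly instead of through the harness's rpc_cmd wrapper.

rpc_py=scripts/rpc.py   # placeholder; the trace drives these through rpc_cmd

# Restrict the host to the digest/DH group under test, as in the trace.
"$rpc_py" bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192

# Attach with a matching host key and bidirectional controller key.
"$rpc_py" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
  -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
  --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Verify the controller came up, then tear it down for the next iteration.
"$rpc_py" bdev_nvme_get_controllers
"$rpc_py" bdev_nvme_detach_controller nvme0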
00:34:01.770 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:34:01.770 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:01.770 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:01.770 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:01.770 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:01.770 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:01.770 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:01.770 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:01.770 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:01.770 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:01.770 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:01.770 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:01.770 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:34:01.770 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:01.770 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:01.770 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:01.770 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:01.770 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:01.770 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:01.770 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.770 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.770 request: 00:34:01.770 { 00:34:01.770 "name": "nvme0", 00:34:01.770 "trtype": "tcp", 00:34:01.770 "traddr": "10.0.0.1", 00:34:01.770 "adrfam": "ipv4", 00:34:01.770 "trsvcid": "4420", 00:34:01.770 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:01.770 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:01.770 "prchk_reftag": false, 00:34:01.770 "prchk_guard": false, 00:34:01.770 "hdgst": false, 00:34:01.770 "ddgst": false, 00:34:01.770 "dhchap_key": "key1", 00:34:01.770 "dhchap_ctrlr_key": "ckey2", 00:34:01.770 "allow_unrecognized_csi": false, 00:34:01.770 "method": "bdev_nvme_attach_controller", 00:34:01.770 "req_id": 1 00:34:01.770 } 00:34:01.770 Got JSON-RPC error response 00:34:01.770 response: 00:34:01.770 { 00:34:01.770 "code": -5, 00:34:01.770 "message": "Input/output 
error" 00:34:01.770 } 00:34:01.770 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:01.770 11:46:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:34:01.770 11:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:01.770 11:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:01.770 11:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:01.770 11:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:34:01.770 11:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:01.770 11:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:01.770 11:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:01.770 11:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:01.770 11:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:01.770 11:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:01.770 11:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:01.770 11:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:01.770 11:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:01.770 11:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:01.770 11:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:34:01.770 11:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.770 11:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.770 nvme0n1 00:34:01.770 11:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.771 11:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:01.771 11:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:01.771 11:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:01.771 11:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:01.771 11:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:01.771 11:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjE3YTBjNDc2YjE5MzU4Y2NkZmQwZmRkNzY3ZmYxYjVdFqkp: 00:34:01.771 11:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmRkODU5NzYyOTllOTllZjUwOGExNDQyYzhkMTU1ODjy9hNE: 00:34:01.771 11:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:01.771 11:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:01.771 11:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjE3YTBjNDc2YjE5MzU4Y2NkZmQwZmRkNzY3ZmYxYjVdFqkp: 00:34:01.771 11:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmRkODU5NzYyOTllOTllZjUwOGExNDQyYzhkMTU1ODjy9hNE: ]] 00:34:01.771 11:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmRkODU5NzYyOTllOTllZjUwOGExNDQyYzhkMTU1ODjy9hNE: 00:34:01.771 11:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:01.771 11:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.771 11:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.031 11:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.031 11:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:34:02.031 11:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.031 11:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.031 11:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:34:02.031 11:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.031 11:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:02.031 11:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:02.031 11:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:34:02.031 11:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:02.031 11:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:02.031 11:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:02.031 11:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:02.031 11:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:02.031 11:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:02.031 11:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.031 11:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.031 request: 00:34:02.031 { 00:34:02.031 "name": "nvme0", 00:34:02.031 "dhchap_key": "key1", 00:34:02.031 "dhchap_ctrlr_key": "ckey2", 00:34:02.031 "method": "bdev_nvme_set_keys", 00:34:02.031 "req_id": 1 00:34:02.031 } 00:34:02.031 Got JSON-RPC error response 00:34:02.031 response: 00:34:02.031 { 00:34:02.031 "code": -13, 00:34:02.031 "message": "Permission denied" 00:34:02.031 } 00:34:02.031 11:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:02.031 11:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:34:02.031 11:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:02.031 11:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:02.031 11:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:34:02.031 11:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:34:02.031 11:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:34:02.031 11:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.031 11:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.031 11:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.031 11:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:34:02.031 11:46:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:34:03.408 11:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:34:03.408 11:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:34:03.408 11:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.408 11:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.408 11:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.408 11:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:34:03.408 11:46:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:34:04.345 11:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:34:04.345 11:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.345 11:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.345 11:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:34:04.345 11:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.345 11:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:34:04.345 11:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:04.345 11:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:04.345 11:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:04.345 11:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:04.345 11:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:04.345 11:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA1NzNmOTg1ZjM1MDRmZjAwYTg2MzcxZGJhM2MwZjM5OTE0ZDllMWVlMzdiYWFkdMxfqA==: 00:34:04.345 11:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjBlNDIwMjgzZDYxMjlkOGQ0NmFlMGVkOTk2M2JhMDlkZDU5NjE0YjM3YzcxMmNkZaofZw==: 00:34:04.345 11:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:04.345 11:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:04.345 11:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA1NzNmOTg1ZjM1MDRmZjAwYTg2MzcxZGJhM2MwZjM5OTE0ZDllMWVlMzdiYWFkdMxfqA==: 00:34:04.345 11:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjBlNDIwMjgzZDYxMjlkOGQ0NmFlMGVkOTk2M2JhMDlkZDU5NjE0YjM3YzcxMmNkZaofZw==: ]] 00:34:04.346 11:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ZjBlNDIwMjgzZDYxMjlkOGQ0NmFlMGVkOTk2M2JhMDlkZDU5NjE0YjM3YzcxMmNkZaofZw==: 00:34:04.346 11:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:34:04.346 11:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:04.346 11:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:04.346 11:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:04.346 11:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:04.346 11:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:04.346 11:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:04.346 11:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:04.346 11:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:04.346 11:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:04.346 11:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:04.346 11:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:34:04.346 11:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.346 11:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.346 nvme0n1 00:34:04.346 11:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.346 11:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:04.346 11:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:04.346 11:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:04.346 11:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:04.346 11:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:04.346 11:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjE3YTBjNDc2YjE5MzU4Y2NkZmQwZmRkNzY3ZmYxYjVdFqkp: 00:34:04.346 11:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmRkODU5NzYyOTllOTllZjUwOGExNDQyYzhkMTU1ODjy9hNE: 00:34:04.346 11:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:04.346 11:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:04.346 11:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjE3YTBjNDc2YjE5MzU4Y2NkZmQwZmRkNzY3ZmYxYjVdFqkp: 00:34:04.346 11:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmRkODU5NzYyOTllOTllZjUwOGExNDQyYzhkMTU1ODjy9hNE: ]] 00:34:04.346 11:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmRkODU5NzYyOTllOTllZjUwOGExNDQyYzhkMTU1ODjy9hNE: 00:34:04.346 11:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:34:04.346 11:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@650 -- # local es=0 00:34:04.346 11:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:34:04.346 11:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:04.346 11:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:04.346 11:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:04.346 11:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:04.346 11:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:34:04.346 11:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.346 11:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.346 request: 00:34:04.346 { 00:34:04.346 "name": "nvme0", 00:34:04.346 "dhchap_key": "key2", 00:34:04.346 "dhchap_ctrlr_key": "ckey1", 00:34:04.346 "method": "bdev_nvme_set_keys", 00:34:04.346 "req_id": 1 00:34:04.346 } 00:34:04.346 Got JSON-RPC error response 00:34:04.346 response: 00:34:04.346 { 00:34:04.346 "code": -13, 00:34:04.346 "message": "Permission denied" 00:34:04.346 } 00:34:04.346 11:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:04.346 11:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:34:04.346 11:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:04.346 11:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:04.346 11:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:04.346 11:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:34:04.346 11:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.346 11:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:34:04.346 11:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.346 11:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.604 11:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:34:04.604 11:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:34:05.543 11:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:34:05.543 11:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:34:05.543 11:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.543 11:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.543 11:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.543 11:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:34:05.543 11:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:34:05.543 11:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:34:05.543 11:46:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:34:05.543 11:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:05.543 11:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:34:05.543 11:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:05.543 11:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:34:05.543 11:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:05.543 11:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:05.543 rmmod nvme_tcp 00:34:05.543 rmmod nvme_fabrics 00:34:05.543 11:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:05.543 11:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:34:05.543 11:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:34:05.543 11:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 3962058 ']' 00:34:05.543 11:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 3962058 00:34:05.543 11:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@952 -- # '[' -z 3962058 ']' 00:34:05.543 11:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # kill -0 3962058 00:34:05.543 11:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # uname 00:34:05.543 11:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:05.543 11:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3962058 00:34:05.543 11:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:34:05.543 11:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:34:05.543 11:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3962058' 00:34:05.543 killing process with pid 3962058 00:34:05.543 11:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@971 -- # kill 3962058 00:34:05.543 11:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@976 -- # wait 3962058 00:34:05.802 11:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:05.802 11:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:05.802 11:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:05.802 11:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:34:05.802 11:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:34:05.802 11:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:05.802 11:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:34:05.802 11:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:05.802 11:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:05.802 11:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:05.802 11:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:34:05.802 11:46:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:08.335 11:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:08.335 11:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:34:08.335 11:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:08.335 11:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:34:08.335 11:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:34:08.335 11:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:34:08.336 11:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:08.336 11:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:08.336 11:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:08.336 11:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:08.336 11:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:34:08.336 11:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:34:08.336 11:46:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:09.273 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:34:09.273 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:34:09.273 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:34:09.273 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:34:09.273 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:34:09.273 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:34:09.273 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:34:09.273 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:34:09.273 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:34:09.273 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:34:09.273 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:34:09.273 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:34:09.273 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:34:09.273 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:34:09.273 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:34:09.273 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:34:10.212 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:34:10.212 11:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.rNa /tmp/spdk.key-null.cDV /tmp/spdk.key-sha256.RZq /tmp/spdk.key-sha384.oVI /tmp/spdk.key-sha512.3pw /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:34:10.212 11:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:11.592 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:34:11.592 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:34:11.592 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 
00:34:11.592 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:34:11.592 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:34:11.592 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:34:11.592 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:34:11.592 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:34:11.592 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:34:11.592 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:34:11.592 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:34:11.592 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:34:11.592 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:34:11.592 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:34:11.592 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:34:11.592 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:34:11.592 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:34:11.592 00:34:11.592 real 0m53.716s 00:34:11.592 user 0m51.357s 00:34:11.592 sys 0m6.112s 00:34:11.592 11:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:11.592 11:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.592 ************************************ 00:34:11.592 END TEST nvmf_auth_host 00:34:11.592 ************************************ 00:34:11.592 11:46:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:34:11.592 11:46:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:34:11.592 11:46:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:34:11.592 11:46:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:11.592 11:46:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.592 ************************************ 00:34:11.592 START TEST nvmf_digest 00:34:11.592 ************************************ 00:34:11.592 11:46:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:34:11.592 * Looking for test storage... 
00:34:11.592 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:11.592 11:46:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:11.592 11:46:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lcov --version 00:34:11.592 11:46:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:11.851 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:11.851 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:11.851 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:11.851 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:11.851 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:34:11.851 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:34:11.851 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:34:11.851 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:34:11.851 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:34:11.851 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:34:11.851 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:34:11.851 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:11.851 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:34:11.851 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:34:11.851 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:11.851 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:11.851 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:34:11.851 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:34:11.851 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:11.851 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:34:11.851 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:34:11.851 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:34:11.851 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:34:11.851 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:11.851 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:34:11.851 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:34:11.851 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:11.851 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:11.851 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:34:11.851 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:11.851 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:11.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:11.851 --rc genhtml_branch_coverage=1 00:34:11.851 --rc genhtml_function_coverage=1 00:34:11.851 --rc genhtml_legend=1 00:34:11.851 --rc geninfo_all_blocks=1 00:34:11.851 --rc geninfo_unexecuted_blocks=1 00:34:11.851 00:34:11.851 ' 00:34:11.851 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:11.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:11.851 --rc genhtml_branch_coverage=1 00:34:11.851 --rc genhtml_function_coverage=1 00:34:11.851 --rc genhtml_legend=1 00:34:11.851 --rc geninfo_all_blocks=1 00:34:11.851 --rc geninfo_unexecuted_blocks=1 00:34:11.851 00:34:11.851 ' 00:34:11.851 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:11.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:11.851 --rc genhtml_branch_coverage=1 00:34:11.851 --rc genhtml_function_coverage=1 00:34:11.851 --rc genhtml_legend=1 00:34:11.851 --rc geninfo_all_blocks=1 00:34:11.851 --rc geninfo_unexecuted_blocks=1 00:34:11.851 00:34:11.851 ' 00:34:11.851 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:11.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:11.851 --rc genhtml_branch_coverage=1 00:34:11.851 --rc genhtml_function_coverage=1 00:34:11.851 --rc genhtml_legend=1 00:34:11.851 --rc geninfo_all_blocks=1 00:34:11.851 --rc geninfo_unexecuted_blocks=1 00:34:11.851 00:34:11.851 ' 00:34:11.851 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:11.851 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:34:11.851 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:11.851 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:11.851 
11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:11.851 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:11.851 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:11.851 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:11.851 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:11.851 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:11.851 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:11.851 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:11.851 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:11.851 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:11.851 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:11.851 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:11.851 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:11.851 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:11.851 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:11.851 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:34:11.851 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:11.851 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:11.851 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:11.852 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:11.852 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:11.852 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:11.852 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:34:11.852 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:11.852 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:34:11.852 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:11.852 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:11.852 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:11.852 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:11.852 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:11.852 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:11.852 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:11.852 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:11.852 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:11.852 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:11.852 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:34:11.852 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:34:11.852 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:34:11.852 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:34:11.852 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:34:11.852 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:11.852 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:11.852 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:11.852 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:11.852 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:11.852 11:46:12 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:11.852 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:11.852 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:11.852 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:11.852 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:11.852 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:34:11.852 11:46:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:13.758 
11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:13.758 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:13.758 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:13.758 Found net devices under 0000:0a:00.0: cvl_0_0 
00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:13.758 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:13.758 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:14.017 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:14.017 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:14.017 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:14.017 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:14.017 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:14.018 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.146 ms 00:34:14.018 00:34:14.018 --- 10.0.0.2 ping statistics --- 00:34:14.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:14.018 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:34:14.018 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:14.018 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:14.018 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:34:14.018 00:34:14.018 --- 10.0.0.1 ping statistics --- 00:34:14.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:14.018 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:34:14.018 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:14.018 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:34:14.018 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:14.018 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:14.018 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:14.018 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:14.018 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:14.018 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:14.018 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:14.018 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:34:14.018 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:34:14.018 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:34:14.018 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:34:14.018 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:14.018 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:14.018 ************************************ 00:34:14.018 START TEST nvmf_digest_clean 00:34:14.018 ************************************ 00:34:14.018 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1127 -- # run_digest 00:34:14.018 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:34:14.018 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:34:14.018 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:34:14.018 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:34:14.018 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:34:14.018 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:14.018 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:14.018 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:14.018 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=3971939 00:34:14.018 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:34:14.018 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 3971939 00:34:14.018 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 3971939 ']' 00:34:14.018 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:14.018 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:14.018 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:14.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:14.018 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:14.018 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:14.018 [2024-11-02 11:46:14.273022] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:34:14.018 [2024-11-02 11:46:14.273113] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:14.018 [2024-11-02 11:46:14.345614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:14.018 [2024-11-02 11:46:14.389278] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:14.018 [2024-11-02 11:46:14.389337] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:14.018 [2024-11-02 11:46:14.389351] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:14.018 [2024-11-02 11:46:14.389363] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:14.018 [2024-11-02 11:46:14.389372] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:14.018 [2024-11-02 11:46:14.389957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:14.276 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:14.277 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:34:14.277 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:14.277 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:14.277 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:14.277 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:14.277 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:34:14.277 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:34:14.277 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:34:14.277 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.277 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:14.277 null0 00:34:14.277 [2024-11-02 11:46:14.639831] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:14.277 [2024-11-02 11:46:14.664074] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:14.277 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.277 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:34:14.277 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:14.277 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:14.277 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:34:14.277 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:34:14.277 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:34:14.277 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:14.277 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3971963 00:34:14.277 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3971963 /var/tmp/bperf.sock 00:34:14.277 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 3971963 ']' 00:34:14.277 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:14.277 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:14.277 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread 
-o 4096 -t 2 -q 128 -z --wait-for-rpc 00:34:14.277 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:14.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:14.277 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:14.277 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:14.535 [2024-11-02 11:46:14.714886] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:34:14.535 [2024-11-02 11:46:14.714973] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3971963 ] 00:34:14.535 [2024-11-02 11:46:14.780856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:14.535 [2024-11-02 11:46:14.825389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:14.793 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:14.793 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:34:14.793 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:14.793 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:14.793 11:46:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:15.051 11:46:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:15.051 11:46:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:15.310 nvme0n1 00:34:15.310 11:46:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:15.310 11:46:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:15.569 Running I/O for 2 seconds... 
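For reference, the bperf leg of the digest test that the trace above just set up boils down to roughly the following shell sequence (a condensed sketch of what host/digest.sh drives; the socket path /var/tmp/bperf.sock, the 10.0.0.2:4420 listener and the nqn.2016-06.io.spdk:cnode1 subsystem are taken from the trace itself, while the tool paths are shortened here to relative form):

    # Start bdevperf idle on its own RPC socket, then finish framework init over RPC.
    build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init

    # Attach the TCP target with data digest enabled, then kick off the timed run.
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests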
00:34:17.442 13030.00 IOPS, 50.90 MiB/s [2024-11-02T10:46:17.844Z] 13019.50 IOPS, 50.86 MiB/s 00:34:17.442 Latency(us) 00:34:17.442 [2024-11-02T10:46:17.844Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:17.442 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:34:17.442 nvme0n1 : 2.01 13043.52 50.95 0.00 0.00 9795.57 4830.25 19709.35 00:34:17.442 [2024-11-02T10:46:17.844Z] =================================================================================================================== 00:34:17.442 [2024-11-02T10:46:17.844Z] Total : 13043.52 50.95 0.00 0.00 9795.57 4830.25 19709.35 00:34:17.442 { 00:34:17.442 "results": [ 00:34:17.442 { 00:34:17.442 "job": "nvme0n1", 00:34:17.442 "core_mask": "0x2", 00:34:17.442 "workload": "randread", 00:34:17.442 "status": "finished", 00:34:17.442 "queue_depth": 128, 00:34:17.442 "io_size": 4096, 00:34:17.442 "runtime": 2.010961, 00:34:17.442 "iops": 13043.515015955058, 00:34:17.442 "mibps": 50.951230531074444, 00:34:17.442 "io_failed": 0, 00:34:17.442 "io_timeout": 0, 00:34:17.442 "avg_latency_us": 9795.566099772666, 00:34:17.442 "min_latency_us": 4830.245925925926, 00:34:17.442 "max_latency_us": 19709.345185185186 00:34:17.442 } 00:34:17.442 ], 00:34:17.442 "core_count": 1 00:34:17.442 } 00:34:17.442 11:46:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:34:17.442 11:46:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:34:17.442 11:46:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:34:17.442 11:46:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:17.442 11:46:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:34:17.442 | select(.opcode=="crc32c") 00:34:17.442 | "\(.module_name) \(.executed)"' 00:34:17.701 11:46:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:34:17.701 11:46:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:34:17.701 11:46:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:34:17.701 11:46:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:34:17.701 11:46:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3971963 00:34:17.701 11:46:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 3971963 ']' 00:34:17.701 11:46:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 3971963 00:34:17.701 11:46:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:34:17.959 11:46:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:17.959 11:46:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3971963 00:34:17.959 11:46:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:34:17.959 11:46:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' 
reactor_1 = sudo ']' 00:34:17.959 11:46:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3971963' 00:34:17.959 killing process with pid 3971963 00:34:17.959 11:46:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 3971963 00:34:17.959 Received shutdown signal, test time was about 2.000000 seconds 00:34:17.959 00:34:17.959 Latency(us) 00:34:17.959 [2024-11-02T10:46:18.361Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:17.959 [2024-11-02T10:46:18.361Z] =================================================================================================================== 00:34:17.959 [2024-11-02T10:46:18.362Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:17.960 11:46:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 3971963 00:34:17.960 11:46:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:34:17.960 11:46:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:17.960 11:46:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:17.960 11:46:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:34:17.960 11:46:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:34:17.960 11:46:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:34:17.960 11:46:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:17.960 11:46:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3972375 00:34:17.960 11:46:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:34:17.960 11:46:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3972375 /var/tmp/bperf.sock 00:34:17.960 11:46:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 3972375 ']' 00:34:17.960 11:46:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:17.960 11:46:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:17.960 11:46:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:17.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:17.960 11:46:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:17.960 11:46:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:18.218 [2024-11-02 11:46:18.393657] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 
00:34:18.218 [2024-11-02 11:46:18.393749] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3972375 ] 00:34:18.218 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:18.218 Zero copy mechanism will not be used. 00:34:18.218 [2024-11-02 11:46:18.470667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:18.218 [2024-11-02 11:46:18.521479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:18.477 11:46:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:18.477 11:46:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:34:18.477 11:46:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:18.477 11:46:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:18.477 11:46:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:18.735 11:46:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:18.735 11:46:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:18.993 nvme0n1 00:34:18.993 11:46:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:18.993 11:46:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:19.251 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:19.251 Zero copy mechanism will not be used. 00:34:19.251 Running I/O for 2 seconds... 
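The run launched above issues two seconds of 128 KiB random reads at queue depth 16 through nvme0n1. At a fixed I/O size the MiB/s column in the bdevperf result tables is just a rescaling of IOPS, so the two figures can be cross-checked against each other; a minimal sketch of that check (a hypothetical helper, not part of the test scripts):

    def mibps_from_iops(iops: float, io_size_bytes: int) -> float:
        # bdevperf reports MiB/s alongside IOPS; at a fixed I/O size they must agree.
        return iops * io_size_bytes / (1024 * 1024)

    print(mibps_from_iops(3725.06, 131072))   # ~465.63 MiB/s, as in the table below
    print(mibps_from_iops(13043.52, 4096))    # ~50.95 MiB/s, as in the 4 KiB run above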
00:34:21.128 3792.00 IOPS, 474.00 MiB/s [2024-11-02T10:46:21.530Z] 3726.50 IOPS, 465.81 MiB/s 00:34:21.128 Latency(us) 00:34:21.128 [2024-11-02T10:46:21.530Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:21.128 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:34:21.128 nvme0n1 : 2.01 3725.06 465.63 0.00 0.00 4290.75 879.88 12233.39 00:34:21.128 [2024-11-02T10:46:21.530Z] =================================================================================================================== 00:34:21.128 [2024-11-02T10:46:21.530Z] Total : 3725.06 465.63 0.00 0.00 4290.75 879.88 12233.39 00:34:21.128 { 00:34:21.128 "results": [ 00:34:21.128 { 00:34:21.128 "job": "nvme0n1", 00:34:21.128 "core_mask": "0x2", 00:34:21.128 "workload": "randread", 00:34:21.128 "status": "finished", 00:34:21.128 "queue_depth": 16, 00:34:21.128 "io_size": 131072, 00:34:21.128 "runtime": 2.005067, 00:34:21.128 "iops": 3725.062554019392, 00:34:21.128 "mibps": 465.632819252424, 00:34:21.128 "io_failed": 0, 00:34:21.128 "io_timeout": 0, 00:34:21.128 "avg_latency_us": 4290.749858129652, 00:34:21.128 "min_latency_us": 879.8814814814815, 00:34:21.128 "max_latency_us": 12233.386666666667 00:34:21.128 } 00:34:21.128 ], 00:34:21.128 "core_count": 1 00:34:21.128 } 00:34:21.128 11:46:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:34:21.128 11:46:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:34:21.128 11:46:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:34:21.128 11:46:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:34:21.128 | select(.opcode=="crc32c") 00:34:21.128 | "\(.module_name) \(.executed)"' 00:34:21.128 11:46:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:21.386 11:46:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:34:21.386 11:46:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:34:21.386 11:46:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:34:21.386 11:46:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:34:21.386 11:46:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3972375 00:34:21.386 11:46:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 3972375 ']' 00:34:21.386 11:46:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 3972375 00:34:21.386 11:46:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:34:21.386 11:46:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:21.386 11:46:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3972375 00:34:21.645 11:46:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:34:21.645 11:46:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' 
reactor_1 = sudo ']' 00:34:21.645 11:46:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3972375' 00:34:21.645 killing process with pid 3972375 00:34:21.645 11:46:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 3972375 00:34:21.645 Received shutdown signal, test time was about 2.000000 seconds 00:34:21.645 00:34:21.645 Latency(us) 00:34:21.645 [2024-11-02T10:46:22.047Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:21.645 [2024-11-02T10:46:22.047Z] =================================================================================================================== 00:34:21.645 [2024-11-02T10:46:22.047Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:21.645 11:46:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 3972375 00:34:21.645 11:46:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:34:21.645 11:46:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:21.645 11:46:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:21.645 11:46:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:34:21.645 11:46:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:34:21.645 11:46:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:34:21.645 11:46:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:21.645 11:46:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3972895 00:34:21.645 11:46:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:34:21.645 11:46:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3972895 /var/tmp/bperf.sock 00:34:21.645 11:46:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 3972895 ']' 00:34:21.645 11:46:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:21.645 11:46:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:21.645 11:46:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:21.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:21.645 11:46:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:21.645 11:46:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:21.903 [2024-11-02 11:46:22.051986] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 
00:34:21.903 [2024-11-02 11:46:22.052077] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3972895 ] 00:34:21.903 [2024-11-02 11:46:22.123836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:21.903 [2024-11-02 11:46:22.170615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:21.903 11:46:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:21.903 11:46:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:34:21.903 11:46:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:21.903 11:46:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:21.903 11:46:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:22.501 11:46:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:22.501 11:46:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:22.785 nvme0n1 00:34:22.785 11:46:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:22.785 11:46:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:22.785 Running I/O for 2 seconds... 
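As with the previous runs, the result block below is followed by the digest verification step: digest.sh reads accel_get_stats over the bperf RPC socket, extracts the crc32c entry with the jq filter shown in the log, and asserts that the expected module ("software" here, since scan_dsa=false) executed a non-zero number of operations. A rough Python equivalent of that jq filter, assuming rpc.py is invoked from the SPDK tree as in the log (the field names are taken from the filter itself):

    import json, subprocess

    out = subprocess.check_output(
        ["./scripts/rpc.py", "-s", "/var/tmp/bperf.sock", "accel_get_stats"])
    stats = json.loads(out)
    for op in stats["operations"]:
        if op["opcode"] == "crc32c":
            # digest.sh then checks: executed > 0 and module_name == expected module.
            print(op["module_name"], op["executed"])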
00:34:25.099 18331.00 IOPS, 71.61 MiB/s [2024-11-02T10:46:25.501Z] 18445.50 IOPS, 72.05 MiB/s 00:34:25.099 Latency(us) 00:34:25.099 [2024-11-02T10:46:25.501Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:25.099 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:25.099 nvme0n1 : 2.01 18446.13 72.06 0.00 0.00 6923.27 2961.26 9854.67 00:34:25.099 [2024-11-02T10:46:25.501Z] =================================================================================================================== 00:34:25.099 [2024-11-02T10:46:25.501Z] Total : 18446.13 72.06 0.00 0.00 6923.27 2961.26 9854.67 00:34:25.099 { 00:34:25.099 "results": [ 00:34:25.099 { 00:34:25.099 "job": "nvme0n1", 00:34:25.099 "core_mask": "0x2", 00:34:25.099 "workload": "randwrite", 00:34:25.099 "status": "finished", 00:34:25.099 "queue_depth": 128, 00:34:25.099 "io_size": 4096, 00:34:25.099 "runtime": 2.006871, 00:34:25.099 "iops": 18446.12832613556, 00:34:25.099 "mibps": 72.05518877396703, 00:34:25.099 "io_failed": 0, 00:34:25.099 "io_timeout": 0, 00:34:25.099 "avg_latency_us": 6923.2662610691405, 00:34:25.099 "min_latency_us": 2961.256296296296, 00:34:25.099 "max_latency_us": 9854.672592592593 00:34:25.099 } 00:34:25.099 ], 00:34:25.099 "core_count": 1 00:34:25.099 } 00:34:25.099 11:46:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:34:25.099 11:46:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:34:25.099 11:46:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:34:25.099 11:46:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:25.099 11:46:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:34:25.099 | select(.opcode=="crc32c") 00:34:25.099 | "\(.module_name) \(.executed)"' 00:34:25.099 11:46:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:34:25.099 11:46:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:34:25.099 11:46:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:34:25.099 11:46:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:34:25.099 11:46:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3972895 00:34:25.099 11:46:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 3972895 ']' 00:34:25.099 11:46:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 3972895 00:34:25.099 11:46:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:34:25.099 11:46:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:25.099 11:46:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3972895 00:34:25.357 11:46:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:34:25.357 11:46:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' 
reactor_1 = sudo ']' 00:34:25.357 11:46:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3972895' 00:34:25.357 killing process with pid 3972895 00:34:25.357 11:46:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 3972895 00:34:25.357 Received shutdown signal, test time was about 2.000000 seconds 00:34:25.357 00:34:25.357 Latency(us) 00:34:25.357 [2024-11-02T10:46:25.759Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:25.357 [2024-11-02T10:46:25.759Z] =================================================================================================================== 00:34:25.357 [2024-11-02T10:46:25.759Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:25.357 11:46:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 3972895 00:34:25.357 11:46:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:34:25.357 11:46:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:25.357 11:46:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:25.357 11:46:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:34:25.357 11:46:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:34:25.357 11:46:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:34:25.357 11:46:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:25.357 11:46:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3973303 00:34:25.357 11:46:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:34:25.357 11:46:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3973303 /var/tmp/bperf.sock 00:34:25.357 11:46:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 3973303 ']' 00:34:25.357 11:46:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:25.357 11:46:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:25.357 11:46:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:25.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:25.357 11:46:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:25.357 11:46:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:25.357 [2024-11-02 11:46:25.720278] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 
00:34:25.357 [2024-11-02 11:46:25.720373] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3973303 ] 00:34:25.357 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:25.357 Zero copy mechanism will not be used. 00:34:25.615 [2024-11-02 11:46:25.788281] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:25.616 [2024-11-02 11:46:25.836140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:25.616 11:46:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:25.616 11:46:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:34:25.616 11:46:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:25.616 11:46:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:25.616 11:46:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:26.183 11:46:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:26.183 11:46:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:26.443 nvme0n1 00:34:26.443 11:46:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:26.443 11:46:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:26.443 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:26.443 Zero copy mechanism will not be used. 00:34:26.443 Running I/O for 2 seconds... 
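This is the last of the three clean-digest runs launched in this part of the log; they differ only in workload, I/O size, and queue depth, and the two 128 KiB runs additionally note that the I/O size exceeds the 65536-byte zero-copy threshold, so zero copy is skipped. A sketch of how the bdevperf command lines above are parameterized (workspace path shortened; illustrative only, not lifted from digest.sh):

    runs = [
        ("randread", 131072, 16),
        ("randwrite", 4096, 128),
        ("randwrite", 131072, 16),
    ]
    for rw, bs, qd in runs:
        cmd = ["./build/examples/bdevperf", "-m", "2", "-r", "/var/tmp/bperf.sock",
               "-w", rw, "-o", str(bs), "-t", "2", "-q", str(qd), "-z", "--wait-for-rpc"]
        print(" ".join(cmd))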
00:34:28.780 3326.00 IOPS, 415.75 MiB/s [2024-11-02T10:46:29.182Z] 3401.00 IOPS, 425.12 MiB/s 00:34:28.780 Latency(us) 00:34:28.780 [2024-11-02T10:46:29.182Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:28.780 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:34:28.780 nvme0n1 : 2.01 3399.14 424.89 0.00 0.00 4695.78 2415.12 7815.77 00:34:28.780 [2024-11-02T10:46:29.182Z] =================================================================================================================== 00:34:28.780 [2024-11-02T10:46:29.182Z] Total : 3399.14 424.89 0.00 0.00 4695.78 2415.12 7815.77 00:34:28.780 { 00:34:28.780 "results": [ 00:34:28.780 { 00:34:28.780 "job": "nvme0n1", 00:34:28.780 "core_mask": "0x2", 00:34:28.780 "workload": "randwrite", 00:34:28.780 "status": "finished", 00:34:28.780 "queue_depth": 16, 00:34:28.780 "io_size": 131072, 00:34:28.780 "runtime": 2.006682, 00:34:28.780 "iops": 3399.143461694479, 00:34:28.780 "mibps": 424.89293271180986, 00:34:28.781 "io_failed": 0, 00:34:28.781 "io_timeout": 0, 00:34:28.781 "avg_latency_us": 4695.777862483506, 00:34:28.781 "min_latency_us": 2415.122962962963, 00:34:28.781 "max_latency_us": 7815.774814814815 00:34:28.781 } 00:34:28.781 ], 00:34:28.781 "core_count": 1 00:34:28.781 } 00:34:28.781 11:46:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:34:28.781 11:46:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:34:28.781 11:46:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:34:28.781 11:46:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:34:28.781 | select(.opcode=="crc32c") 00:34:28.781 | "\(.module_name) \(.executed)"' 00:34:28.781 11:46:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:28.781 11:46:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:34:28.781 11:46:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:34:28.781 11:46:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:34:28.781 11:46:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:34:28.781 11:46:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3973303 00:34:28.781 11:46:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 3973303 ']' 00:34:28.781 11:46:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 3973303 00:34:28.781 11:46:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:34:28.781 11:46:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:28.781 11:46:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3973303 00:34:28.781 11:46:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:34:28.781 11:46:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' 
reactor_1 = sudo ']' 00:34:28.781 11:46:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3973303' 00:34:28.781 killing process with pid 3973303 00:34:28.781 11:46:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 3973303 00:34:28.781 Received shutdown signal, test time was about 2.000000 seconds 00:34:28.781 00:34:28.781 Latency(us) 00:34:28.781 [2024-11-02T10:46:29.183Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:28.781 [2024-11-02T10:46:29.183Z] =================================================================================================================== 00:34:28.781 [2024-11-02T10:46:29.183Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:28.781 11:46:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 3973303 00:34:29.040 11:46:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3971939 00:34:29.040 11:46:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 3971939 ']' 00:34:29.040 11:46:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 3971939 00:34:29.040 11:46:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:34:29.040 11:46:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:29.040 11:46:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3971939 00:34:29.040 11:46:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:34:29.040 11:46:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:34:29.040 11:46:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3971939' 00:34:29.040 killing process with pid 3971939 00:34:29.040 11:46:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 3971939 00:34:29.040 11:46:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 3971939 00:34:29.298 00:34:29.298 real 0m15.320s 00:34:29.298 user 0m30.381s 00:34:29.298 sys 0m4.233s 00:34:29.298 11:46:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:29.298 11:46:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:29.298 ************************************ 00:34:29.298 END TEST nvmf_digest_clean 00:34:29.298 ************************************ 00:34:29.298 11:46:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:34:29.298 11:46:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:34:29.298 11:46:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:29.298 11:46:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:29.298 ************************************ 00:34:29.298 START TEST nvmf_digest_error 00:34:29.298 ************************************ 00:34:29.298 11:46:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1127 -- # 
run_digest_error 00:34:29.298 11:46:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:34:29.298 11:46:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:29.298 11:46:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:29.298 11:46:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:29.298 11:46:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=3973743 00:34:29.298 11:46:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:34:29.298 11:46:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 3973743 00:34:29.298 11:46:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 3973743 ']' 00:34:29.298 11:46:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:29.298 11:46:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:29.298 11:46:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:29.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:29.298 11:46:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:29.298 11:46:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:29.298 [2024-11-02 11:46:29.644633] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:34:29.298 [2024-11-02 11:46:29.644710] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:29.556 [2024-11-02 11:46:29.716331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:29.556 [2024-11-02 11:46:29.759060] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:29.556 [2024-11-02 11:46:29.759115] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:29.556 [2024-11-02 11:46:29.759139] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:29.556 [2024-11-02 11:46:29.759150] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:29.556 [2024-11-02 11:46:29.759160] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:29.556 [2024-11-02 11:46:29.759739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:29.556 11:46:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:29.556 11:46:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:34:29.556 11:46:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:29.556 11:46:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:29.556 11:46:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:29.556 11:46:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:29.556 11:46:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:34:29.556 11:46:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.556 11:46:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:29.556 [2024-11-02 11:46:29.896470] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:34:29.556 11:46:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.556 11:46:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:34:29.556 11:46:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:34:29.556 11:46:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.556 11:46:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:29.814 null0 00:34:29.814 [2024-11-02 11:46:30.011017] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:29.814 [2024-11-02 11:46:30.035271] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:29.814 11:46:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.814 11:46:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:34:29.814 11:46:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:34:29.814 11:46:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:34:29.814 11:46:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:34:29.814 11:46:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:34:29.814 11:46:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3973878 00:34:29.814 11:46:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3973878 /var/tmp/bperf.sock 00:34:29.814 11:46:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:34:29.814 11:46:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 3973878 ']' 
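At this point the error-injection target is wired up: crc32c has been assigned to the "error" accel module (accel_assign_opc -o crc32c -m error), common_target_config has created the null0 bdev and brought up the TCP listener on 10.0.0.2:4420 for nqn.2016-06.io.spdk:cnode1, and the bdevperf instance being launched next attaches to it with data digest enabled. A condensed sketch of that wiring as plain rpc.py calls (the wrapper and the default-socket handling are illustrative, not lifted from digest.sh):

    import subprocess

    def rpc(args, sock=None):
        # Thin wrapper over scripts/rpc.py; sock=None means the target's default socket.
        cmd = ["./scripts/rpc.py"] + (["-s", sock] if sock else []) + list(args)
        return subprocess.check_output(cmd, text=True)

    # Target side: route every crc32c (digest) operation through the error-injection module.
    rpc(["accel_assign_opc", "-o", "crc32c", "-m", "error"])

    # Host/bperf side: unlimited retries, per-error NVMe stats, data digest enabled on attach.
    rpc(["bdev_nvme_set_options", "--nvme-error-stat", "--bdev-retry-count", "-1"],
        sock="/var/tmp/bperf.sock")
    rpc(["bdev_nvme_attach_controller", "--ddgst", "-t", "tcp", "-a", "10.0.0.2",
         "-s", "4420", "-f", "ipv4", "-n", "nqn.2016-06.io.spdk:cnode1", "-b", "nvme0"],
        sock="/var/tmp/bperf.sock")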
00:34:29.814 11:46:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:29.814 11:46:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:29.814 11:46:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:29.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:29.814 11:46:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:29.814 11:46:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:29.814 [2024-11-02 11:46:30.082037] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:34:29.814 [2024-11-02 11:46:30.082135] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3973878 ] 00:34:29.814 [2024-11-02 11:46:30.157088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:29.814 [2024-11-02 11:46:30.207021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:30.073 11:46:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:30.073 11:46:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:34:30.073 11:46:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:30.073 11:46:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:30.331 11:46:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:34:30.331 11:46:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:30.331 11:46:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:30.331 11:46:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:30.331 11:46:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:30.331 11:46:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:30.899 nvme0n1 00:34:30.899 11:46:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:34:30.899 11:46:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:30.899 11:46:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
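Injection is left disabled while the controller attaches (accel_error_inject_error -o crc32c -t disable, above) and is switched to corruption just before the workload starts (rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256, below). With the target's crc32c routed through the error module, the digests it produces no longer match what the host recomputes on receive, which is exactly what the host-side nvme_tcp messages that follow report: each affected READ sees a "data digest error" and completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22), and because the controller was attached with --bdev-retry-count -1 the I/O is retried and the 2-second run keeps going. A small, purely hypothetical helper for summarizing such a capture (not part of the test suite), using the message formats shown below:

    import re, sys

    digest_errors = transient_completions = 0
    for line in sys.stdin:
        if "data digest error on tqpair" in line:
            digest_errors += 1
        if re.search(r"COMMAND TRANSIENT TRANSPORT ERROR \(00/22\)", line):
            transient_completions += 1
    print(f"digest errors: {digest_errors}, "
          f"transient-error completions: {transient_completions}")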
00:34:30.899 11:46:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:30.899 11:46:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:34:30.899 11:46:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:30.899 Running I/O for 2 seconds... 00:34:30.899 [2024-11-02 11:46:31.159947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:30.899 [2024-11-02 11:46:31.160005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:2798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.899 [2024-11-02 11:46:31.160029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:30.899 [2024-11-02 11:46:31.176901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:30.899 [2024-11-02 11:46:31.176939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:7905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.899 [2024-11-02 11:46:31.176971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:30.899 [2024-11-02 11:46:31.189341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:30.899 [2024-11-02 11:46:31.189385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:12041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.899 [2024-11-02 11:46:31.189402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:30.899 [2024-11-02 11:46:31.204314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:30.899 [2024-11-02 11:46:31.204346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:1539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.899 [2024-11-02 11:46:31.204379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:30.899 [2024-11-02 11:46:31.221483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:30.899 [2024-11-02 11:46:31.221513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:12710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.899 [2024-11-02 11:46:31.221529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:30.899 [2024-11-02 11:46:31.237878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:30.899 [2024-11-02 11:46:31.237913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:15919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.899 [2024-11-02 11:46:31.237932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:30.899 [2024-11-02 11:46:31.251420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:30.899 [2024-11-02 11:46:31.251453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.899 [2024-11-02 11:46:31.251471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:30.899 [2024-11-02 11:46:31.266880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:30.899 [2024-11-02 11:46:31.266916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:15594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.899 [2024-11-02 11:46:31.266936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:30.899 [2024-11-02 11:46:31.278739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:30.899 [2024-11-02 11:46:31.278774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:9901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.899 [2024-11-02 11:46:31.278794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:30.899 [2024-11-02 11:46:31.293115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:30.899 [2024-11-02 11:46:31.293151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:22494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:30.899 [2024-11-02 11:46:31.293171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:31.160 [2024-11-02 11:46:31.307041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:31.160 [2024-11-02 11:46:31.307085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13767 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:31.160 [2024-11-02 11:46:31.307105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:31.160 [2024-11-02 11:46:31.320150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:31.160 [2024-11-02 11:46:31.320185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:8523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:31.160 [2024-11-02 11:46:31.320205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:31.160 [2024-11-02 11:46:31.334906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:31.160 [2024-11-02 11:46:31.334942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:4515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:31.160 [2024-11-02 11:46:31.334961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:31.160 [2024-11-02 11:46:31.350694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:31.160 [2024-11-02 11:46:31.350729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:10092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:31.160 [2024-11-02 11:46:31.350748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:31.160 [2024-11-02 11:46:31.361980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:31.160 [2024-11-02 11:46:31.362017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:18779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:31.160 [2024-11-02 11:46:31.362037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:31.160 [2024-11-02 11:46:31.379706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:31.161 [2024-11-02 11:46:31.379742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:14072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:31.161 [2024-11-02 11:46:31.379761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:31.161 [2024-11-02 11:46:31.394079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:31.161 [2024-11-02 11:46:31.394115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:31.161 [2024-11-02 11:46:31.394134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:31.161 [2024-11-02 11:46:31.407792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:31.161 [2024-11-02 11:46:31.407828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:15493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:31.161 [2024-11-02 11:46:31.407847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:31.161 [2024-11-02 11:46:31.421392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:31.161 [2024-11-02 11:46:31.421423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:31.161 [2024-11-02 11:46:31.421446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:31.161 [2024-11-02 11:46:31.435381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:31.161 [2024-11-02 11:46:31.435413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:24824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:31.161 [2024-11-02 11:46:31.435430] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:31.161 [2024-11-02 11:46:31.449018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:31.161 [2024-11-02 11:46:31.449054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:13128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:31.161 [2024-11-02 11:46:31.449074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:31.161 [2024-11-02 11:46:31.462668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:31.161 [2024-11-02 11:46:31.462704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:17665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:31.161 [2024-11-02 11:46:31.462724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:31.161 [2024-11-02 11:46:31.476374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:31.161 [2024-11-02 11:46:31.476406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:19825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:31.161 [2024-11-02 11:46:31.476424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:31.161 [2024-11-02 11:46:31.489226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:31.161 [2024-11-02 11:46:31.489269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:8061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:31.161 [2024-11-02 11:46:31.489313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:31.161 [2024-11-02 11:46:31.504602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:31.161 [2024-11-02 11:46:31.504637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:10918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:31.161 [2024-11-02 11:46:31.504657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:31.161 [2024-11-02 11:46:31.517338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:31.161 [2024-11-02 11:46:31.517367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:31.161 [2024-11-02 11:46:31.517383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:31.161 [2024-11-02 11:46:31.532481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:31.161 [2024-11-02 11:46:31.532513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:7379 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:34:31.161 [2024-11-02 11:46:31.532530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:31.161 [2024-11-02 11:46:31.544548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:31.161 [2024-11-02 11:46:31.544585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:12777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:31.161 [2024-11-02 11:46:31.544620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:31.161 [2024-11-02 11:46:31.560160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:31.161 [2024-11-02 11:46:31.560197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:4433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:31.161 [2024-11-02 11:46:31.560216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:31.422 [2024-11-02 11:46:31.578485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:31.422 [2024-11-02 11:46:31.578516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:31.422 [2024-11-02 11:46:31.578548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:31.422 [2024-11-02 11:46:31.596285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:31.422 [2024-11-02 11:46:31.596333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:9171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:31.422 [2024-11-02 11:46:31.596350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:31.422 [2024-11-02 11:46:31.610811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:31.422 [2024-11-02 11:46:31.610848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:22736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:31.422 [2024-11-02 11:46:31.610867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:31.422 [2024-11-02 11:46:31.623399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:31.422 [2024-11-02 11:46:31.623444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:31.422 [2024-11-02 11:46:31.623462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:31.422 [2024-11-02 11:46:31.637399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:31.422 [2024-11-02 11:46:31.637429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 
nsid:1 lba:22582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:31.422 [2024-11-02 11:46:31.637445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:31.422 [2024-11-02 11:46:31.653204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:31.422 [2024-11-02 11:46:31.653240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:11288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:31.422 [2024-11-02 11:46:31.653272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:31.422 [2024-11-02 11:46:31.665866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:31.422 [2024-11-02 11:46:31.665901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:21496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:31.422 [2024-11-02 11:46:31.665922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:31.422 [2024-11-02 11:46:31.681158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:31.422 [2024-11-02 11:46:31.681193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:31.422 [2024-11-02 11:46:31.681211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:31.422 [2024-11-02 11:46:31.695812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:31.422 [2024-11-02 11:46:31.695847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:6810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:31.422 [2024-11-02 11:46:31.695866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:31.422 [2024-11-02 11:46:31.711869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:31.422 [2024-11-02 11:46:31.711904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:11004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:31.422 [2024-11-02 11:46:31.711924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:31.422 [2024-11-02 11:46:31.724619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:31.422 [2024-11-02 11:46:31.724654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:31.422 [2024-11-02 11:46:31.724673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:31.422 [2024-11-02 11:46:31.741470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:31.422 [2024-11-02 11:46:31.741502] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:16680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:31.422 [2024-11-02 11:46:31.741518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:31.422 [2024-11-02 11:46:31.757207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:31.422 [2024-11-02 11:46:31.757242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:31.422 [2024-11-02 11:46:31.757270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:31.422 [2024-11-02 11:46:31.769938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:31.422 [2024-11-02 11:46:31.769973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:18307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:31.422 [2024-11-02 11:46:31.769993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:31.422 [2024-11-02 11:46:31.788393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:31.422 [2024-11-02 11:46:31.788424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:9730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:31.422 [2024-11-02 11:46:31.788440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:31.422 [2024-11-02 11:46:31.799718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:31.422 [2024-11-02 11:46:31.799754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:31.422 [2024-11-02 11:46:31.799780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:31.422 [2024-11-02 11:46:31.814980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:31.422 [2024-11-02 11:46:31.815016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:17110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:31.422 [2024-11-02 11:46:31.815035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:31.682 [2024-11-02 11:46:31.829249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:31.682 [2024-11-02 11:46:31.829308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:4200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:31.682 [2024-11-02 11:46:31.829325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:31.682 [2024-11-02 11:46:31.842426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 
00:34:31.682 [2024-11-02 11:46:31.842458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:7772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:31.682 [2024-11-02 11:46:31.842476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:31.682 [2024-11-02 11:46:31.858502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:31.682 [2024-11-02 11:46:31.858534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:31.682 [2024-11-02 11:46:31.858552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:31.682 [2024-11-02 11:46:31.871498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:31.682 [2024-11-02 11:46:31.871528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:31.682 [2024-11-02 11:46:31.871544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:31.682 [2024-11-02 11:46:31.887281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:31.682 [2024-11-02 11:46:31.887329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:13840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:31.682 [2024-11-02 11:46:31.887347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:31.682 [2024-11-02 11:46:31.903822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:31.682 [2024-11-02 11:46:31.903857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:31.682 [2024-11-02 11:46:31.903876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:31.682 [2024-11-02 11:46:31.916147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:31.682 [2024-11-02 11:46:31.916182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:11805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:31.682 [2024-11-02 11:46:31.916200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:31.682 [2024-11-02 11:46:31.930698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:31.682 [2024-11-02 11:46:31.930740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:8153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:31.682 [2024-11-02 11:46:31.930760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:31.682 [2024-11-02 11:46:31.946733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:31.682 [2024-11-02 11:46:31.946768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:8338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:31.682 [2024-11-02 11:46:31.946788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:31.682 [2024-11-02 11:46:31.960043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:31.682 [2024-11-02 11:46:31.960079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:31.682 [2024-11-02 11:46:31.960097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:31.682 [2024-11-02 11:46:31.974037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:31.682 [2024-11-02 11:46:31.974072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:10257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:31.682 [2024-11-02 11:46:31.974092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:31.682 [2024-11-02 11:46:31.989376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:31.682 [2024-11-02 11:46:31.989406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:15719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:31.682 [2024-11-02 11:46:31.989422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:31.682 [2024-11-02 11:46:32.006864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:31.683 [2024-11-02 11:46:32.006899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:1465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:31.683 [2024-11-02 11:46:32.006917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:31.683 [2024-11-02 11:46:32.022489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:31.683 [2024-11-02 11:46:32.022521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:20598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:31.683 [2024-11-02 11:46:32.022555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:31.683 [2024-11-02 11:46:32.034395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:31.683 [2024-11-02 11:46:32.034426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:5391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:31.683 [2024-11-02 11:46:32.034443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:31.683 [2024-11-02 11:46:32.051050] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:31.683 [2024-11-02 11:46:32.051086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:13629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:31.683 [2024-11-02 11:46:32.051105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:31.683 [2024-11-02 11:46:32.062567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:31.683 [2024-11-02 11:46:32.062604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:7928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:31.683 [2024-11-02 11:46:32.062623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:31.683 [2024-11-02 11:46:32.079009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:31.683 [2024-11-02 11:46:32.079047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:24185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:31.683 [2024-11-02 11:46:32.079067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:31.943 [2024-11-02 11:46:32.095752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:31.943 [2024-11-02 11:46:32.095790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:7076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:31.943 [2024-11-02 11:46:32.095810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:31.943 [2024-11-02 11:46:32.108384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:31.943 [2024-11-02 11:46:32.108415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:3399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:31.943 [2024-11-02 11:46:32.108432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:31.943 [2024-11-02 11:46:32.126129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:31.943 [2024-11-02 11:46:32.126165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:31.943 [2024-11-02 11:46:32.126184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:31.943 17239.00 IOPS, 67.34 MiB/s [2024-11-02T10:46:32.345Z] [2024-11-02 11:46:32.143047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:31.943 [2024-11-02 11:46:32.143083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:12696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:31.943 [2024-11-02 11:46:32.143103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:31.943 [2024-11-02 11:46:32.160529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:31.943 [2024-11-02 11:46:32.160580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:8977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:31.943 [2024-11-02 11:46:32.160599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:31.943 [2024-11-02 11:46:32.176862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:31.943 [2024-11-02 11:46:32.176899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:16799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:31.943 [2024-11-02 11:46:32.176918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:31.943 [2024-11-02 11:46:32.188888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:31.943 [2024-11-02 11:46:32.188931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:31.943 [2024-11-02 11:46:32.188951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:31.943 [2024-11-02 11:46:32.204420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:31.943 [2024-11-02 11:46:32.204452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:15428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:31.943 [2024-11-02 11:46:32.204470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:31.943 [2024-11-02 11:46:32.217167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:31.943 [2024-11-02 11:46:32.217202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:20805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:31.943 [2024-11-02 11:46:32.217221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:31.943 [2024-11-02 11:46:32.230871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:31.943 [2024-11-02 11:46:32.230905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:4754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:31.943 [2024-11-02 11:46:32.230924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:31.943 [2024-11-02 11:46:32.246400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:31.943 [2024-11-02 11:46:32.246440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:20306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:31.943 [2024-11-02 11:46:32.246457] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:31.943 [2024-11-02 11:46:32.258513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:31.943 [2024-11-02 11:46:32.258544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:19492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:31.944 [2024-11-02 11:46:32.258561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:31.944 [2024-11-02 11:46:32.273451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:31.944 [2024-11-02 11:46:32.273482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:9439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:31.944 [2024-11-02 11:46:32.273499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:31.944 [2024-11-02 11:46:32.285192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:31.944 [2024-11-02 11:46:32.285227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:12955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:31.944 [2024-11-02 11:46:32.285246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:31.944 [2024-11-02 11:46:32.300753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:31.944 [2024-11-02 11:46:32.300788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:3943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:31.944 [2024-11-02 11:46:32.300807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:31.944 [2024-11-02 11:46:32.316889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:31.944 [2024-11-02 11:46:32.316924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:23561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:31.944 [2024-11-02 11:46:32.316944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:31.944 [2024-11-02 11:46:32.328934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:31.944 [2024-11-02 11:46:32.328969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:15152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:31.944 [2024-11-02 11:46:32.328987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:31.944 [2024-11-02 11:46:32.342448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:31.944 [2024-11-02 11:46:32.342478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:16486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:31.944 [2024-11-02 11:46:32.342499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:32.204 [2024-11-02 11:46:32.356711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:32.204 [2024-11-02 11:46:32.356747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:3131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.204 [2024-11-02 11:46:32.356766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:32.204 [2024-11-02 11:46:32.370334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:32.204 [2024-11-02 11:46:32.370365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:19960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.204 [2024-11-02 11:46:32.370382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:32.204 [2024-11-02 11:46:32.385179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:32.204 [2024-11-02 11:46:32.385213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.204 [2024-11-02 11:46:32.385232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:32.204 [2024-11-02 11:46:32.398862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:32.204 [2024-11-02 11:46:32.398896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:19661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.205 [2024-11-02 11:46:32.398915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:32.205 [2024-11-02 11:46:32.411368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:32.205 [2024-11-02 11:46:32.411398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:23399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.205 [2024-11-02 11:46:32.411414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:32.205 [2024-11-02 11:46:32.425062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:32.205 [2024-11-02 11:46:32.425097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:6434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.205 [2024-11-02 11:46:32.425124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:32.205 [2024-11-02 11:46:32.440723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:32.205 [2024-11-02 11:46:32.440757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 
lba:23201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.205 [2024-11-02 11:46:32.440776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:32.205 [2024-11-02 11:46:32.453486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:32.205 [2024-11-02 11:46:32.453516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.205 [2024-11-02 11:46:32.453532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:32.205 [2024-11-02 11:46:32.467570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:32.205 [2024-11-02 11:46:32.467621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:9265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.205 [2024-11-02 11:46:32.467640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:32.205 [2024-11-02 11:46:32.480356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:32.205 [2024-11-02 11:46:32.480387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:14317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.205 [2024-11-02 11:46:32.480403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:32.205 [2024-11-02 11:46:32.494594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:32.205 [2024-11-02 11:46:32.494629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:7632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.205 [2024-11-02 11:46:32.494648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:32.205 [2024-11-02 11:46:32.510538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:32.205 [2024-11-02 11:46:32.510585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:6096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.205 [2024-11-02 11:46:32.510601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:32.205 [2024-11-02 11:46:32.522449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:32.205 [2024-11-02 11:46:32.522479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:23954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.205 [2024-11-02 11:46:32.522495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:32.205 [2024-11-02 11:46:32.538576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:32.205 [2024-11-02 11:46:32.538611] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.205 [2024-11-02 11:46:32.538630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:32.205 [2024-11-02 11:46:32.553266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:32.205 [2024-11-02 11:46:32.553322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:5076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.205 [2024-11-02 11:46:32.553340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:32.205 [2024-11-02 11:46:32.567374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:32.205 [2024-11-02 11:46:32.567404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:12562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.205 [2024-11-02 11:46:32.567420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:32.205 [2024-11-02 11:46:32.579519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:32.205 [2024-11-02 11:46:32.579551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:24624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.205 [2024-11-02 11:46:32.579568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:32.205 [2024-11-02 11:46:32.593870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:32.205 [2024-11-02 11:46:32.593905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.205 [2024-11-02 11:46:32.593924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:32.466 [2024-11-02 11:46:32.608504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:32.466 [2024-11-02 11:46:32.608537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:8555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.466 [2024-11-02 11:46:32.608555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:32.466 [2024-11-02 11:46:32.624135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:32.466 [2024-11-02 11:46:32.624170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:9792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.466 [2024-11-02 11:46:32.624189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:32.466 [2024-11-02 11:46:32.642321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 
00:34:32.466 [2024-11-02 11:46:32.642352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:12387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.466 [2024-11-02 11:46:32.642369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:32.466 [2024-11-02 11:46:32.653767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:32.466 [2024-11-02 11:46:32.653803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:21443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.466 [2024-11-02 11:46:32.653822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:32.466 [2024-11-02 11:46:32.670551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:32.466 [2024-11-02 11:46:32.670603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:19973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.466 [2024-11-02 11:46:32.670629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:32.466 [2024-11-02 11:46:32.685807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:32.466 [2024-11-02 11:46:32.685842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:1317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.466 [2024-11-02 11:46:32.685861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:32.466 [2024-11-02 11:46:32.698205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:32.466 [2024-11-02 11:46:32.698239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:20532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.466 [2024-11-02 11:46:32.698268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:32.466 [2024-11-02 11:46:32.713553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:32.466 [2024-11-02 11:46:32.713598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.466 [2024-11-02 11:46:32.713617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:32.466 [2024-11-02 11:46:32.726528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:32.466 [2024-11-02 11:46:32.726558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:22978 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.466 [2024-11-02 11:46:32.726574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:32.466 [2024-11-02 11:46:32.741858] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:32.466 [2024-11-02 11:46:32.741893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:19898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.466 [2024-11-02 11:46:32.741912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:32.466 [2024-11-02 11:46:32.754236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:32.466 [2024-11-02 11:46:32.754281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:4441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.466 [2024-11-02 11:46:32.754314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:32.466 [2024-11-02 11:46:32.768864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:32.466 [2024-11-02 11:46:32.768899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.466 [2024-11-02 11:46:32.768918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:32.466 [2024-11-02 11:46:32.785713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:32.466 [2024-11-02 11:46:32.785748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.466 [2024-11-02 11:46:32.785768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:32.466 [2024-11-02 11:46:32.798422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:32.466 [2024-11-02 11:46:32.798458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.466 [2024-11-02 11:46:32.798474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:32.466 [2024-11-02 11:46:32.813769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:32.466 [2024-11-02 11:46:32.813804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:22329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.466 [2024-11-02 11:46:32.813823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:32.466 [2024-11-02 11:46:32.829235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:32.466 [2024-11-02 11:46:32.829294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:19511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.466 [2024-11-02 11:46:32.829316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:34:32.466 [2024-11-02 11:46:32.841078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:32.466 [2024-11-02 11:46:32.841113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:14993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.467 [2024-11-02 11:46:32.841132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:32.467 [2024-11-02 11:46:32.857468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:32.467 [2024-11-02 11:46:32.857500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:1755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.467 [2024-11-02 11:46:32.857518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:32.727 [2024-11-02 11:46:32.874452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:32.727 [2024-11-02 11:46:32.874483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:23789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.727 [2024-11-02 11:46:32.874499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:32.727 [2024-11-02 11:46:32.890432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:32.727 [2024-11-02 11:46:32.890463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:5894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.727 [2024-11-02 11:46:32.890480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:32.727 [2024-11-02 11:46:32.903120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:32.727 [2024-11-02 11:46:32.903155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:20818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.727 [2024-11-02 11:46:32.903174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:32.727 [2024-11-02 11:46:32.917550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:32.727 [2024-11-02 11:46:32.917597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:7244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.727 [2024-11-02 11:46:32.917617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:32.728 [2024-11-02 11:46:32.935662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:32.728 [2024-11-02 11:46:32.935698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:15165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.728 [2024-11-02 11:46:32.935717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:32.728 [2024-11-02 11:46:32.951925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:32.728 [2024-11-02 11:46:32.951960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:11088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.728 [2024-11-02 11:46:32.951980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:32.728 [2024-11-02 11:46:32.964247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:32.728 [2024-11-02 11:46:32.964305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:3732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.728 [2024-11-02 11:46:32.964322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:32.728 [2024-11-02 11:46:32.979331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:32.728 [2024-11-02 11:46:32.979362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.728 [2024-11-02 11:46:32.979379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:32.728 [2024-11-02 11:46:32.992548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:32.728 [2024-11-02 11:46:32.992597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.728 [2024-11-02 11:46:32.992616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:32.728 [2024-11-02 11:46:33.007521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:32.728 [2024-11-02 11:46:33.007567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:7723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.728 [2024-11-02 11:46:33.007584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:32.728 [2024-11-02 11:46:33.024226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:32.728 [2024-11-02 11:46:33.024270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:12263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.728 [2024-11-02 11:46:33.024292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:32.728 [2024-11-02 11:46:33.036891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:32.728 [2024-11-02 11:46:33.036927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:23192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.728 [2024-11-02 11:46:33.036946] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:32.728 [2024-11-02 11:46:33.054192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:32.728 [2024-11-02 11:46:33.054227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:16696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.728 [2024-11-02 11:46:33.054266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:32.728 [2024-11-02 11:46:33.067925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:32.728 [2024-11-02 11:46:33.067962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:11223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.728 [2024-11-02 11:46:33.067981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:32.728 [2024-11-02 11:46:33.081477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:32.728 [2024-11-02 11:46:33.081507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:24455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.728 [2024-11-02 11:46:33.081537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:32.728 [2024-11-02 11:46:33.096095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:32.728 [2024-11-02 11:46:33.096132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.728 [2024-11-02 11:46:33.096152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:32.728 [2024-11-02 11:46:33.110424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:32.728 [2024-11-02 11:46:33.110456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:4324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.728 [2024-11-02 11:46:33.110474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:32.728 [2024-11-02 11:46:33.123630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:32.728 [2024-11-02 11:46:33.123666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.728 [2024-11-02 11:46:33.123685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:32.987 17462.50 IOPS, 68.21 MiB/s [2024-11-02T10:46:33.389Z] [2024-11-02 11:46:33.141420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1806bf0) 00:34:32.987 [2024-11-02 11:46:33.141450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 
lba:2664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.987 [2024-11-02 11:46:33.141467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:32.987 00:34:32.987 Latency(us) 00:34:32.987 [2024-11-02T10:46:33.389Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:32.987 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:34:32.987 nvme0n1 : 2.01 17473.08 68.25 0.00 0.00 7316.86 3689.43 25826.04 00:34:32.987 [2024-11-02T10:46:33.389Z] =================================================================================================================== 00:34:32.987 [2024-11-02T10:46:33.389Z] Total : 17473.08 68.25 0.00 0.00 7316.86 3689.43 25826.04 00:34:32.987 { 00:34:32.987 "results": [ 00:34:32.987 { 00:34:32.987 "job": "nvme0n1", 00:34:32.987 "core_mask": "0x2", 00:34:32.987 "workload": "randread", 00:34:32.987 "status": "finished", 00:34:32.987 "queue_depth": 128, 00:34:32.987 "io_size": 4096, 00:34:32.987 "runtime": 2.006114, 00:34:32.987 "iops": 17473.084779828067, 00:34:32.987 "mibps": 68.25423742120338, 00:34:32.987 "io_failed": 0, 00:34:32.987 "io_timeout": 0, 00:34:32.987 "avg_latency_us": 7316.858193529164, 00:34:32.987 "min_latency_us": 3689.434074074074, 00:34:32.987 "max_latency_us": 25826.03851851852 00:34:32.987 } 00:34:32.987 ], 00:34:32.987 "core_count": 1 00:34:32.987 } 00:34:32.987 11:46:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:34:32.987 11:46:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:34:32.987 11:46:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:34:32.987 | .driver_specific 00:34:32.987 | .nvme_error 00:34:32.987 | .status_code 00:34:32.987 | .command_transient_transport_error' 00:34:32.987 11:46:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:34:33.246 11:46:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 137 > 0 )) 00:34:33.246 11:46:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3973878 00:34:33.246 11:46:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 3973878 ']' 00:34:33.246 11:46:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 3973878 00:34:33.246 11:46:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:34:33.246 11:46:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:33.246 11:46:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3973878 00:34:33.246 11:46:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:34:33.246 11:46:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:34:33.246 11:46:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3973878' 00:34:33.246 killing process with pid 3973878 00:34:33.246 11:46:33 
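The (( 137 > 0 )) check above is the pass condition for this case: get_transient_errcount queries the bdevperf instance on /var/tmp/bperf.sock for the bdev's NVMe error counters and pulls out the transient transport error count with the jq filter shown in the trace; 137 such errors were recorded during the 4096-byte run. A minimal standalone sketch of that query, reusing the socket path, SPDK checkout path and bdev name this run happens to use:

    #!/usr/bin/env bash
    # Count COMMAND TRANSIENT TRANSPORT ERROR completions recorded for nvme0n1.
    # The counters are available because the script enables --nvme-error-stat
    # when it configures this bdevperf instance (visible later in the trace).
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    BPERF_SOCK=/var/tmp/bperf.sock

    errcount=$("$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0]
               | .driver_specific
               | .nvme_error
               | .status_code
               | .command_transient_transport_error')

    # The test only needs at least one transient transport error, i.e. proof
    # that the injected digest corruption was detected and the I/O was retried.
    (( errcount > 0 )) && echo "transient transport errors: $errcount"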
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 3973878 00:34:33.246 Received shutdown signal, test time was about 2.000000 seconds 00:34:33.246 00:34:33.246 Latency(us) 00:34:33.246 [2024-11-02T10:46:33.648Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:33.247 [2024-11-02T10:46:33.649Z] =================================================================================================================== 00:34:33.247 [2024-11-02T10:46:33.649Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:33.247 11:46:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 3973878 00:34:33.506 11:46:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:34:33.506 11:46:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:34:33.506 11:46:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:34:33.506 11:46:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:34:33.506 11:46:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:34:33.506 11:46:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3974286 00:34:33.506 11:46:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:34:33.506 11:46:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3974286 /var/tmp/bperf.sock 00:34:33.506 11:46:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 3974286 ']' 00:34:33.506 11:46:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:33.506 11:46:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:33.506 11:46:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:33.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:33.506 11:46:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:33.506 11:46:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:33.506 [2024-11-02 11:46:33.739626] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:34:33.506 [2024-11-02 11:46:33.739708] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3974286 ] 00:34:33.506 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:33.506 Zero copy mechanism will not be used. 
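With the 4096-byte case finished and its bdevperf killed, run_bperf_err randread 131072 16 repeats the digest-error scenario with 128 KiB reads at queue depth 16. The relaunch traced above amounts to starting bdevperf on its own RPC socket and waiting until that socket answers; a simplified stand-in for that step (the retry loop is an illustration of what waitforlisten does, not the autotest helper itself):

    #!/usr/bin/env bash
    # Start bdevperf for the 128 KiB randread error run and wait for its RPC socket.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    BPERF_SOCK=/var/tmp/bperf.sock

    # -z keeps bdevperf idle until it is driven over RPC (perform_tests later on).
    "$SPDK_DIR/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" \
        -w randread -o 131072 -t 2 -q 16 -z &
    bperfpid=$!

    # Poll the socket until the application answers RPCs.
    for _ in $(seq 1 100); do
        "$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" rpc_get_methods &>/dev/null && break
        sleep 0.1
    done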
00:34:33.506 [2024-11-02 11:46:33.813500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:33.506 [2024-11-02 11:46:33.862691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:33.764 11:46:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:33.764 11:46:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:34:33.764 11:46:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:33.764 11:46:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:34.022 11:46:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:34:34.022 11:46:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.022 11:46:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:34.022 11:46:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.022 11:46:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:34.022 11:46:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:34.590 nvme0n1 00:34:34.590 11:46:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:34:34.590 11:46:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.590 11:46:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:34.590 11:46:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.590 11:46:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:34:34.590 11:46:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:34.590 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:34.590 Zero copy mechanism will not be used. 00:34:34.590 Running I/O for 2 seconds... 
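For orientation, the randread/131072/16 error run that follows is driven by the RPC sequence traced above. A condensed shell sketch of that sequence, using only commands and arguments visible in this job's trace (bdevperf RPC socket /var/tmp/bperf.sock, target 10.0.0.2:4420, subsystem nqn.2016-06.io.spdk:cnode1; paths relative to the spdk checkout):

# start the host-side bdevperf with its own RPC socket (-z: wait for perform_tests), then wait for the socket
build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &

# enable per-bdev NVMe error counters and unlimited retries on the host side
scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# attach over TCP with data digest enabled (--ddgst) so corrupted payloads fail the ddgst check
scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# corrupt crc32c results in the accel layer (issued via rpc_cmd in the trace, i.e. against the nvmf target's default RPC socket)
scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32

# run the workload; each digest miss below is reported as COMMAND TRANSIENT TRANSPORT ERROR and retried
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

# afterwards the test counts the transient errors, as done for the previous run above
scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'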
00:34:34.590 [2024-11-02 11:46:34.964978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:34.590 [2024-11-02 11:46:34.965032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:34.590 [2024-11-02 11:46:34.965052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:34.590 [2024-11-02 11:46:34.974056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:34.590 [2024-11-02 11:46:34.974089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:34.590 [2024-11-02 11:46:34.974106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:34.590 [2024-11-02 11:46:34.982865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:34.590 [2024-11-02 11:46:34.982897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:34.590 [2024-11-02 11:46:34.982914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:34.590 [2024-11-02 11:46:34.991925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:34.590 [2024-11-02 11:46:34.991960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:34.590 [2024-11-02 11:46:34.991978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:34.851 [2024-11-02 11:46:35.001438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:34.851 [2024-11-02 11:46:35.001473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:34.851 [2024-11-02 11:46:35.001492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:34.851 [2024-11-02 11:46:35.010958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:34.851 [2024-11-02 11:46:35.010990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:34.851 [2024-11-02 11:46:35.011007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:34.851 [2024-11-02 11:46:35.020582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:34.851 [2024-11-02 11:46:35.020629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:34.851 [2024-11-02 11:46:35.020645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:34.851 [2024-11-02 11:46:35.029620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:34.851 [2024-11-02 11:46:35.029652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:34.851 [2024-11-02 11:46:35.029670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:34.851 [2024-11-02 11:46:35.039038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:34.851 [2024-11-02 11:46:35.039087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:34.851 [2024-11-02 11:46:35.039104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:34.851 [2024-11-02 11:46:35.048175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:34.851 [2024-11-02 11:46:35.048207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:34.851 [2024-11-02 11:46:35.048232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:34.851 [2024-11-02 11:46:35.057643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:34.851 [2024-11-02 11:46:35.057690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:34.851 [2024-11-02 11:46:35.057708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:34.851 [2024-11-02 11:46:35.066491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:34.851 [2024-11-02 11:46:35.066524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:34.851 [2024-11-02 11:46:35.066555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:34.851 [2024-11-02 11:46:35.074957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:34.851 [2024-11-02 11:46:35.074990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:34.851 [2024-11-02 11:46:35.075007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:34.851 [2024-11-02 11:46:35.083473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:34.851 [2024-11-02 11:46:35.083507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:34.851 [2024-11-02 11:46:35.083540] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:34.851 [2024-11-02 11:46:35.092020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:34.851 [2024-11-02 11:46:35.092068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:34.851 [2024-11-02 11:46:35.092085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:34.851 [2024-11-02 11:46:35.100742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:34.851 [2024-11-02 11:46:35.100777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:34.851 [2024-11-02 11:46:35.100809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:34.851 [2024-11-02 11:46:35.109398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:34.851 [2024-11-02 11:46:35.109432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:34.851 [2024-11-02 11:46:35.109451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:34.851 [2024-11-02 11:46:35.117852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:34.851 [2024-11-02 11:46:35.117900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:34.852 [2024-11-02 11:46:35.117917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:34.852 [2024-11-02 11:46:35.126432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:34.852 [2024-11-02 11:46:35.126485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:34.852 [2024-11-02 11:46:35.126504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:34.852 [2024-11-02 11:46:35.134921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:34.852 [2024-11-02 11:46:35.134953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:34.852 [2024-11-02 11:46:35.134984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:34.852 [2024-11-02 11:46:35.143444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:34.852 [2024-11-02 11:46:35.143478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:34.852 [2024-11-02 11:46:35.143495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:34.852 [2024-11-02 11:46:35.152038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:34.852 [2024-11-02 11:46:35.152086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:34.852 [2024-11-02 11:46:35.152102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:34.852 [2024-11-02 11:46:35.160538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:34.852 [2024-11-02 11:46:35.160570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:34.852 [2024-11-02 11:46:35.160604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:34.852 [2024-11-02 11:46:35.169029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:34.852 [2024-11-02 11:46:35.169078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:34.852 [2024-11-02 11:46:35.169094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:34.852 [2024-11-02 11:46:35.177568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:34.852 [2024-11-02 11:46:35.177600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:34.852 [2024-11-02 11:46:35.177632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:34.852 [2024-11-02 11:46:35.186105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:34.852 [2024-11-02 11:46:35.186144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:34.852 [2024-11-02 11:46:35.186164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:34.852 [2024-11-02 11:46:35.195214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:34.852 [2024-11-02 11:46:35.195252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:34.852 [2024-11-02 11:46:35.195284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:34.852 [2024-11-02 11:46:35.204318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:34.852 [2024-11-02 11:46:35.204366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21504 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:34.852 [2024-11-02 11:46:35.204384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:34.852 [2024-11-02 11:46:35.213400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:34.852 [2024-11-02 11:46:35.213447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:34.852 [2024-11-02 11:46:35.213463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:34.852 [2024-11-02 11:46:35.222499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:34.852 [2024-11-02 11:46:35.222547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:34.852 [2024-11-02 11:46:35.222564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:34.852 [2024-11-02 11:46:35.231836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:34.852 [2024-11-02 11:46:35.231874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:34.852 [2024-11-02 11:46:35.231896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:34.852 [2024-11-02 11:46:35.240845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:34.852 [2024-11-02 11:46:35.240881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:34.852 [2024-11-02 11:46:35.240900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:34.852 [2024-11-02 11:46:35.249982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:34.852 [2024-11-02 11:46:35.250041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:34.852 [2024-11-02 11:46:35.250064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:35.112 [2024-11-02 11:46:35.259040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.112 [2024-11-02 11:46:35.259076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.112 [2024-11-02 11:46:35.259096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:35.112 [2024-11-02 11:46:35.268064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.112 [2024-11-02 11:46:35.268101] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.112 [2024-11-02 11:46:35.268121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:35.112 [2024-11-02 11:46:35.277005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.112 [2024-11-02 11:46:35.277043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.112 [2024-11-02 11:46:35.277070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:35.112 [2024-11-02 11:46:35.286077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.112 [2024-11-02 11:46:35.286129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.112 [2024-11-02 11:46:35.286150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:35.112 [2024-11-02 11:46:35.294980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.112 [2024-11-02 11:46:35.295016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.112 [2024-11-02 11:46:35.295050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:35.112 [2024-11-02 11:46:35.304029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.112 [2024-11-02 11:46:35.304065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.112 [2024-11-02 11:46:35.304084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:35.112 [2024-11-02 11:46:35.313391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.112 [2024-11-02 11:46:35.313436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.112 [2024-11-02 11:46:35.313453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:35.112 [2024-11-02 11:46:35.322411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.112 [2024-11-02 11:46:35.322443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.112 [2024-11-02 11:46:35.322461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:35.112 [2024-11-02 11:46:35.331520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 
00:34:35.112 [2024-11-02 11:46:35.331551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.112 [2024-11-02 11:46:35.331586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:35.112 [2024-11-02 11:46:35.340680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.112 [2024-11-02 11:46:35.340718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.112 [2024-11-02 11:46:35.340738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:35.112 [2024-11-02 11:46:35.349738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.112 [2024-11-02 11:46:35.349776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.112 [2024-11-02 11:46:35.349796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:35.112 [2024-11-02 11:46:35.358709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.112 [2024-11-02 11:46:35.358746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.112 [2024-11-02 11:46:35.358766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:35.112 [2024-11-02 11:46:35.367619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.112 [2024-11-02 11:46:35.367653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.112 [2024-11-02 11:46:35.367672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:35.112 [2024-11-02 11:46:35.376417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.112 [2024-11-02 11:46:35.376449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.112 [2024-11-02 11:46:35.376466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:35.112 [2024-11-02 11:46:35.385398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.112 [2024-11-02 11:46:35.385430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.112 [2024-11-02 11:46:35.385446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:35.112 [2024-11-02 11:46:35.394264] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.112 [2024-11-02 11:46:35.394314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.112 [2024-11-02 11:46:35.394332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:35.112 [2024-11-02 11:46:35.402959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.112 [2024-11-02 11:46:35.402996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.113 [2024-11-02 11:46:35.403015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:35.113 [2024-11-02 11:46:35.411705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.113 [2024-11-02 11:46:35.411739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.113 [2024-11-02 11:46:35.411757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:35.113 [2024-11-02 11:46:35.420451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.113 [2024-11-02 11:46:35.420482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.113 [2024-11-02 11:46:35.420513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:35.113 [2024-11-02 11:46:35.429354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.113 [2024-11-02 11:46:35.429387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.113 [2024-11-02 11:46:35.429425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:35.113 [2024-11-02 11:46:35.438180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.113 [2024-11-02 11:46:35.438217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.113 [2024-11-02 11:46:35.438237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:35.113 [2024-11-02 11:46:35.447029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.113 [2024-11-02 11:46:35.447066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.113 [2024-11-02 11:46:35.447086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:34:35.113 [2024-11-02 11:46:35.455820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.113 [2024-11-02 11:46:35.455857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.113 [2024-11-02 11:46:35.455877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:35.113 [2024-11-02 11:46:35.464573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.113 [2024-11-02 11:46:35.464623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.113 [2024-11-02 11:46:35.464643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:35.113 [2024-11-02 11:46:35.473296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.113 [2024-11-02 11:46:35.473344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.113 [2024-11-02 11:46:35.473361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:35.113 [2024-11-02 11:46:35.482114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.113 [2024-11-02 11:46:35.482149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.113 [2024-11-02 11:46:35.482168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:35.113 [2024-11-02 11:46:35.490882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.113 [2024-11-02 11:46:35.490919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.113 [2024-11-02 11:46:35.490940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:35.113 [2024-11-02 11:46:35.499702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.113 [2024-11-02 11:46:35.499741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.113 [2024-11-02 11:46:35.499762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:35.113 [2024-11-02 11:46:35.508580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.113 [2024-11-02 11:46:35.508622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.113 [2024-11-02 11:46:35.508641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:35.374 [2024-11-02 11:46:35.517545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.374 [2024-11-02 11:46:35.517597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.374 [2024-11-02 11:46:35.517614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:35.374 [2024-11-02 11:46:35.526479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.374 [2024-11-02 11:46:35.526537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.374 [2024-11-02 11:46:35.526555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:35.374 [2024-11-02 11:46:35.535591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.374 [2024-11-02 11:46:35.535642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.374 [2024-11-02 11:46:35.535662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:35.374 [2024-11-02 11:46:35.544132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.374 [2024-11-02 11:46:35.544169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.374 [2024-11-02 11:46:35.544190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:35.374 [2024-11-02 11:46:35.552665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.374 [2024-11-02 11:46:35.552698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.374 [2024-11-02 11:46:35.552715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:35.374 [2024-11-02 11:46:35.561176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.374 [2024-11-02 11:46:35.561214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.374 [2024-11-02 11:46:35.561233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:35.374 [2024-11-02 11:46:35.569687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.374 [2024-11-02 11:46:35.569738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.374 [2024-11-02 11:46:35.569757] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:35.374 [2024-11-02 11:46:35.578281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.374 [2024-11-02 11:46:35.578318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.374 [2024-11-02 11:46:35.578352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:35.374 [2024-11-02 11:46:35.586926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.374 [2024-11-02 11:46:35.586963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.374 [2024-11-02 11:46:35.586982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:35.374 [2024-11-02 11:46:35.595462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.374 [2024-11-02 11:46:35.595495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.374 [2024-11-02 11:46:35.595512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:35.374 [2024-11-02 11:46:35.604070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.374 [2024-11-02 11:46:35.604118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.374 [2024-11-02 11:46:35.604137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:35.374 [2024-11-02 11:46:35.612845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.374 [2024-11-02 11:46:35.612895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.374 [2024-11-02 11:46:35.612916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:35.374 [2024-11-02 11:46:35.621589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.374 [2024-11-02 11:46:35.621639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.374 [2024-11-02 11:46:35.621659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:35.374 [2024-11-02 11:46:35.630215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.374 [2024-11-02 11:46:35.630253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:35.374 [2024-11-02 11:46:35.630284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:35.374 [2024-11-02 11:46:35.638984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.374 [2024-11-02 11:46:35.639034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.374 [2024-11-02 11:46:35.639055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:35.374 [2024-11-02 11:46:35.648133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.374 [2024-11-02 11:46:35.648168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.374 [2024-11-02 11:46:35.648186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:35.374 [2024-11-02 11:46:35.657072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.374 [2024-11-02 11:46:35.657109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.374 [2024-11-02 11:46:35.657135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:35.374 [2024-11-02 11:46:35.665915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.374 [2024-11-02 11:46:35.665952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.374 [2024-11-02 11:46:35.665972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:35.374 [2024-11-02 11:46:35.674642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.374 [2024-11-02 11:46:35.674679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.374 [2024-11-02 11:46:35.674698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:35.374 [2024-11-02 11:46:35.683310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.374 [2024-11-02 11:46:35.683358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.374 [2024-11-02 11:46:35.683375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:35.374 [2024-11-02 11:46:35.691946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.374 [2024-11-02 11:46:35.691983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 
lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.375 [2024-11-02 11:46:35.692003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:35.375 [2024-11-02 11:46:35.700702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.375 [2024-11-02 11:46:35.700740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.375 [2024-11-02 11:46:35.700759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:35.375 [2024-11-02 11:46:35.709555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.375 [2024-11-02 11:46:35.709586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.375 [2024-11-02 11:46:35.709618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:35.375 [2024-11-02 11:46:35.718372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.375 [2024-11-02 11:46:35.718404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.375 [2024-11-02 11:46:35.718436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:35.375 [2024-11-02 11:46:35.727173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.375 [2024-11-02 11:46:35.727207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.375 [2024-11-02 11:46:35.727225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:35.375 [2024-11-02 11:46:35.736014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.375 [2024-11-02 11:46:35.736057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.375 [2024-11-02 11:46:35.736080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:35.375 [2024-11-02 11:46:35.744781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.375 [2024-11-02 11:46:35.744818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.375 [2024-11-02 11:46:35.744838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:35.375 [2024-11-02 11:46:35.753567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.375 [2024-11-02 11:46:35.753605] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.375 [2024-11-02 11:46:35.753626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:35.375 [2024-11-02 11:46:35.762371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.375 [2024-11-02 11:46:35.762402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.375 [2024-11-02 11:46:35.762419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:35.375 [2024-11-02 11:46:35.771148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.375 [2024-11-02 11:46:35.771200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.375 [2024-11-02 11:46:35.771221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:35.636 [2024-11-02 11:46:35.780142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.636 [2024-11-02 11:46:35.780180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.636 [2024-11-02 11:46:35.780200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:35.637 [2024-11-02 11:46:35.789099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.637 [2024-11-02 11:46:35.789136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.637 [2024-11-02 11:46:35.789157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:35.637 [2024-11-02 11:46:35.797882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.637 [2024-11-02 11:46:35.797920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.637 [2024-11-02 11:46:35.797941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:35.637 [2024-11-02 11:46:35.806686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.637 [2024-11-02 11:46:35.806724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.637 [2024-11-02 11:46:35.806744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:35.637 [2024-11-02 11:46:35.815566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 
00:34:35.637 [2024-11-02 11:46:35.815597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.637 [2024-11-02 11:46:35.815634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:35.637 [2024-11-02 11:46:35.824495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.637 [2024-11-02 11:46:35.824543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.637 [2024-11-02 11:46:35.824559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:35.637 [2024-11-02 11:46:35.833417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.637 [2024-11-02 11:46:35.833448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.637 [2024-11-02 11:46:35.833465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:35.637 [2024-11-02 11:46:35.842276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.637 [2024-11-02 11:46:35.842323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.637 [2024-11-02 11:46:35.842339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:35.637 [2024-11-02 11:46:35.851160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.637 [2024-11-02 11:46:35.851197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.637 [2024-11-02 11:46:35.851217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:35.637 [2024-11-02 11:46:35.859968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.637 [2024-11-02 11:46:35.860006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.637 [2024-11-02 11:46:35.860026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:35.637 [2024-11-02 11:46:35.868868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.637 [2024-11-02 11:46:35.868905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.637 [2024-11-02 11:46:35.868926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:35.637 [2024-11-02 11:46:35.877737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.637 [2024-11-02 11:46:35.877774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.637 [2024-11-02 11:46:35.877796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:35.637 [2024-11-02 11:46:35.886733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.637 [2024-11-02 11:46:35.886790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.637 [2024-11-02 11:46:35.886811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:35.637 [2024-11-02 11:46:35.895507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.637 [2024-11-02 11:46:35.895539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.637 [2024-11-02 11:46:35.895556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:35.637 [2024-11-02 11:46:35.904390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.637 [2024-11-02 11:46:35.904422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.637 [2024-11-02 11:46:35.904439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:35.637 [2024-11-02 11:46:35.913102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.637 [2024-11-02 11:46:35.913152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.637 [2024-11-02 11:46:35.913172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:35.637 [2024-11-02 11:46:35.921915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.637 [2024-11-02 11:46:35.921952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.637 [2024-11-02 11:46:35.921971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:35.637 [2024-11-02 11:46:35.930702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.637 [2024-11-02 11:46:35.930740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.637 [2024-11-02 11:46:35.930760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:35.637 [2024-11-02 11:46:35.939512] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.637 [2024-11-02 11:46:35.939543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.637 [2024-11-02 11:46:35.939575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:35.637 [2024-11-02 11:46:35.948346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.637 [2024-11-02 11:46:35.948392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.637 [2024-11-02 11:46:35.948409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:35.637 3474.00 IOPS, 434.25 MiB/s [2024-11-02T10:46:36.039Z] [2024-11-02 11:46:35.958783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.637 [2024-11-02 11:46:35.958821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.637 [2024-11-02 11:46:35.958841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:35.637 [2024-11-02 11:46:35.967849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.637 [2024-11-02 11:46:35.967883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.637 [2024-11-02 11:46:35.967902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:35.637 [2024-11-02 11:46:35.977062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.637 [2024-11-02 11:46:35.977098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.637 [2024-11-02 11:46:35.977116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:35.637 [2024-11-02 11:46:35.985944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.637 [2024-11-02 11:46:35.985981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.637 [2024-11-02 11:46:35.986002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:35.637 [2024-11-02 11:46:35.994711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.637 [2024-11-02 11:46:35.994748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.637 [2024-11-02 11:46:35.994769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:35.637 [2024-11-02 11:46:36.003589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.637 [2024-11-02 11:46:36.003640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.637 [2024-11-02 11:46:36.003660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:35.637 [2024-11-02 11:46:36.012463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.637 [2024-11-02 11:46:36.012511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.637 [2024-11-02 11:46:36.012528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:35.637 [2024-11-02 11:46:36.021462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.638 [2024-11-02 11:46:36.021494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.638 [2024-11-02 11:46:36.021511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:35.638 [2024-11-02 11:46:36.030337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.638 [2024-11-02 11:46:36.030371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.638 [2024-11-02 11:46:36.030388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:35.897 [2024-11-02 11:46:36.039267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.897 [2024-11-02 11:46:36.039304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.897 [2024-11-02 11:46:36.039343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:35.897 [2024-11-02 11:46:36.048200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.897 [2024-11-02 11:46:36.048237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.897 [2024-11-02 11:46:36.048271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:35.897 [2024-11-02 11:46:36.057251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.897 [2024-11-02 11:46:36.057311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.897 [2024-11-02 11:46:36.057329] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:35.897 [2024-11-02 11:46:36.066230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.897 [2024-11-02 11:46:36.066279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.897 [2024-11-02 11:46:36.066301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:35.897 [2024-11-02 11:46:36.075095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.897 [2024-11-02 11:46:36.075132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.897 [2024-11-02 11:46:36.075152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:35.897 [2024-11-02 11:46:36.084147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.897 [2024-11-02 11:46:36.084184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.897 [2024-11-02 11:46:36.084204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:35.897 [2024-11-02 11:46:36.093184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.897 [2024-11-02 11:46:36.093221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.897 [2024-11-02 11:46:36.093240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:35.897 [2024-11-02 11:46:36.102232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.897 [2024-11-02 11:46:36.102279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.897 [2024-11-02 11:46:36.102301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:35.897 [2024-11-02 11:46:36.111309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.897 [2024-11-02 11:46:36.111347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.897 [2024-11-02 11:46:36.111384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:35.897 [2024-11-02 11:46:36.120314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.897 [2024-11-02 11:46:36.120372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:35.897 [2024-11-02 11:46:36.120390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:35.897 [2024-11-02 11:46:36.129228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.897 [2024-11-02 11:46:36.129273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.897 [2024-11-02 11:46:36.129295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:35.898 [2024-11-02 11:46:36.138046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.898 [2024-11-02 11:46:36.138096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.898 [2024-11-02 11:46:36.138116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:35.898 [2024-11-02 11:46:36.146697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.898 [2024-11-02 11:46:36.146734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.898 [2024-11-02 11:46:36.146754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:35.898 [2024-11-02 11:46:36.155613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.898 [2024-11-02 11:46:36.155664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.898 [2024-11-02 11:46:36.155684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:35.898 [2024-11-02 11:46:36.164473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.898 [2024-11-02 11:46:36.164504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.898 [2024-11-02 11:46:36.164521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:35.898 [2024-11-02 11:46:36.173431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.898 [2024-11-02 11:46:36.173473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.898 [2024-11-02 11:46:36.173489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:35.898 [2024-11-02 11:46:36.182185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.898 [2024-11-02 11:46:36.182221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 
lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.898 [2024-11-02 11:46:36.182241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:35.898 [2024-11-02 11:46:36.190927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.898 [2024-11-02 11:46:36.190965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.898 [2024-11-02 11:46:36.190984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:35.898 [2024-11-02 11:46:36.199678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.898 [2024-11-02 11:46:36.199715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.898 [2024-11-02 11:46:36.199735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:35.898 [2024-11-02 11:46:36.208321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.898 [2024-11-02 11:46:36.208357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.898 [2024-11-02 11:46:36.208375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:35.898 [2024-11-02 11:46:36.216882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.898 [2024-11-02 11:46:36.216919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.898 [2024-11-02 11:46:36.216950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:35.898 [2024-11-02 11:46:36.225574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.898 [2024-11-02 11:46:36.225612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.898 [2024-11-02 11:46:36.225632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:35.898 [2024-11-02 11:46:36.234308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.898 [2024-11-02 11:46:36.234341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.898 [2024-11-02 11:46:36.234359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:35.898 [2024-11-02 11:46:36.242912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.898 [2024-11-02 11:46:36.242945] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.898 [2024-11-02 11:46:36.242962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:35.898 [2024-11-02 11:46:36.251599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.898 [2024-11-02 11:46:36.251649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.898 [2024-11-02 11:46:36.251669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:35.898 [2024-11-02 11:46:36.260124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.898 [2024-11-02 11:46:36.260157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.898 [2024-11-02 11:46:36.260175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:35.898 [2024-11-02 11:46:36.268714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.898 [2024-11-02 11:46:36.268758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.898 [2024-11-02 11:46:36.268778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:35.898 [2024-11-02 11:46:36.276930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.898 [2024-11-02 11:46:36.276961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.898 [2024-11-02 11:46:36.276978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:35.898 [2024-11-02 11:46:36.285476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.898 [2024-11-02 11:46:36.285532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.898 [2024-11-02 11:46:36.285564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:35.898 [2024-11-02 11:46:36.293931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:35.898 [2024-11-02 11:46:36.293963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.898 [2024-11-02 11:46:36.293998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:36.157 [2024-11-02 11:46:36.302712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 
00:34:36.157 [2024-11-02 11:46:36.302775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.157 [2024-11-02 11:46:36.302794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:36.157 [2024-11-02 11:46:36.311197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:36.157 [2024-11-02 11:46:36.311245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.157 [2024-11-02 11:46:36.311281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:36.157 [2024-11-02 11:46:36.319717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:36.157 [2024-11-02 11:46:36.319768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.157 [2024-11-02 11:46:36.319793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:36.157 [2024-11-02 11:46:36.328200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:36.157 [2024-11-02 11:46:36.328236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.157 [2024-11-02 11:46:36.328265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:36.157 [2024-11-02 11:46:36.336692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:36.157 [2024-11-02 11:46:36.336729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.157 [2024-11-02 11:46:36.336748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:36.157 [2024-11-02 11:46:36.345312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:36.157 [2024-11-02 11:46:36.345343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.157 [2024-11-02 11:46:36.345360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:36.157 [2024-11-02 11:46:36.353936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:36.157 [2024-11-02 11:46:36.353987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.157 [2024-11-02 11:46:36.354007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:36.157 [2024-11-02 11:46:36.362540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1d43710) 00:34:36.157 [2024-11-02 11:46:36.362573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.157 [2024-11-02 11:46:36.362591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:36.157 [2024-11-02 11:46:36.370984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:36.157 [2024-11-02 11:46:36.371020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.157 [2024-11-02 11:46:36.371041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:36.157 [2024-11-02 11:46:36.379744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:36.157 [2024-11-02 11:46:36.379781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.157 [2024-11-02 11:46:36.379803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:36.157 [2024-11-02 11:46:36.388342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:36.157 [2024-11-02 11:46:36.388389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.157 [2024-11-02 11:46:36.388407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:36.157 [2024-11-02 11:46:36.396959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:36.157 [2024-11-02 11:46:36.397008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.157 [2024-11-02 11:46:36.397032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:36.157 [2024-11-02 11:46:36.405581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:36.157 [2024-11-02 11:46:36.405629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.157 [2024-11-02 11:46:36.405649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:36.157 [2024-11-02 11:46:36.414330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:36.157 [2024-11-02 11:46:36.414376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.157 [2024-11-02 11:46:36.414399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:36.157 [2024-11-02 11:46:36.423072] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:36.157 [2024-11-02 11:46:36.423123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.158 [2024-11-02 11:46:36.423143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:36.158 [2024-11-02 11:46:36.431877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:36.158 [2024-11-02 11:46:36.431911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.158 [2024-11-02 11:46:36.431930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:36.158 [2024-11-02 11:46:36.440927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:36.158 [2024-11-02 11:46:36.440964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.158 [2024-11-02 11:46:36.440984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:36.158 [2024-11-02 11:46:36.449772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:36.158 [2024-11-02 11:46:36.449809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.158 [2024-11-02 11:46:36.449837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:36.158 [2024-11-02 11:46:36.458621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:36.158 [2024-11-02 11:46:36.458657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.158 [2024-11-02 11:46:36.458677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:36.158 [2024-11-02 11:46:36.467370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:36.158 [2024-11-02 11:46:36.467399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.158 [2024-11-02 11:46:36.467418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:36.158 [2024-11-02 11:46:36.476217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:36.158 [2024-11-02 11:46:36.476253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.158 [2024-11-02 11:46:36.476293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:34:36.158 [2024-11-02 11:46:36.485112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:36.158 [2024-11-02 11:46:36.485146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.158 [2024-11-02 11:46:36.485172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:36.158 [2024-11-02 11:46:36.493944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:36.158 [2024-11-02 11:46:36.493987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.158 [2024-11-02 11:46:36.494008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:36.158 [2024-11-02 11:46:36.502815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:36.158 [2024-11-02 11:46:36.502852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.158 [2024-11-02 11:46:36.502871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:36.158 [2024-11-02 11:46:36.511588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:36.158 [2024-11-02 11:46:36.511639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.158 [2024-11-02 11:46:36.511660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:36.158 [2024-11-02 11:46:36.520314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:36.158 [2024-11-02 11:46:36.520346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.158 [2024-11-02 11:46:36.520378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:36.158 [2024-11-02 11:46:36.529178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:36.158 [2024-11-02 11:46:36.529215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.158 [2024-11-02 11:46:36.529235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:36.158 [2024-11-02 11:46:36.537902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:36.158 [2024-11-02 11:46:36.537938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.158 [2024-11-02 11:46:36.537959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:36.158 [2024-11-02 11:46:36.546769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:36.158 [2024-11-02 11:46:36.546802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.158 [2024-11-02 11:46:36.546821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:36.158 [2024-11-02 11:46:36.555643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:36.158 [2024-11-02 11:46:36.555680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.158 [2024-11-02 11:46:36.555700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:36.417 [2024-11-02 11:46:36.564463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:36.417 [2024-11-02 11:46:36.564509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.417 [2024-11-02 11:46:36.564531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:36.417 [2024-11-02 11:46:36.573482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:36.417 [2024-11-02 11:46:36.573512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.417 [2024-11-02 11:46:36.573533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:36.417 [2024-11-02 11:46:36.582274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:36.417 [2024-11-02 11:46:36.582321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.417 [2024-11-02 11:46:36.582340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:36.417 [2024-11-02 11:46:36.591007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:36.417 [2024-11-02 11:46:36.591058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.417 [2024-11-02 11:46:36.591078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:36.417 [2024-11-02 11:46:36.599933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:36.417 [2024-11-02 11:46:36.599969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.417 [2024-11-02 11:46:36.599989] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:36.417 [2024-11-02 11:46:36.608815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:36.417 [2024-11-02 11:46:36.608852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.417 [2024-11-02 11:46:36.608872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:36.417 [2024-11-02 11:46:36.617599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:36.417 [2024-11-02 11:46:36.617649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.417 [2024-11-02 11:46:36.617668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:36.417 [2024-11-02 11:46:36.626563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:36.417 [2024-11-02 11:46:36.626611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.417 [2024-11-02 11:46:36.626631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:36.417 [2024-11-02 11:46:36.635316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:36.417 [2024-11-02 11:46:36.635364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.417 [2024-11-02 11:46:36.635382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:36.417 [2024-11-02 11:46:36.644355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:36.417 [2024-11-02 11:46:36.644403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.417 [2024-11-02 11:46:36.644441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:36.417 [2024-11-02 11:46:36.653544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:36.417 [2024-11-02 11:46:36.653578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.417 [2024-11-02 11:46:36.653610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:36.417 [2024-11-02 11:46:36.662395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:36.417 [2024-11-02 11:46:36.662429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:36.417 [2024-11-02 11:46:36.662446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:36.417 [2024-11-02 11:46:36.671277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:36.417 [2024-11-02 11:46:36.671325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.417 [2024-11-02 11:46:36.671341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:36.417 [2024-11-02 11:46:36.680252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:36.417 [2024-11-02 11:46:36.680314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.417 [2024-11-02 11:46:36.680332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:36.417 [2024-11-02 11:46:36.689122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:36.417 [2024-11-02 11:46:36.689158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.418 [2024-11-02 11:46:36.689177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:36.418 [2024-11-02 11:46:36.697950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:36.418 [2024-11-02 11:46:36.697988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.418 [2024-11-02 11:46:36.698008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:36.418 [2024-11-02 11:46:36.706850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:36.418 [2024-11-02 11:46:36.706887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.418 [2024-11-02 11:46:36.706907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:36.418 [2024-11-02 11:46:36.715761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:36.418 [2024-11-02 11:46:36.715796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.418 [2024-11-02 11:46:36.715815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:36.418 [2024-11-02 11:46:36.724533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:36.418 [2024-11-02 11:46:36.724591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.418 [2024-11-02 11:46:36.724612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:36.418 [2024-11-02 11:46:36.733632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:36.418 [2024-11-02 11:46:36.733684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.418 [2024-11-02 11:46:36.733703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:36.418 [2024-11-02 11:46:36.742454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:36.418 [2024-11-02 11:46:36.742486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.418 [2024-11-02 11:46:36.742503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:36.418 [2024-11-02 11:46:36.751331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:36.418 [2024-11-02 11:46:36.751379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.418 [2024-11-02 11:46:36.751396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:36.418 [2024-11-02 11:46:36.760320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:36.418 [2024-11-02 11:46:36.760370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.418 [2024-11-02 11:46:36.760388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:36.418 [2024-11-02 11:46:36.769156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:36.418 [2024-11-02 11:46:36.769192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.418 [2024-11-02 11:46:36.769211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:36.418 [2024-11-02 11:46:36.777930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:36.418 [2024-11-02 11:46:36.777980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.418 [2024-11-02 11:46:36.778000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:36.418 [2024-11-02 11:46:36.786850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:36.418 [2024-11-02 11:46:36.786899] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.418 [2024-11-02 11:46:36.786919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:36.418 [2024-11-02 11:46:36.795617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:36.418 [2024-11-02 11:46:36.795651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.418 [2024-11-02 11:46:36.795697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:36.418 [2024-11-02 11:46:36.804232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:36.418 [2024-11-02 11:46:36.804277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.418 [2024-11-02 11:46:36.804318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:36.418 [2024-11-02 11:46:36.812986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:36.418 [2024-11-02 11:46:36.813020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.418 [2024-11-02 11:46:36.813054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:36.677 [2024-11-02 11:46:36.821840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:36.677 [2024-11-02 11:46:36.821886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.677 [2024-11-02 11:46:36.821906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:36.677 [2024-11-02 11:46:36.830737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:36.677 [2024-11-02 11:46:36.830774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.677 [2024-11-02 11:46:36.830795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:36.677 [2024-11-02 11:46:36.839506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:36.677 [2024-11-02 11:46:36.839554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.677 [2024-11-02 11:46:36.839573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:36.677 [2024-11-02 11:46:36.848432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 
00:34:36.677 [2024-11-02 11:46:36.848463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.677 [2024-11-02 11:46:36.848481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:36.677 [2024-11-02 11:46:36.857384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:36.677 [2024-11-02 11:46:36.857416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.677 [2024-11-02 11:46:36.857433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:36.677 [2024-11-02 11:46:36.865998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:36.677 [2024-11-02 11:46:36.866034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.677 [2024-11-02 11:46:36.866054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:36.677 [2024-11-02 11:46:36.874928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:36.677 [2024-11-02 11:46:36.874971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.677 [2024-11-02 11:46:36.874992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:36.677 [2024-11-02 11:46:36.883694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:36.677 [2024-11-02 11:46:36.883730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.677 [2024-11-02 11:46:36.883750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:36.677 [2024-11-02 11:46:36.892584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:36.677 [2024-11-02 11:46:36.892621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.677 [2024-11-02 11:46:36.892641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:36.677 [2024-11-02 11:46:36.901412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:36.677 [2024-11-02 11:46:36.901457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.677 [2024-11-02 11:46:36.901473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:36.677 [2024-11-02 11:46:36.910213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1d43710) 00:34:36.677 [2024-11-02 11:46:36.910250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.677 [2024-11-02 11:46:36.910286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:36.677 [2024-11-02 11:46:36.919080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:36.677 [2024-11-02 11:46:36.919116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.677 [2024-11-02 11:46:36.919136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:36.677 [2024-11-02 11:46:36.927889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:36.677 [2024-11-02 11:46:36.927925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.677 [2024-11-02 11:46:36.927945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:36.677 [2024-11-02 11:46:36.936516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:36.677 [2024-11-02 11:46:36.936562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.677 [2024-11-02 11:46:36.936581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:36.677 [2024-11-02 11:46:36.944933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:36.677 [2024-11-02 11:46:36.944967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.677 [2024-11-02 11:46:36.945001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:36.677 3496.50 IOPS, 437.06 MiB/s [2024-11-02T10:46:37.079Z] [2024-11-02 11:46:36.955398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d43710) 00:34:36.677 [2024-11-02 11:46:36.955431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.677 [2024-11-02 11:46:36.955450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:36.677 00:34:36.677 Latency(us) 00:34:36.677 [2024-11-02T10:46:37.079Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:36.677 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:34:36.677 nvme0n1 : 2.00 3498.49 437.31 0.00 0.00 4567.38 1480.63 12427.57 00:34:36.677 [2024-11-02T10:46:37.079Z] =================================================================================================================== 00:34:36.677 [2024-11-02T10:46:37.079Z] 
Total : 3498.49 437.31 0.00 0.00 4567.38 1480.63 12427.57 00:34:36.677 { 00:34:36.677 "results": [ 00:34:36.677 { 00:34:36.677 "job": "nvme0n1", 00:34:36.677 "core_mask": "0x2", 00:34:36.677 "workload": "randread", 00:34:36.677 "status": "finished", 00:34:36.677 "queue_depth": 16, 00:34:36.677 "io_size": 131072, 00:34:36.677 "runtime": 2.003437, 00:34:36.677 "iops": 3498.487848632126, 00:34:36.677 "mibps": 437.31098107901573, 00:34:36.677 "io_failed": 0, 00:34:36.677 "io_timeout": 0, 00:34:36.677 "avg_latency_us": 4567.378838847408, 00:34:36.677 "min_latency_us": 1480.628148148148, 00:34:36.677 "max_latency_us": 12427.567407407407 00:34:36.677 } 00:34:36.677 ], 00:34:36.677 "core_count": 1 00:34:36.677 } 00:34:36.677 11:46:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:34:36.677 11:46:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:34:36.677 11:46:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:34:36.677 11:46:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:34:36.677 | .driver_specific 00:34:36.677 | .nvme_error 00:34:36.677 | .status_code 00:34:36.677 | .command_transient_transport_error' 00:34:36.936 11:46:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 226 > 0 )) 00:34:36.936 11:46:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3974286 00:34:36.936 11:46:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 3974286 ']' 00:34:36.936 11:46:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 3974286 00:34:36.936 11:46:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:34:36.936 11:46:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:36.936 11:46:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3974286 00:34:36.936 11:46:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:34:36.936 11:46:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:34:36.936 11:46:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3974286' 00:34:36.936 killing process with pid 3974286 00:34:36.936 11:46:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 3974286 00:34:36.936 Received shutdown signal, test time was about 2.000000 seconds 00:34:36.936 00:34:36.936 Latency(us) 00:34:36.936 [2024-11-02T10:46:37.338Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:36.936 [2024-11-02T10:46:37.338Z] =================================================================================================================== 00:34:36.936 [2024-11-02T10:46:37.338Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:36.936 11:46:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 3974286 00:34:37.194 11:46:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:34:37.194 11:46:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:34:37.194 11:46:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:34:37.194 11:46:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:34:37.194 11:46:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:34:37.194 11:46:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3974702 00:34:37.194 11:46:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:34:37.194 11:46:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3974702 /var/tmp/bperf.sock 00:34:37.194 11:46:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 3974702 ']' 00:34:37.194 11:46:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:37.194 11:46:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:37.194 11:46:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:37.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:37.194 11:46:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:37.194 11:46:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:37.194 [2024-11-02 11:46:37.511213] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 
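The xtrace above shows run_bperf_err standing up a fresh bdevperf instance for the randwrite / 4096-byte / QD-128 case on its own RPC socket before any I/O is issued. A minimal sketch of that launch-and-wait step, assuming the same binary path and socket name used in this run (the socket polling loop below is a simplified stand-in for the suite's waitforlisten helper, which does more than this):

bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
# Start bdevperf on core mask 0x2 with -z so it idles until perform_tests arrives over RPC;
# 2 seconds of randwrite, 4 KiB I/O, queue depth 128, private socket /var/tmp/bperf.sock
"$bdevperf" -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
bperfpid=$!
# Simplified wait: block until the UNIX-domain RPC socket exists before issuing bperf_rpc calls
while [ ! -S /var/tmp/bperf.sock ]; do sleep 0.1; done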
00:34:37.194 [2024-11-02 11:46:37.511319] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3974702 ] 00:34:37.194 [2024-11-02 11:46:37.583750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:37.453 [2024-11-02 11:46:37.630621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:37.453 11:46:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:37.453 11:46:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:34:37.453 11:46:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:37.453 11:46:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:37.711 11:46:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:34:37.711 11:46:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:37.711 11:46:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:37.711 11:46:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:37.711 11:46:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:37.711 11:46:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:38.277 nvme0n1 00:34:38.277 11:46:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:34:38.277 11:46:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:38.277 11:46:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:38.277 11:46:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:38.277 11:46:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:34:38.277 11:46:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:38.277 Running I/O for 2 seconds... 
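With the new bdevperf listening, the trace configures NVMe error accounting, attaches the target with data digests enabled, arms crc32c error injection, and only then starts the run. A condensed sketch of that RPC sequence, using only commands visible in the trace; bperf_rpc here wraps rpc.py against /var/tmp/bperf.sock, while rpc_cmd is the common autotest helper whose target socket is not shown in this log and is assumed to be the main SPDK application's default:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
bperf_rpc() { "$rpc" -s /var/tmp/bperf.sock "$@"; }
# Keep per-command NVMe error statistics and retry failed I/O indefinitely in the bdev layer
bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# Clear any stale crc32c injection while the controller attaches
rpc_cmd accel_error_inject_error -o crc32c -t disable
# Attach the TCP controller with data digest enabled (--ddgst) so payload CRC32C is verified
bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# Arm crc32c error injection of type corrupt with -i 256; this is what produces the
# "Data digest error" lines that follow
rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
# Release bdevperf (started with -z) to run the 2-second workload
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bperf.sock perform_tests

Afterwards the suite reads bdev_get_iostat over the same socket and extracts .driver_specific.nvme_error.status_code.command_transient_transport_error with jq, the same check that reported 226 transient transport errors for the previous randread pass.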
00:34:38.277 [2024-11-02 11:46:38.615228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166e27f0 00:34:38.277 [2024-11-02 11:46:38.616087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:4420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:38.277 [2024-11-02 11:46:38.616130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:38.277 [2024-11-02 11:46:38.629444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166f1868 00:34:38.277 [2024-11-02 11:46:38.630400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:38.277 [2024-11-02 11:46:38.630431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:38.277 [2024-11-02 11:46:38.643075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166e49b0 00:34:38.277 [2024-11-02 11:46:38.643977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:12399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:38.277 [2024-11-02 11:46:38.644010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:38.277 [2024-11-02 11:46:38.656816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166e1f80 00:34:38.277 [2024-11-02 11:46:38.657712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:22771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:38.277 [2024-11-02 11:46:38.657745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:38.277 [2024-11-02 11:46:38.669454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166e01f8 00:34:38.277 [2024-11-02 11:46:38.670369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:25528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:38.277 [2024-11-02 11:46:38.670398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:38.536 [2024-11-02 11:46:38.683405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166e27f0 00:34:38.536 [2024-11-02 11:46:38.684262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:14999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:38.536 [2024-11-02 11:46:38.684296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:38.536 [2024-11-02 11:46:38.699742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166e27f0 00:34:38.536 [2024-11-02 11:46:38.701270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:38.536 [2024-11-02 11:46:38.701318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 
sqhd:0044 p:0 m:0 dnr:0 00:34:38.536 [2024-11-02 11:46:38.713287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166e38d0 00:34:38.536 [2024-11-02 11:46:38.714804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:13627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:38.536 [2024-11-02 11:46:38.714836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:38.536 [2024-11-02 11:46:38.725162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166e3060 00:34:38.536 [2024-11-02 11:46:38.725986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:11040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:38.536 [2024-11-02 11:46:38.726019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:38.536 [2024-11-02 11:46:38.743056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166df118 00:34:38.536 [2024-11-02 11:46:38.745251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:6787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:38.536 [2024-11-02 11:46:38.745304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:38.536 [2024-11-02 11:46:38.756617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166e3060 00:34:38.536 [2024-11-02 11:46:38.758781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:5861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:38.536 [2024-11-02 11:46:38.758813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:38.537 [2024-11-02 11:46:38.770179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166df988 00:34:38.537 [2024-11-02 11:46:38.772344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:7528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:38.537 [2024-11-02 11:46:38.772373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:38.537 [2024-11-02 11:46:38.783694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166e27f0 00:34:38.537 [2024-11-02 11:46:38.785835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:4855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:38.537 [2024-11-02 11:46:38.785868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:38.537 [2024-11-02 11:46:38.797205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166e01f8 00:34:38.537 [2024-11-02 11:46:38.799376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:9640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:38.537 [2024-11-02 11:46:38.799404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:35 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:38.537 [2024-11-02 11:46:38.810713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166e1f80 00:34:38.537 [2024-11-02 11:46:38.812856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:5841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:38.537 [2024-11-02 11:46:38.812887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:38.537 [2024-11-02 11:46:38.824389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166e0a68 00:34:38.537 [2024-11-02 11:46:38.826518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:10799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:38.537 [2024-11-02 11:46:38.826566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:38.537 [2024-11-02 11:46:38.836267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166f1868 00:34:38.537 [2024-11-02 11:46:38.837748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:6513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:38.537 [2024-11-02 11:46:38.837780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:38.537 [2024-11-02 11:46:38.849793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166e5220 00:34:38.537 [2024-11-02 11:46:38.851276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:38.537 [2024-11-02 11:46:38.851319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:38.537 [2024-11-02 11:46:38.863508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166ff3c8 00:34:38.537 [2024-11-02 11:46:38.865105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:25175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:38.537 [2024-11-02 11:46:38.865137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:38.537 [2024-11-02 11:46:38.875985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166f1430 00:34:38.537 [2024-11-02 11:46:38.877430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:17265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:38.537 [2024-11-02 11:46:38.877461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:38.537 [2024-11-02 11:46:38.889806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166feb58 00:34:38.537 [2024-11-02 11:46:38.891220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:17771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:38.537 [2024-11-02 11:46:38.891265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:38.537 [2024-11-02 11:46:38.906501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166feb58 00:34:38.537 [2024-11-02 11:46:38.908654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:25082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:38.537 [2024-11-02 11:46:38.908686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:38.537 [2024-11-02 11:46:38.918736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166e6300 00:34:38.537 [2024-11-02 11:46:38.920176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:13808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:38.537 [2024-11-02 11:46:38.920209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:38.537 [2024-11-02 11:46:38.932566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166f31b8 00:34:38.537 [2024-11-02 11:46:38.934092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:13891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:38.537 [2024-11-02 11:46:38.934125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:38.796 [2024-11-02 11:46:38.946739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166e6b70 00:34:38.796 [2024-11-02 11:46:38.948198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:7285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:38.796 [2024-11-02 11:46:38.948231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:38.796 [2024-11-02 11:46:38.960677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166fc998 00:34:38.796 [2024-11-02 11:46:38.962153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:10445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:38.796 [2024-11-02 11:46:38.962199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:38.796 [2024-11-02 11:46:38.973365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166efae0 00:34:38.796 [2024-11-02 11:46:38.974736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:15298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:38.796 [2024-11-02 11:46:38.974770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:38.796 [2024-11-02 11:46:38.990043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166efae0 00:34:38.796 [2024-11-02 11:46:38.992111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:15081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:38.796 [2024-11-02 11:46:38.992144] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:38.796 [2024-11-02 11:46:39.003934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166f4298 00:34:38.796 [2024-11-02 11:46:39.005968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:16591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:38.796 [2024-11-02 11:46:39.006001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:38.796 [2024-11-02 11:46:39.017693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166ef270 00:34:38.796 [2024-11-02 11:46:39.019720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:23081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:38.796 [2024-11-02 11:46:39.019753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:38.796 [2024-11-02 11:46:39.031491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166f4b08 00:34:38.796 [2024-11-02 11:46:39.033536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:20903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:38.796 [2024-11-02 11:46:39.033579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:38.796 [2024-11-02 11:46:39.046090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166fc998 00:34:38.796 [2024-11-02 11:46:39.048242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:2773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:38.796 [2024-11-02 11:46:39.048283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:38.796 [2024-11-02 11:46:39.059659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166e23b8 00:34:38.796 [2024-11-02 11:46:39.061780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:4967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:38.796 [2024-11-02 11:46:39.061812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:38.796 [2024-11-02 11:46:39.073188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166fd208 00:34:38.796 [2024-11-02 11:46:39.075293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:7510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:38.796 [2024-11-02 11:46:39.075338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:38.796 [2024-11-02 11:46:39.085044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166e1b48 00:34:38.796 [2024-11-02 11:46:39.086563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:13092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:38.796 [2024-11-02 
11:46:39.086591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:38.796 [2024-11-02 11:46:39.098561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166edd58 00:34:38.796 [2024-11-02 11:46:39.100108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:22187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:38.796 [2024-11-02 11:46:39.100139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:38.796 [2024-11-02 11:46:39.112393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166f1ca0 00:34:38.796 [2024-11-02 11:46:39.113893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:6622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:38.796 [2024-11-02 11:46:39.113926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:38.796 [2024-11-02 11:46:39.126011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166ee5c8 00:34:38.796 [2024-11-02 11:46:39.127564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:7122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:38.796 [2024-11-02 11:46:39.127607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:38.796 [2024-11-02 11:46:39.139653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166f2510 00:34:38.796 [2024-11-02 11:46:39.141119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:2929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:38.796 [2024-11-02 11:46:39.141151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:38.796 [2024-11-02 11:46:39.153277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166eee38 00:34:38.796 [2024-11-02 11:46:39.154765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:38.796 [2024-11-02 11:46:39.154797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:38.796 [2024-11-02 11:46:39.167093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166f2d80 00:34:38.796 [2024-11-02 11:46:39.168584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:11396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:38.796 [2024-11-02 11:46:39.168617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:38.796 [2024-11-02 11:46:39.180669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166de038 00:34:38.796 [2024-11-02 11:46:39.182104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:20295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:38.796 [2024-11-02 11:46:39.182144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:38.796 [2024-11-02 11:46:39.194324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166f35f0 00:34:38.796 [2024-11-02 11:46:39.195825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:21366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:38.796 [2024-11-02 11:46:39.195857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:39.057 [2024-11-02 11:46:39.208123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166df988 00:34:39.057 [2024-11-02 11:46:39.209609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:22675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.057 [2024-11-02 11:46:39.209642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:39.057 [2024-11-02 11:46:39.221753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166f3e60 00:34:39.057 [2024-11-02 11:46:39.223145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:86 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.057 [2024-11-02 11:46:39.223177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:39.057 [2024-11-02 11:46:39.235356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166e01f8 00:34:39.057 [2024-11-02 11:46:39.236740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:5009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.057 [2024-11-02 11:46:39.236772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:39.057 [2024-11-02 11:46:39.249328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166e7818 00:34:39.057 [2024-11-02 11:46:39.250894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:5067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.057 [2024-11-02 11:46:39.250926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:39.057 [2024-11-02 11:46:39.263242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166e0a68 00:34:39.057 [2024-11-02 11:46:39.264739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:21981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.058 [2024-11-02 11:46:39.264771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:39.058 [2024-11-02 11:46:39.277036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166fb480 00:34:39.058 [2024-11-02 11:46:39.278513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:8454 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:34:39.058 [2024-11-02 11:46:39.278558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:39.058 [2024-11-02 11:46:39.290815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166e12d8 00:34:39.058 [2024-11-02 11:46:39.292280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:21691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.058 [2024-11-02 11:46:39.292324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:39.058 [2024-11-02 11:46:39.303357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166f57b0 00:34:39.058 [2024-11-02 11:46:39.304617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:23221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.058 [2024-11-02 11:46:39.304648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:39.058 [2024-11-02 11:46:39.316961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166f1430 00:34:39.058 [2024-11-02 11:46:39.318241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:5503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.058 [2024-11-02 11:46:39.318301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:39.058 [2024-11-02 11:46:39.330496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166f6020 00:34:39.058 [2024-11-02 11:46:39.331751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:10994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.058 [2024-11-02 11:46:39.331783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:39.058 [2024-11-02 11:46:39.346753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166f6020 00:34:39.058 [2024-11-02 11:46:39.348658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:15618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.058 [2024-11-02 11:46:39.348691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:39.058 [2024-11-02 11:46:39.360323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166e5220 00:34:39.058 [2024-11-02 11:46:39.362202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:7727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.058 [2024-11-02 11:46:39.362235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:39.058 [2024-11-02 11:46:39.373821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166f6890 00:34:39.058 [2024-11-02 11:46:39.375714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 
lba:1549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.058 [2024-11-02 11:46:39.375747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:39.058 [2024-11-02 11:46:39.387370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166e5a90 00:34:39.058 [2024-11-02 11:46:39.389240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:5418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.058 [2024-11-02 11:46:39.389281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:39.058 [2024-11-02 11:46:39.400894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166f7100 00:34:39.058 [2024-11-02 11:46:39.402785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:7887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.058 [2024-11-02 11:46:39.402818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:39.058 [2024-11-02 11:46:39.414405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166e6300 00:34:39.058 [2024-11-02 11:46:39.416277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:10718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.058 [2024-11-02 11:46:39.416321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:39.058 [2024-11-02 11:46:39.427884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166f7970 00:34:39.058 [2024-11-02 11:46:39.429726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.058 [2024-11-02 11:46:39.429759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:39.058 [2024-11-02 11:46:39.441409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166e6b70 00:34:39.058 [2024-11-02 11:46:39.443245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:14190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.058 [2024-11-02 11:46:39.443304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:39.058 [2024-11-02 11:46:39.455030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166f81e0 00:34:39.058 [2024-11-02 11:46:39.456902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.058 [2024-11-02 11:46:39.456935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:39.320 [2024-11-02 11:46:39.468748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166e73e0 00:34:39.320 [2024-11-02 11:46:39.470576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:106 nsid:1 lba:11546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.320 [2024-11-02 11:46:39.470623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:39.320 [2024-11-02 11:46:39.481612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166ec408 00:34:39.320 [2024-11-02 11:46:39.483072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:2601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.320 [2024-11-02 11:46:39.483106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:39.320 [2024-11-02 11:46:39.494056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166e0a68 00:34:39.320 [2024-11-02 11:46:39.495373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:17453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.320 [2024-11-02 11:46:39.495416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:39.320 [2024-11-02 11:46:39.507616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166ec408 00:34:39.320 [2024-11-02 11:46:39.508915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:2463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.320 [2024-11-02 11:46:39.508948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:39.320 [2024-11-02 11:46:39.521310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166e12d8 00:34:39.320 [2024-11-02 11:46:39.522607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:15596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.320 [2024-11-02 11:46:39.522640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:39.320 [2024-11-02 11:46:39.534878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166ebb98 00:34:39.320 [2024-11-02 11:46:39.536164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:25294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.320 [2024-11-02 11:46:39.536202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:39.320 [2024-11-02 11:46:39.548427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166f1430 00:34:39.320 [2024-11-02 11:46:39.549661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:22151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.320 [2024-11-02 11:46:39.549693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:39.320 [2024-11-02 11:46:39.561981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166eb328 00:34:39.320 [2024-11-02 11:46:39.563223] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:8807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.320 [2024-11-02 11:46:39.563254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:39.320 [2024-11-02 11:46:39.575496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166f0bc0 00:34:39.320 [2024-11-02 11:46:39.576740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:3594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.320 [2024-11-02 11:46:39.576772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:39.320 [2024-11-02 11:46:39.589958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166f0788 00:34:39.320 [2024-11-02 11:46:39.591391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.320 [2024-11-02 11:46:39.591434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:39.320 [2024-11-02 11:46:39.603592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166e3d08 00:34:39.320 18542.00 IOPS, 72.43 MiB/s [2024-11-02T10:46:39.722Z] [2024-11-02 11:46:39.604996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:9291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.320 [2024-11-02 11:46:39.605029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:39.320 [2024-11-02 11:46:39.618080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166f8e88 00:34:39.320 [2024-11-02 11:46:39.619674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:6214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.320 [2024-11-02 11:46:39.619707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:39.320 [2024-11-02 11:46:39.631650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166de470 00:34:39.320 [2024-11-02 11:46:39.633218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:17115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.320 [2024-11-02 11:46:39.633251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:39.320 [2024-11-02 11:46:39.645230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166f96f8 00:34:39.320 [2024-11-02 11:46:39.646772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.320 [2024-11-02 11:46:39.646806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:39.320 [2024-11-02 11:46:39.658860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with 
pdu=0x2000166fb8b8 00:34:39.320 [2024-11-02 11:46:39.660417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:18424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.320 [2024-11-02 11:46:39.660445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:39.320 [2024-11-02 11:46:39.672532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166f9f68 00:34:39.320 [2024-11-02 11:46:39.674074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.320 [2024-11-02 11:46:39.674106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:39.320 [2024-11-02 11:46:39.686447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166fb048 00:34:39.320 [2024-11-02 11:46:39.687980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:4472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.320 [2024-11-02 11:46:39.688012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:39.320 [2024-11-02 11:46:39.700086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166fa7d8 00:34:39.320 [2024-11-02 11:46:39.701618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:1687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.320 [2024-11-02 11:46:39.701651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:39.320 [2024-11-02 11:46:39.714723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166f5be8 00:34:39.320 [2024-11-02 11:46:39.716464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.321 [2024-11-02 11:46:39.716492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:39.581 [2024-11-02 11:46:39.726873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166ec840 00:34:39.581 [2024-11-02 11:46:39.727874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:4321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.581 [2024-11-02 11:46:39.727908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:39.581 [2024-11-02 11:46:39.739341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166f9f68 00:34:39.581 [2024-11-02 11:46:39.740317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.581 [2024-11-02 11:46:39.740345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:39.581 [2024-11-02 11:46:39.752913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x18212c0) with pdu=0x2000166ebfd0 00:34:39.581 [2024-11-02 11:46:39.753883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:15736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.581 [2024-11-02 11:46:39.753915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:39.581 [2024-11-02 11:46:39.766468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166f96f8 00:34:39.582 [2024-11-02 11:46:39.767469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:5756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.582 [2024-11-02 11:46:39.767497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:39.582 [2024-11-02 11:46:39.780182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166eb760 00:34:39.582 [2024-11-02 11:46:39.781134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.582 [2024-11-02 11:46:39.781166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:39.582 [2024-11-02 11:46:39.793867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166f8e88 00:34:39.582 [2024-11-02 11:46:39.794818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:7905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.582 [2024-11-02 11:46:39.794851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:39.582 [2024-11-02 11:46:39.808374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166e01f8 00:34:39.582 [2024-11-02 11:46:39.809507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:23713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.582 [2024-11-02 11:46:39.809536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:39.582 [2024-11-02 11:46:39.824721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166e01f8 00:34:39.582 [2024-11-02 11:46:39.826526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:16323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.582 [2024-11-02 11:46:39.826554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:39.582 [2024-11-02 11:46:39.836679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166f96f8 00:34:39.582 [2024-11-02 11:46:39.837780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:6283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.582 [2024-11-02 11:46:39.837811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:39.582 [2024-11-02 11:46:39.850230] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166e4578 00:34:39.582 [2024-11-02 11:46:39.851402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:25164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.582 [2024-11-02 11:46:39.851445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:39.582 [2024-11-02 11:46:39.862822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166f8e88 00:34:39.582 [2024-11-02 11:46:39.863887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:4949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.582 [2024-11-02 11:46:39.863919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:39.582 [2024-11-02 11:46:39.876459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166e4de8 00:34:39.582 [2024-11-02 11:46:39.877545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:12241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.582 [2024-11-02 11:46:39.877578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:39.582 [2024-11-02 11:46:39.892762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166e4de8 00:34:39.582 [2024-11-02 11:46:39.894519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:13809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.582 [2024-11-02 11:46:39.894552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:39.582 [2024-11-02 11:46:39.906408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166f3a28 00:34:39.582 [2024-11-02 11:46:39.908132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:8386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.582 [2024-11-02 11:46:39.908164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:39.582 [2024-11-02 11:46:39.918310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166e5658 00:34:39.582 [2024-11-02 11:46:39.919383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:9735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.582 [2024-11-02 11:46:39.919411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:39.582 [2024-11-02 11:46:39.931774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166f7538 00:34:39.582 [2024-11-02 11:46:39.932812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:22166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.582 [2024-11-02 11:46:39.932845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:39.582 [2024-11-02 11:46:39.945341] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166e5ec8 00:34:39.582 [2024-11-02 11:46:39.946414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:8034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.582 [2024-11-02 11:46:39.946457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:39.582 [2024-11-02 11:46:39.958911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166ecc78 00:34:39.582 [2024-11-02 11:46:39.959925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:13511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.582 [2024-11-02 11:46:39.959957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:39.582 [2024-11-02 11:46:39.972384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166f57b0 00:34:39.582 [2024-11-02 11:46:39.973465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:3165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.582 [2024-11-02 11:46:39.973493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:39.843 [2024-11-02 11:46:39.986223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166e0a68 00:34:39.843 [2024-11-02 11:46:39.987335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:20658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.843 [2024-11-02 11:46:39.987365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:39.843 [2024-11-02 11:46:39.998608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166ec408 00:34:39.843 [2024-11-02 11:46:39.999616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:22872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.843 [2024-11-02 11:46:39.999650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:39.843 [2024-11-02 11:46:40.012154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166e12d8 00:34:39.843 [2024-11-02 11:46:40.013083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.843 [2024-11-02 11:46:40.013122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:39.843 [2024-11-02 11:46:40.025170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166ebb98 00:34:39.843 [2024-11-02 11:46:40.026092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:11047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.843 [2024-11-02 11:46:40.026126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:39.843 
[2024-11-02 11:46:40.040593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166ebb98 00:34:39.843 [2024-11-02 11:46:40.041630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:4468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.843 [2024-11-02 11:46:40.041661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:39.843 [2024-11-02 11:46:40.054744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166fac10 00:34:39.843 [2024-11-02 11:46:40.055738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.843 [2024-11-02 11:46:40.055771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:39.843 [2024-11-02 11:46:40.067445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166df118 00:34:39.843 [2024-11-02 11:46:40.068441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:10258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.843 [2024-11-02 11:46:40.068471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:39.843 [2024-11-02 11:46:40.081431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166fb480 00:34:39.843 [2024-11-02 11:46:40.082403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.843 [2024-11-02 11:46:40.082432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:39.843 [2024-11-02 11:46:40.098122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166fb480 00:34:39.843 [2024-11-02 11:46:40.099756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:5340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.843 [2024-11-02 11:46:40.099789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:39.843 [2024-11-02 11:46:40.111932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166eaab8 00:34:39.843 [2024-11-02 11:46:40.113565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:10165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.843 [2024-11-02 11:46:40.113609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:39.843 [2024-11-02 11:46:40.125626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166e3060 00:34:39.844 [2024-11-02 11:46:40.127197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:18136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.844 [2024-11-02 11:46:40.127230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0049 p:0 m:0 
dnr:0 00:34:39.844 [2024-11-02 11:46:40.139383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166ea248 00:34:39.844 [2024-11-02 11:46:40.140957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:11100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.844 [2024-11-02 11:46:40.140991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:39.844 [2024-11-02 11:46:40.153209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166de8a8 00:34:39.844 [2024-11-02 11:46:40.154817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:18841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.844 [2024-11-02 11:46:40.154850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:39.844 [2024-11-02 11:46:40.165307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166e99d8 00:34:39.844 [2024-11-02 11:46:40.166156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:6658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.844 [2024-11-02 11:46:40.166188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:39.844 [2024-11-02 11:46:40.178825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166e73e0 00:34:39.844 [2024-11-02 11:46:40.179688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:17964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.844 [2024-11-02 11:46:40.179721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:39.844 [2024-11-02 11:46:40.192344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166e9168 00:34:39.844 [2024-11-02 11:46:40.193172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.844 [2024-11-02 11:46:40.193203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:39.844 [2024-11-02 11:46:40.206049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166fe2e8 00:34:39.844 [2024-11-02 11:46:40.206915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.844 [2024-11-02 11:46:40.206947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:39.844 [2024-11-02 11:46:40.218326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166e88f8 00:34:39.844 [2024-11-02 11:46:40.219126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:22690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.844 [2024-11-02 11:46:40.219158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 
cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:39.844 [2024-11-02 11:46:40.235592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166e6300 00:34:39.844 [2024-11-02 11:46:40.237244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:17157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:39.844 [2024-11-02 11:46:40.237282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:40.103 [2024-11-02 11:46:40.249301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166e88f8 00:34:40.103 [2024-11-02 11:46:40.250956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:10497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.103 [2024-11-02 11:46:40.250996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:40.103 [2024-11-02 11:46:40.261918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166e6b70 00:34:40.103 [2024-11-02 11:46:40.263458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:25407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.103 [2024-11-02 11:46:40.263488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:40.103 [2024-11-02 11:46:40.275783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166feb58 00:34:40.103 [2024-11-02 11:46:40.277289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:22787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.103 [2024-11-02 11:46:40.277322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:40.103 [2024-11-02 11:46:40.289501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166e73e0 00:34:40.103 [2024-11-02 11:46:40.290991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:24589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.103 [2024-11-02 11:46:40.291023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:40.103 [2024-11-02 11:46:40.303126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166f1868 00:34:40.103 [2024-11-02 11:46:40.304582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:4108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.103 [2024-11-02 11:46:40.304612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:40.103 [2024-11-02 11:46:40.319331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166f1868 00:34:40.103 [2024-11-02 11:46:40.321465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:8439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.103 [2024-11-02 11:46:40.321494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:34 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:40.103 [2024-11-02 11:46:40.332866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166de8a8 00:34:40.103 [2024-11-02 11:46:40.334974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:14014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.103 [2024-11-02 11:46:40.335007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:40.103 [2024-11-02 11:46:40.346455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166e1f80 00:34:40.103 [2024-11-02 11:46:40.348576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:11163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.103 [2024-11-02 11:46:40.348605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:40.103 [2024-11-02 11:46:40.360108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166e3060 00:34:40.103 [2024-11-02 11:46:40.362185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:11633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.103 [2024-11-02 11:46:40.362219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:40.103 [2024-11-02 11:46:40.373679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166e27f0 00:34:40.103 [2024-11-02 11:46:40.375747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:14755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.103 [2024-11-02 11:46:40.375783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:40.103 [2024-11-02 11:46:40.387203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166fb480 00:34:40.103 [2024-11-02 11:46:40.389275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:3488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.103 [2024-11-02 11:46:40.389322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:40.103 [2024-11-02 11:46:40.400512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166df118 00:34:40.103 [2024-11-02 11:46:40.402613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:4533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.103 [2024-11-02 11:46:40.402645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:40.103 [2024-11-02 11:46:40.414284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166fac10 00:34:40.103 [2024-11-02 11:46:40.416344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:25175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.103 [2024-11-02 11:46:40.416386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:40.103 [2024-11-02 11:46:40.427922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166f46d0 00:34:40.103 [2024-11-02 11:46:40.429946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:2913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.103 [2024-11-02 11:46:40.429978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:40.103 [2024-11-02 11:46:40.441477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166edd58 00:34:40.103 [2024-11-02 11:46:40.443538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.104 [2024-11-02 11:46:40.443581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:40.104 [2024-11-02 11:46:40.455143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166f4f40 00:34:40.104 [2024-11-02 11:46:40.457135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:15891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.104 [2024-11-02 11:46:40.457167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:40.104 [2024-11-02 11:46:40.468677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166ed4e8 00:34:40.104 [2024-11-02 11:46:40.470713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:1292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.104 [2024-11-02 11:46:40.470745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:40.104 [2024-11-02 11:46:40.482273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166f57b0 00:34:40.104 [2024-11-02 11:46:40.484231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:16130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.104 [2024-11-02 11:46:40.484270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:40.104 [2024-11-02 11:46:40.494103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166ecc78 00:34:40.104 [2024-11-02 11:46:40.495467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:10997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.104 [2024-11-02 11:46:40.495496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:40.363 [2024-11-02 11:46:40.507849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166ee5c8 00:34:40.363 [2024-11-02 11:46:40.509186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:1565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.363 [2024-11-02 11:46:40.509220] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:40.363 [2024-11-02 11:46:40.521392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166f7538 00:34:40.363 [2024-11-02 11:46:40.522719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:4888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.363 [2024-11-02 11:46:40.522752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:40.363 [2024-11-02 11:46:40.534908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166eee38 00:34:40.363 [2024-11-02 11:46:40.536240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.363 [2024-11-02 11:46:40.536280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:40.363 [2024-11-02 11:46:40.548459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166f8618 00:34:40.363 [2024-11-02 11:46:40.549749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:11247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.363 [2024-11-02 11:46:40.549781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:40.363 [2024-11-02 11:46:40.560900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166de038 00:34:40.363 [2024-11-02 11:46:40.562155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:20202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.363 [2024-11-02 11:46:40.562187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:40.363 [2024-11-02 11:46:40.574510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166f8e88 00:34:40.363 [2024-11-02 11:46:40.575773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.363 [2024-11-02 11:46:40.575806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:40.363 [2024-11-02 11:46:40.588058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166df988 00:34:40.363 [2024-11-02 11:46:40.589309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:1174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.363 [2024-11-02 11:46:40.589338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:40.363 [2024-11-02 11:46:40.601548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18212c0) with pdu=0x2000166f96f8 00:34:40.363 [2024-11-02 11:46:40.602791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:24349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.363 [2024-11-02 
11:46:40.602830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:40.363 18633.50 IOPS, 72.79 MiB/s 00:34:40.363 Latency(us) 00:34:40.363 [2024-11-02T10:46:40.765Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:40.363 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:40.363 nvme0n1 : 2.01 18636.59 72.80 0.00 0.00 6857.19 2815.62 18350.08 00:34:40.363 [2024-11-02T10:46:40.765Z] =================================================================================================================== 00:34:40.363 [2024-11-02T10:46:40.765Z] Total : 18636.59 72.80 0.00 0.00 6857.19 2815.62 18350.08 00:34:40.363 { 00:34:40.363 "results": [ 00:34:40.363 { 00:34:40.363 "job": "nvme0n1", 00:34:40.363 "core_mask": "0x2", 00:34:40.363 "workload": "randwrite", 00:34:40.363 "status": "finished", 00:34:40.363 "queue_depth": 128, 00:34:40.363 "io_size": 4096, 00:34:40.363 "runtime": 2.009971, 00:34:40.363 "iops": 18636.587294045537, 00:34:40.363 "mibps": 72.79916911736538, 00:34:40.363 "io_failed": 0, 00:34:40.363 "io_timeout": 0, 00:34:40.363 "avg_latency_us": 6857.191433043337, 00:34:40.363 "min_latency_us": 2815.6207407407405, 00:34:40.363 "max_latency_us": 18350.08 00:34:40.363 } 00:34:40.363 ], 00:34:40.363 "core_count": 1 00:34:40.363 } 00:34:40.363 11:46:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:34:40.363 11:46:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:34:40.363 11:46:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:34:40.363 11:46:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:34:40.363 | .driver_specific 00:34:40.363 | .nvme_error 00:34:40.363 | .status_code 00:34:40.363 | .command_transient_transport_error' 00:34:40.622 11:46:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 146 > 0 )) 00:34:40.622 11:46:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3974702 00:34:40.622 11:46:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 3974702 ']' 00:34:40.622 11:46:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 3974702 00:34:40.622 11:46:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:34:40.622 11:46:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:40.622 11:46:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3974702 00:34:40.622 11:46:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:34:40.622 11:46:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:34:40.622 11:46:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3974702' 00:34:40.622 killing process with pid 3974702 00:34:40.622 11:46:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@971 -- # kill 3974702 00:34:40.622 Received shutdown signal, test time was about 2.000000 seconds 00:34:40.622 00:34:40.622 Latency(us) 00:34:40.622 [2024-11-02T10:46:41.024Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:40.622 [2024-11-02T10:46:41.024Z] =================================================================================================================== 00:34:40.622 [2024-11-02T10:46:41.024Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:40.622 11:46:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 3974702 00:34:40.880 11:46:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:34:40.880 11:46:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:34:40.880 11:46:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:34:40.880 11:46:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:34:40.880 11:46:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:34:40.880 11:46:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3975105 00:34:40.880 11:46:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:34:40.880 11:46:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3975105 /var/tmp/bperf.sock 00:34:40.880 11:46:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 3975105 ']' 00:34:40.880 11:46:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:40.880 11:46:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:40.880 11:46:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:40.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:40.880 11:46:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:40.880 11:46:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:40.880 [2024-11-02 11:46:41.182486] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:34:40.880 [2024-11-02 11:46:41.182594] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3975105 ] 00:34:40.880 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:40.880 Zero copy mechanism will not be used. 
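For reference, the repeated "COMMAND TRANSIENT TRANSPORT ERROR (00/22) ... dnr:0" completions above carry NVMe generic status 0x22 (Transient Transport Error) with the do-not-retry bit clear, and that is exactly the counter the harness reads back once the 2-second run finishes (host/digest.sh@27-28 in the trace just above). A minimal standalone sketch of that readback, assuming it is run from an SPDK checkout against the same bperf socket (the get_transient_errcount/bperf_rpc names are the test's own wrappers, not reproduced here):

    # Query per-bdev I/O statistics from the bdevperf instance; with
    # bdev_nvme_set_options --nvme-error-stat enabled, the reply includes
    # per-status-code NVMe error counters under driver_specific.
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

The test only requires the count to be non-zero ((( 146 > 0 )) in this pass) before it kills the bdevperf process and sets up the next write-size/queue-depth combination, which is what the lines that follow are doing.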
00:34:40.880 [2024-11-02 11:46:41.253915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:41.139 [2024-11-02 11:46:41.304760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:41.139 11:46:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:41.139 11:46:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:34:41.139 11:46:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:41.139 11:46:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:41.397 11:46:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:34:41.397 11:46:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.397 11:46:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:41.397 11:46:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.397 11:46:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:41.397 11:46:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:41.966 nvme0n1 00:34:41.966 11:46:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:34:41.966 11:46:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.966 11:46:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:41.966 11:46:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.966 11:46:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:34:41.966 11:46:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:41.966 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:41.966 Zero copy mechanism will not be used. 00:34:41.966 Running I/O for 2 seconds... 
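Condensed from the trace above, the setup for this second pass (128 KiB random writes at queue depth 16) is the usual bdevperf-over-RPC sequence. The commands below are an illustrative replay using paths relative to an SPDK checkout, with the flags and target address copied from the trace rather than from the script itself:

    # Start bdevperf as an RPC server (-z) on a private socket:
    # core mask 0x2, randwrite, 128 KiB I/O, queue depth 16, 2-second run.
    ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &

    # Once the socket answers (the harness does this with waitforlisten),
    # enable NVMe error counters and uncapped bdev-level retries, then attach
    # the target with TCP data digest (--ddgst) enabled.
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Re-arm crc32c error injection (the trace first clears it with -t disable,
    # then injects corruption; flags copied verbatim, issued via rpc_cmd rather
    # than the bperf socket).
    ./scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32

    # Kick off the workload defined on the bdevperf command line.
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

Because the injected digest failures complete as retryable transient transport errors and --bdev-retry-count -1 leaves retries uncapped, the run still finishes; only the error counter, read back the same way as before, distinguishes it from a clean pass. The "Data digest error" / "TRANSIENT TRANSPORT ERROR" lines that follow are this pass's equivalent of the stream above.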
00:34:41.966 [2024-11-02 11:46:42.341847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:41.966 [2024-11-02 11:46:42.342243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.966 [2024-11-02 11:46:42.342310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:41.966 [2024-11-02 11:46:42.353487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:41.966 [2024-11-02 11:46:42.353863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.966 [2024-11-02 11:46:42.353898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:41.966 [2024-11-02 11:46:42.363907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:41.966 [2024-11-02 11:46:42.364296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.966 [2024-11-02 11:46:42.364342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:42.227 [2024-11-02 11:46:42.375339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.227 [2024-11-02 11:46:42.375718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.227 [2024-11-02 11:46:42.375752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:42.227 [2024-11-02 11:46:42.385914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.227 [2024-11-02 11:46:42.386309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.227 [2024-11-02 11:46:42.386339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:42.227 [2024-11-02 11:46:42.395769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.227 [2024-11-02 11:46:42.395922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.227 [2024-11-02 11:46:42.395956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:42.227 [2024-11-02 11:46:42.406395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.227 [2024-11-02 11:46:42.406792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.227 [2024-11-02 11:46:42.406826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:42.227 [2024-11-02 11:46:42.416569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.227 [2024-11-02 11:46:42.416971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.227 [2024-11-02 11:46:42.417005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:42.227 [2024-11-02 11:46:42.427480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.227 [2024-11-02 11:46:42.427877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.227 [2024-11-02 11:46:42.427910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:42.227 [2024-11-02 11:46:42.436815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.227 [2024-11-02 11:46:42.436968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.227 [2024-11-02 11:46:42.436997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:42.227 [2024-11-02 11:46:42.446445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.227 [2024-11-02 11:46:42.446798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.227 [2024-11-02 11:46:42.446827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:42.227 [2024-11-02 11:46:42.454828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.227 [2024-11-02 11:46:42.455172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.227 [2024-11-02 11:46:42.455201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:42.227 [2024-11-02 11:46:42.463250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.227 [2024-11-02 11:46:42.463550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.227 [2024-11-02 11:46:42.463596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:42.227 [2024-11-02 11:46:42.472159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.227 [2024-11-02 11:46:42.472534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.227 [2024-11-02 11:46:42.472565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:42.227 [2024-11-02 11:46:42.480659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.227 [2024-11-02 11:46:42.480998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.227 [2024-11-02 11:46:42.481028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:42.227 [2024-11-02 11:46:42.489952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.227 [2024-11-02 11:46:42.490253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.227 [2024-11-02 11:46:42.490307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:42.227 [2024-11-02 11:46:42.499155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.227 [2024-11-02 11:46:42.499531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.227 [2024-11-02 11:46:42.499562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:42.227 [2024-11-02 11:46:42.508169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.227 [2024-11-02 11:46:42.508542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.227 [2024-11-02 11:46:42.508588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:42.227 [2024-11-02 11:46:42.516639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.227 [2024-11-02 11:46:42.516887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.227 [2024-11-02 11:46:42.516917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:42.227 [2024-11-02 11:46:42.525592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.227 [2024-11-02 11:46:42.525850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.227 [2024-11-02 11:46:42.525881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:42.227 [2024-11-02 11:46:42.533912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.227 [2024-11-02 11:46:42.534197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.227 [2024-11-02 11:46:42.534228] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:42.227 [2024-11-02 11:46:42.542017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.227 [2024-11-02 11:46:42.542357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.227 [2024-11-02 11:46:42.542387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:42.227 [2024-11-02 11:46:42.551036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.227 [2024-11-02 11:46:42.551399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.227 [2024-11-02 11:46:42.551430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:42.227 [2024-11-02 11:46:42.558959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.227 [2024-11-02 11:46:42.559219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.227 [2024-11-02 11:46:42.559250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:42.227 [2024-11-02 11:46:42.567121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.227 [2024-11-02 11:46:42.567435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.227 [2024-11-02 11:46:42.567471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:42.227 [2024-11-02 11:46:42.575052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.227 [2024-11-02 11:46:42.575316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.227 [2024-11-02 11:46:42.575346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:42.227 [2024-11-02 11:46:42.583219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.227 [2024-11-02 11:46:42.583573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.227 [2024-11-02 11:46:42.583603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:42.227 [2024-11-02 11:46:42.591616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.227 [2024-11-02 11:46:42.591906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.227 
[2024-11-02 11:46:42.591936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:42.227 [2024-11-02 11:46:42.600745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.227 [2024-11-02 11:46:42.601028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.227 [2024-11-02 11:46:42.601059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:42.228 [2024-11-02 11:46:42.610268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.228 [2024-11-02 11:46:42.610585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.228 [2024-11-02 11:46:42.610617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:42.228 [2024-11-02 11:46:42.618909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.228 [2024-11-02 11:46:42.619272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.228 [2024-11-02 11:46:42.619303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:42.489 [2024-11-02 11:46:42.628002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.489 [2024-11-02 11:46:42.628331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.489 [2024-11-02 11:46:42.628363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:42.489 [2024-11-02 11:46:42.636340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.489 [2024-11-02 11:46:42.636658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.489 [2024-11-02 11:46:42.636689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:42.489 [2024-11-02 11:46:42.644686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.489 [2024-11-02 11:46:42.644950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.489 [2024-11-02 11:46:42.644981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:42.489 [2024-11-02 11:46:42.652962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.489 [2024-11-02 11:46:42.653324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.489 [2024-11-02 11:46:42.653354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:42.489 [2024-11-02 11:46:42.660553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.489 [2024-11-02 11:46:42.660810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.489 [2024-11-02 11:46:42.660840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:42.489 [2024-11-02 11:46:42.669045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.489 [2024-11-02 11:46:42.669367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.489 [2024-11-02 11:46:42.669398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:42.489 [2024-11-02 11:46:42.678165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.489 [2024-11-02 11:46:42.678525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.489 [2024-11-02 11:46:42.678556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:42.489 [2024-11-02 11:46:42.685700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.489 [2024-11-02 11:46:42.685998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.489 [2024-11-02 11:46:42.686028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:42.489 [2024-11-02 11:46:42.693644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.489 [2024-11-02 11:46:42.693980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.489 [2024-11-02 11:46:42.694011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:42.489 [2024-11-02 11:46:42.702458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.489 [2024-11-02 11:46:42.702729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.489 [2024-11-02 11:46:42.702760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:42.489 [2024-11-02 11:46:42.711659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.489 [2024-11-02 11:46:42.712022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.489 [2024-11-02 11:46:42.712060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:42.489 [2024-11-02 11:46:42.720983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.489 [2024-11-02 11:46:42.721305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.489 [2024-11-02 11:46:42.721335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:42.489 [2024-11-02 11:46:42.730374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.489 [2024-11-02 11:46:42.730654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.489 [2024-11-02 11:46:42.730685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:42.489 [2024-11-02 11:46:42.739669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.489 [2024-11-02 11:46:42.739959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.489 [2024-11-02 11:46:42.739990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:42.489 [2024-11-02 11:46:42.749079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.489 [2024-11-02 11:46:42.749460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.489 [2024-11-02 11:46:42.749491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:42.489 [2024-11-02 11:46:42.758346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.489 [2024-11-02 11:46:42.758655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.489 [2024-11-02 11:46:42.758686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:42.489 [2024-11-02 11:46:42.767709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.489 [2024-11-02 11:46:42.768047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.489 [2024-11-02 11:46:42.768078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:42.489 [2024-11-02 11:46:42.777309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.489 [2024-11-02 11:46:42.777645] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.489 [2024-11-02 11:46:42.777676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:42.489 [2024-11-02 11:46:42.786291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.489 [2024-11-02 11:46:42.786554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.489 [2024-11-02 11:46:42.786585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:42.489 [2024-11-02 11:46:42.795457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.489 [2024-11-02 11:46:42.795722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.489 [2024-11-02 11:46:42.795753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:42.489 [2024-11-02 11:46:42.804345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.489 [2024-11-02 11:46:42.804653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.489 [2024-11-02 11:46:42.804683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:42.489 [2024-11-02 11:46:42.813362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.490 [2024-11-02 11:46:42.813716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.490 [2024-11-02 11:46:42.813746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:42.490 [2024-11-02 11:46:42.822767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.490 [2024-11-02 11:46:42.823158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.490 [2024-11-02 11:46:42.823187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:42.490 [2024-11-02 11:46:42.832011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.490 [2024-11-02 11:46:42.832351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.490 [2024-11-02 11:46:42.832382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:42.490 [2024-11-02 11:46:42.840939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.490 
[2024-11-02 11:46:42.841219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.490 [2024-11-02 11:46:42.841250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:42.490 [2024-11-02 11:46:42.849799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.490 [2024-11-02 11:46:42.850090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.490 [2024-11-02 11:46:42.850121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:42.490 [2024-11-02 11:46:42.857902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.490 [2024-11-02 11:46:42.858163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.490 [2024-11-02 11:46:42.858194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:42.490 [2024-11-02 11:46:42.866337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.490 [2024-11-02 11:46:42.866678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.490 [2024-11-02 11:46:42.866708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:42.490 [2024-11-02 11:46:42.875389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.490 [2024-11-02 11:46:42.875695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.490 [2024-11-02 11:46:42.875725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:42.490 [2024-11-02 11:46:42.884039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.490 [2024-11-02 11:46:42.884379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.490 [2024-11-02 11:46:42.884411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:42.751 [2024-11-02 11:46:42.892671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.751 [2024-11-02 11:46:42.892963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.751 [2024-11-02 11:46:42.892995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:42.751 [2024-11-02 11:46:42.901465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.751 [2024-11-02 11:46:42.901837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.751 [2024-11-02 11:46:42.901867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:42.751 [2024-11-02 11:46:42.910821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.751 [2024-11-02 11:46:42.911104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.751 [2024-11-02 11:46:42.911134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:42.751 [2024-11-02 11:46:42.918856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.751 [2024-11-02 11:46:42.919297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.751 [2024-11-02 11:46:42.919342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:42.751 [2024-11-02 11:46:42.927477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.751 [2024-11-02 11:46:42.927799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.751 [2024-11-02 11:46:42.927829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:42.751 [2024-11-02 11:46:42.935861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.751 [2024-11-02 11:46:42.936212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.751 [2024-11-02 11:46:42.936245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:42.751 [2024-11-02 11:46:42.945091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.751 [2024-11-02 11:46:42.945428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.751 [2024-11-02 11:46:42.945466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:42.751 [2024-11-02 11:46:42.954716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.751 [2024-11-02 11:46:42.955007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.751 [2024-11-02 11:46:42.955038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:42.751 [2024-11-02 11:46:42.964040] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.751 [2024-11-02 11:46:42.964392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.751 [2024-11-02 11:46:42.964422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:42.751 [2024-11-02 11:46:42.971835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.751 [2024-11-02 11:46:42.972116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.751 [2024-11-02 11:46:42.972145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:42.751 [2024-11-02 11:46:42.980865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.751 [2024-11-02 11:46:42.981190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.751 [2024-11-02 11:46:42.981220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:42.751 [2024-11-02 11:46:42.990368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.751 [2024-11-02 11:46:42.990688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.751 [2024-11-02 11:46:42.990717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:42.751 [2024-11-02 11:46:42.998804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.751 [2024-11-02 11:46:42.999166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.751 [2024-11-02 11:46:42.999197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:42.751 [2024-11-02 11:46:43.007134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.751 [2024-11-02 11:46:43.007440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.751 [2024-11-02 11:46:43.007469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:42.751 [2024-11-02 11:46:43.015889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.751 [2024-11-02 11:46:43.016214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.751 [2024-11-02 11:46:43.016243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
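For context on the block above: each data_crc32_calc_done error means the CRC32C digest computed over a received PDU payload did not match the DDGST value carried in the PDU, and the corresponding WRITE is then completed with COMMAND TRANSIENT TRANSPORT ERROR (the paired spdk_nvme_print_completion lines). A minimal, self-contained sketch of that digest check follows, assuming a plain bitwise CRC32C rather than SPDK's accelerated helpers; crc32c() and data_digest_ok() are illustrative names, not SPDK APIs.

/* CRC32C (Castagnoli), reflected polynomial 0x82F63B78: the digest algorithm
 * NVMe/TCP uses for the per-PDU data digest (DDGST). */
#include <stdint.h>
#include <stddef.h>

static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;

    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int k = 0; k < 8; k++)
            crc = (crc & 1u) ? (crc >> 1) ^ 0x82F63B78u : crc >> 1;
    }
    return crc ^ 0xFFFFFFFFu;
}

/* Receiver-side check in the spirit of the log lines above: a mismatch between
 * the digest computed over the payload and the DDGST taken from the wire is
 * what gets reported as "Data digest error". */
static int data_digest_ok(const uint8_t *payload, size_t len, uint32_t ddgst_from_wire)
{
    return crc32c(payload, len) == ddgst_from_wire;
}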
00:34:42.751 [2024-11-02 11:46:43.025060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.751 [2024-11-02 11:46:43.025433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.751 [2024-11-02 11:46:43.025463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:42.751 [2024-11-02 11:46:43.033046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.751 [2024-11-02 11:46:43.033461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.751 [2024-11-02 11:46:43.033491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:42.751 [2024-11-02 11:46:43.042378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.751 [2024-11-02 11:46:43.042710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.751 [2024-11-02 11:46:43.042740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:42.751 [2024-11-02 11:46:43.051336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.751 [2024-11-02 11:46:43.051668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.751 [2024-11-02 11:46:43.051697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:42.751 [2024-11-02 11:46:43.060414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.751 [2024-11-02 11:46:43.060782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.751 [2024-11-02 11:46:43.060812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:42.751 [2024-11-02 11:46:43.069067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.751 [2024-11-02 11:46:43.069412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.751 [2024-11-02 11:46:43.069443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:42.751 [2024-11-02 11:46:43.077534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.751 [2024-11-02 11:46:43.077825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.751 [2024-11-02 11:46:43.077854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:42.751 [2024-11-02 11:46:43.086434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.751 [2024-11-02 11:46:43.086729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.751 [2024-11-02 11:46:43.086758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:42.751 [2024-11-02 11:46:43.095000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.751 [2024-11-02 11:46:43.095322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.752 [2024-11-02 11:46:43.095352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:42.752 [2024-11-02 11:46:43.103376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.752 [2024-11-02 11:46:43.103671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.752 [2024-11-02 11:46:43.103700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:42.752 [2024-11-02 11:46:43.111789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.752 [2024-11-02 11:46:43.112109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.752 [2024-11-02 11:46:43.112141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:42.752 [2024-11-02 11:46:43.119775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.752 [2024-11-02 11:46:43.120108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.752 [2024-11-02 11:46:43.120139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:42.752 [2024-11-02 11:46:43.128055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.752 [2024-11-02 11:46:43.128440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.752 [2024-11-02 11:46:43.128472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:42.752 [2024-11-02 11:46:43.136935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.752 [2024-11-02 11:46:43.137317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.752 [2024-11-02 11:46:43.137348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:42.752 [2024-11-02 11:46:43.145328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:42.752 [2024-11-02 11:46:43.145700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.752 [2024-11-02 11:46:43.145729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:43.013 [2024-11-02 11:46:43.154072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.013 [2024-11-02 11:46:43.154364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.013 [2024-11-02 11:46:43.154397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:43.013 [2024-11-02 11:46:43.161960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.013 [2024-11-02 11:46:43.162324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.013 [2024-11-02 11:46:43.162355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:43.013 [2024-11-02 11:46:43.170921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.013 [2024-11-02 11:46:43.171177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.013 [2024-11-02 11:46:43.171214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:43.013 [2024-11-02 11:46:43.179290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.013 [2024-11-02 11:46:43.179565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.013 [2024-11-02 11:46:43.179595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:43.013 [2024-11-02 11:46:43.188236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.013 [2024-11-02 11:46:43.188519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.013 [2024-11-02 11:46:43.188550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:43.013 [2024-11-02 11:46:43.196855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.013 [2024-11-02 11:46:43.197140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.013 [2024-11-02 11:46:43.197169] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:43.013 [2024-11-02 11:46:43.206243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.013 [2024-11-02 11:46:43.206582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.013 [2024-11-02 11:46:43.206612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:43.013 [2024-11-02 11:46:43.215456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.013 [2024-11-02 11:46:43.215714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.013 [2024-11-02 11:46:43.215745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:43.013 [2024-11-02 11:46:43.223851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.013 [2024-11-02 11:46:43.224204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.013 [2024-11-02 11:46:43.224234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:43.014 [2024-11-02 11:46:43.232059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.014 [2024-11-02 11:46:43.232391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.014 [2024-11-02 11:46:43.232422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:43.014 [2024-11-02 11:46:43.240946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.014 [2024-11-02 11:46:43.241411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.014 [2024-11-02 11:46:43.241441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:43.014 [2024-11-02 11:46:43.250530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.014 [2024-11-02 11:46:43.250941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.014 [2024-11-02 11:46:43.250986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:43.014 [2024-11-02 11:46:43.259713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.014 [2024-11-02 11:46:43.260059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.014 
[2024-11-02 11:46:43.260088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:43.014 [2024-11-02 11:46:43.268783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.014 [2024-11-02 11:46:43.269101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.014 [2024-11-02 11:46:43.269132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:43.014 [2024-11-02 11:46:43.276885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.014 [2024-11-02 11:46:43.277229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.014 [2024-11-02 11:46:43.277267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:43.014 [2024-11-02 11:46:43.285364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.014 [2024-11-02 11:46:43.285694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.014 [2024-11-02 11:46:43.285725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:43.014 [2024-11-02 11:46:43.293443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.014 [2024-11-02 11:46:43.293704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.014 [2024-11-02 11:46:43.293749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:43.014 [2024-11-02 11:46:43.302172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.014 [2024-11-02 11:46:43.302562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.014 [2024-11-02 11:46:43.302592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:43.014 [2024-11-02 11:46:43.310831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.014 [2024-11-02 11:46:43.311116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.014 [2024-11-02 11:46:43.311147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:43.014 [2024-11-02 11:46:43.318709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.014 [2024-11-02 11:46:43.318968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.014 [2024-11-02 11:46:43.318999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:43.014 [2024-11-02 11:46:43.327449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.014 [2024-11-02 11:46:43.327765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.014 [2024-11-02 11:46:43.327796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:43.014 3473.00 IOPS, 434.12 MiB/s [2024-11-02T10:46:43.416Z] [2024-11-02 11:46:43.337045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.014 [2024-11-02 11:46:43.337322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.014 [2024-11-02 11:46:43.337352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:43.014 [2024-11-02 11:46:43.345648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.014 [2024-11-02 11:46:43.345900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.014 [2024-11-02 11:46:43.345929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:43.014 [2024-11-02 11:46:43.354160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.014 [2024-11-02 11:46:43.354503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.014 [2024-11-02 11:46:43.354534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:43.014 [2024-11-02 11:46:43.361807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.014 [2024-11-02 11:46:43.362176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.014 [2024-11-02 11:46:43.362206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:43.014 [2024-11-02 11:46:43.370768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.014 [2024-11-02 11:46:43.371089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.014 [2024-11-02 11:46:43.371119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:43.014 [2024-11-02 11:46:43.378828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.014 [2024-11-02 11:46:43.379194] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.014 [2024-11-02 11:46:43.379224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:43.014 [2024-11-02 11:46:43.387455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.014 [2024-11-02 11:46:43.387747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.014 [2024-11-02 11:46:43.387777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:43.014 [2024-11-02 11:46:43.395161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.014 [2024-11-02 11:46:43.395413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.014 [2024-11-02 11:46:43.395444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:43.014 [2024-11-02 11:46:43.403901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.014 [2024-11-02 11:46:43.404270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.014 [2024-11-02 11:46:43.404299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:43.014 [2024-11-02 11:46:43.412680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.014 [2024-11-02 11:46:43.413000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.014 [2024-11-02 11:46:43.413031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:43.276 [2024-11-02 11:46:43.421121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.276 [2024-11-02 11:46:43.421451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.276 [2024-11-02 11:46:43.421481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:43.276 [2024-11-02 11:46:43.429085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.276 [2024-11-02 11:46:43.429445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.276 [2024-11-02 11:46:43.429474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:43.276 [2024-11-02 11:46:43.437580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 
00:34:43.276 [2024-11-02 11:46:43.437944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.276 [2024-11-02 11:46:43.437973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:43.276 [2024-11-02 11:46:43.445893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.276 [2024-11-02 11:46:43.446203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.276 [2024-11-02 11:46:43.446234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:43.276 [2024-11-02 11:46:43.454233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.276 [2024-11-02 11:46:43.454574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.276 [2024-11-02 11:46:43.454604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:43.276 [2024-11-02 11:46:43.462947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.276 [2024-11-02 11:46:43.463301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.276 [2024-11-02 11:46:43.463331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:43.276 [2024-11-02 11:46:43.472325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.276 [2024-11-02 11:46:43.472642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.276 [2024-11-02 11:46:43.472672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:43.276 [2024-11-02 11:46:43.480820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.276 [2024-11-02 11:46:43.481143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.276 [2024-11-02 11:46:43.481172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:43.276 [2024-11-02 11:46:43.489516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.276 [2024-11-02 11:46:43.489745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.276 [2024-11-02 11:46:43.489776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:43.276 [2024-11-02 11:46:43.497989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.276 [2024-11-02 11:46:43.498279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.276 [2024-11-02 11:46:43.498317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:43.276 [2024-11-02 11:46:43.507059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.276 [2024-11-02 11:46:43.507366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.276 [2024-11-02 11:46:43.507397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:43.276 [2024-11-02 11:46:43.515444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.276 [2024-11-02 11:46:43.515716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.276 [2024-11-02 11:46:43.515746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:43.276 [2024-11-02 11:46:43.523940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.276 [2024-11-02 11:46:43.524235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.276 [2024-11-02 11:46:43.524273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:43.276 [2024-11-02 11:46:43.532351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.276 [2024-11-02 11:46:43.532645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.276 [2024-11-02 11:46:43.532674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:43.276 [2024-11-02 11:46:43.540575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.276 [2024-11-02 11:46:43.540874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.276 [2024-11-02 11:46:43.540911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:43.276 [2024-11-02 11:46:43.548915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.276 [2024-11-02 11:46:43.549206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.276 [2024-11-02 11:46:43.549236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:43.276 [2024-11-02 11:46:43.557796] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.276 [2024-11-02 11:46:43.558101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.276 [2024-11-02 11:46:43.558129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:43.276 [2024-11-02 11:46:43.565725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.276 [2024-11-02 11:46:43.566040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.276 [2024-11-02 11:46:43.566072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:43.276 [2024-11-02 11:46:43.574158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.276 [2024-11-02 11:46:43.574421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.276 [2024-11-02 11:46:43.574449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:43.276 [2024-11-02 11:46:43.581567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.276 [2024-11-02 11:46:43.581886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.276 [2024-11-02 11:46:43.581915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:43.277 [2024-11-02 11:46:43.589970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.277 [2024-11-02 11:46:43.590231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.277 [2024-11-02 11:46:43.590267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:43.277 [2024-11-02 11:46:43.598562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.277 [2024-11-02 11:46:43.598858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.277 [2024-11-02 11:46:43.598887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:43.277 [2024-11-02 11:46:43.607002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.277 [2024-11-02 11:46:43.607324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.277 [2024-11-02 11:46:43.607354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
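The completion lines print the NVMe status as (SCT/SC) plus the p, m and dnr bits; (00/22) is status code type 0x0 (generic command status) with status code 0x22, which the driver labels COMMAND TRANSIENT TRANSPORT ERROR, and dnr:0 marks the command as retryable. A small sketch of unpacking that 16-bit status word, assuming the standard completion-queue-entry bit layout; the struct and function names are illustrative only.

/* Decode the word behind "(SCT/SC) ... p:.. m:.. dnr:.." in the lines above:
 * bit 0 phase tag, bits 8:1 status code, bits 11:9 status code type,
 * bits 13:12 command retry delay, bit 14 more, bit 15 do-not-retry. */
#include <stdint.h>
#include <stdio.h>

struct status_bits {
    unsigned p, sc, sct, crd, m, dnr;
};

static struct status_bits decode_status(uint16_t status)
{
    struct status_bits f = {
        .p   = status & 0x1u,
        .sc  = (status >> 1) & 0xFFu,
        .sct = (status >> 9) & 0x7u,
        .crd = (status >> 12) & 0x3u,
        .m   = (status >> 14) & 0x1u,
        .dnr = (status >> 15) & 0x1u,
    };
    return f;
}

int main(void)
{
    /* (00/22) from the log: SCT 0x0 (generic), SC 0x22, with p:0 m:0 dnr:0. */
    struct status_bits f = decode_status((uint16_t)((0x22u << 1) | (0x0u << 9)));
    printf("sct=%#x sc=%#x p=%u m=%u dnr=%u\n", f.sct, f.sc, f.p, f.m, f.dnr);
    return 0;
}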
00:34:43.277 [2024-11-02 11:46:43.614445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.277 [2024-11-02 11:46:43.614773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.277 [2024-11-02 11:46:43.614802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:43.277 [2024-11-02 11:46:43.622311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.277 [2024-11-02 11:46:43.622596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.277 [2024-11-02 11:46:43.622624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:43.277 [2024-11-02 11:46:43.630499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.277 [2024-11-02 11:46:43.630679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.277 [2024-11-02 11:46:43.630707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:43.277 [2024-11-02 11:46:43.638605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.277 [2024-11-02 11:46:43.638898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.277 [2024-11-02 11:46:43.638926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:43.277 [2024-11-02 11:46:43.647012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.277 [2024-11-02 11:46:43.647266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.277 [2024-11-02 11:46:43.647295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:43.277 [2024-11-02 11:46:43.655001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.277 [2024-11-02 11:46:43.655344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.277 [2024-11-02 11:46:43.655372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:43.277 [2024-11-02 11:46:43.663351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.277 [2024-11-02 11:46:43.663619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.277 [2024-11-02 11:46:43.663648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:43.277 [2024-11-02 11:46:43.671740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.277 [2024-11-02 11:46:43.672082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.277 [2024-11-02 11:46:43.672111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:43.538 [2024-11-02 11:46:43.680436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.538 [2024-11-02 11:46:43.680757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.538 [2024-11-02 11:46:43.680786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:43.538 [2024-11-02 11:46:43.689537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.538 [2024-11-02 11:46:43.689772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.538 [2024-11-02 11:46:43.689801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:43.538 [2024-11-02 11:46:43.697465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.538 [2024-11-02 11:46:43.697721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.538 [2024-11-02 11:46:43.697749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:43.538 [2024-11-02 11:46:43.705817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.538 [2024-11-02 11:46:43.706105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.538 [2024-11-02 11:46:43.706133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:43.538 [2024-11-02 11:46:43.715058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.538 [2024-11-02 11:46:43.715384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.538 [2024-11-02 11:46:43.715412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:43.538 [2024-11-02 11:46:43.723233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.538 [2024-11-02 11:46:43.723554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.538 [2024-11-02 11:46:43.723583] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:43.538 [2024-11-02 11:46:43.731724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.538 [2024-11-02 11:46:43.731981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.538 [2024-11-02 11:46:43.732009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:43.538 [2024-11-02 11:46:43.739455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.538 [2024-11-02 11:46:43.739746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.538 [2024-11-02 11:46:43.739774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:43.538 [2024-11-02 11:46:43.748359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.538 [2024-11-02 11:46:43.748651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.538 [2024-11-02 11:46:43.748679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:43.538 [2024-11-02 11:46:43.756822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.538 [2024-11-02 11:46:43.757137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.538 [2024-11-02 11:46:43.757172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:43.538 [2024-11-02 11:46:43.765276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.538 [2024-11-02 11:46:43.765556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.538 [2024-11-02 11:46:43.765584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:43.538 [2024-11-02 11:46:43.774469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.538 [2024-11-02 11:46:43.774823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.538 [2024-11-02 11:46:43.774852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:43.538 [2024-11-02 11:46:43.783776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.538 [2024-11-02 11:46:43.784101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.538 [2024-11-02 11:46:43.784129] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:43.538 [2024-11-02 11:46:43.791927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.538 [2024-11-02 11:46:43.792183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.539 [2024-11-02 11:46:43.792212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:43.539 [2024-11-02 11:46:43.799769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.539 [2024-11-02 11:46:43.800025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.539 [2024-11-02 11:46:43.800053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:43.539 [2024-11-02 11:46:43.808871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.539 [2024-11-02 11:46:43.809160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.539 [2024-11-02 11:46:43.809189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:43.539 [2024-11-02 11:46:43.816721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.539 [2024-11-02 11:46:43.817052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.539 [2024-11-02 11:46:43.817080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:43.539 [2024-11-02 11:46:43.825663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.539 [2024-11-02 11:46:43.825966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.539 [2024-11-02 11:46:43.825994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:43.539 [2024-11-02 11:46:43.834533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.539 [2024-11-02 11:46:43.834825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.539 [2024-11-02 11:46:43.834853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:43.539 [2024-11-02 11:46:43.842741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.539 [2024-11-02 11:46:43.842999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:43.539 [2024-11-02 11:46:43.843027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:43.539 [2024-11-02 11:46:43.851433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.539 [2024-11-02 11:46:43.851641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.539 [2024-11-02 11:46:43.851668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:43.539 [2024-11-02 11:46:43.859971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.539 [2024-11-02 11:46:43.860265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.539 [2024-11-02 11:46:43.860293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:43.539 [2024-11-02 11:46:43.868827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.539 [2024-11-02 11:46:43.869191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.539 [2024-11-02 11:46:43.869219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:43.539 [2024-11-02 11:46:43.877789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.539 [2024-11-02 11:46:43.878047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.539 [2024-11-02 11:46:43.878075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:43.539 [2024-11-02 11:46:43.886049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.539 [2024-11-02 11:46:43.886403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.539 [2024-11-02 11:46:43.886431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:43.539 [2024-11-02 11:46:43.894909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.539 [2024-11-02 11:46:43.895227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.539 [2024-11-02 11:46:43.895262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:43.539 [2024-11-02 11:46:43.904532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.539 [2024-11-02 11:46:43.904801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.539 [2024-11-02 11:46:43.904836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:43.539 [2024-11-02 11:46:43.913828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.539 [2024-11-02 11:46:43.914030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.539 [2024-11-02 11:46:43.914059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:43.539 [2024-11-02 11:46:43.922792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.539 [2024-11-02 11:46:43.922983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.539 [2024-11-02 11:46:43.923012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:43.539 [2024-11-02 11:46:43.931328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.539 [2024-11-02 11:46:43.931572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.539 [2024-11-02 11:46:43.931600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:43.800 [2024-11-02 11:46:43.939659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.800 [2024-11-02 11:46:43.939893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.800 [2024-11-02 11:46:43.939924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:43.800 [2024-11-02 11:46:43.947861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.800 [2024-11-02 11:46:43.948097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.800 [2024-11-02 11:46:43.948127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:43.800 [2024-11-02 11:46:43.955750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.800 [2024-11-02 11:46:43.955941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.800 [2024-11-02 11:46:43.955969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:43.800 [2024-11-02 11:46:43.964899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.800 [2024-11-02 11:46:43.965191] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.800 [2024-11-02 11:46:43.965220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:43.800 [2024-11-02 11:46:43.973875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.800 [2024-11-02 11:46:43.974072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.800 [2024-11-02 11:46:43.974100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:43.800 [2024-11-02 11:46:43.982117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.800 [2024-11-02 11:46:43.982341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.800 [2024-11-02 11:46:43.982370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:43.800 [2024-11-02 11:46:43.989661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.800 [2024-11-02 11:46:43.989889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.800 [2024-11-02 11:46:43.989917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:43.800 [2024-11-02 11:46:43.998414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.800 [2024-11-02 11:46:43.998728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.800 [2024-11-02 11:46:43.998756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:43.800 [2024-11-02 11:46:44.006740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.800 [2024-11-02 11:46:44.006920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.800 [2024-11-02 11:46:44.006949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:43.800 [2024-11-02 11:46:44.014428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.800 [2024-11-02 11:46:44.014637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.800 [2024-11-02 11:46:44.014666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:43.800 [2024-11-02 11:46:44.022651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.800 [2024-11-02 11:46:44.022834] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.800 [2024-11-02 11:46:44.022862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:43.800 [2024-11-02 11:46:44.030501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.800 [2024-11-02 11:46:44.030667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.800 [2024-11-02 11:46:44.030696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:43.801 [2024-11-02 11:46:44.038231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.801 [2024-11-02 11:46:44.038565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.801 [2024-11-02 11:46:44.038594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:43.801 [2024-11-02 11:46:44.047097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.801 [2024-11-02 11:46:44.047391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.801 [2024-11-02 11:46:44.047420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:43.801 [2024-11-02 11:46:44.055777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.801 [2024-11-02 11:46:44.056012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.801 [2024-11-02 11:46:44.056040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:43.801 [2024-11-02 11:46:44.063934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.801 [2024-11-02 11:46:44.064132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.801 [2024-11-02 11:46:44.064160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:43.801 [2024-11-02 11:46:44.071388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.801 [2024-11-02 11:46:44.071556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.801 [2024-11-02 11:46:44.071584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:43.801 [2024-11-02 11:46:44.079075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 
00:34:43.801 [2024-11-02 11:46:44.079280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.801 [2024-11-02 11:46:44.079309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:43.801 [2024-11-02 11:46:44.087715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.801 [2024-11-02 11:46:44.087946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.801 [2024-11-02 11:46:44.087974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:43.801 [2024-11-02 11:46:44.096055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.801 [2024-11-02 11:46:44.096360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.801 [2024-11-02 11:46:44.096389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:43.801 [2024-11-02 11:46:44.104850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.801 [2024-11-02 11:46:44.105110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.801 [2024-11-02 11:46:44.105139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:43.801 [2024-11-02 11:46:44.113216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.801 [2024-11-02 11:46:44.113481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.801 [2024-11-02 11:46:44.113510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:43.801 [2024-11-02 11:46:44.122438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.801 [2024-11-02 11:46:44.122672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.801 [2024-11-02 11:46:44.122710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:43.801 [2024-11-02 11:46:44.130077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.801 [2024-11-02 11:46:44.130312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.801 [2024-11-02 11:46:44.130341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:43.801 [2024-11-02 11:46:44.137827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.801 [2024-11-02 11:46:44.138095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.801 [2024-11-02 11:46:44.138124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:43.801 [2024-11-02 11:46:44.145740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.801 [2024-11-02 11:46:44.146034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.801 [2024-11-02 11:46:44.146061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:43.801 [2024-11-02 11:46:44.154882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.801 [2024-11-02 11:46:44.155223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.801 [2024-11-02 11:46:44.155252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:43.801 [2024-11-02 11:46:44.162897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.801 [2024-11-02 11:46:44.163125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.801 [2024-11-02 11:46:44.163154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:43.801 [2024-11-02 11:46:44.171341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.801 [2024-11-02 11:46:44.171510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.801 [2024-11-02 11:46:44.171538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:43.801 [2024-11-02 11:46:44.179696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.801 [2024-11-02 11:46:44.179927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.801 [2024-11-02 11:46:44.179956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:43.801 [2024-11-02 11:46:44.188829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.801 [2024-11-02 11:46:44.189090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.801 [2024-11-02 11:46:44.189119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:43.801 [2024-11-02 11:46:44.196441] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:43.801 [2024-11-02 11:46:44.196675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.801 [2024-11-02 11:46:44.196704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:44.060 [2024-11-02 11:46:44.205450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:44.060 [2024-11-02 11:46:44.205651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.060 [2024-11-02 11:46:44.205681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:44.060 [2024-11-02 11:46:44.213204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:44.060 [2024-11-02 11:46:44.213467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.060 [2024-11-02 11:46:44.213496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:44.060 [2024-11-02 11:46:44.221675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:44.060 [2024-11-02 11:46:44.221931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.060 [2024-11-02 11:46:44.221960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:44.060 [2024-11-02 11:46:44.230282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:44.060 [2024-11-02 11:46:44.230549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.060 [2024-11-02 11:46:44.230578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:44.060 [2024-11-02 11:46:44.238566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:44.060 [2024-11-02 11:46:44.238889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.060 [2024-11-02 11:46:44.238918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:44.060 [2024-11-02 11:46:44.246595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:44.060 [2024-11-02 11:46:44.246903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.060 [2024-11-02 11:46:44.246931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
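The entries above all follow one pattern: tcp.c:data_crc32_calc_done flags a CRC32C data digest mismatch on a received data PDU, nvme_qpair.c prints the affected WRITE, and its completion carries COMMAND TRANSIENT TRANSPORT ERROR (00/22), a retryable status rather than a hard I/O failure. The sketch below shows how a host connection with data digest validation enabled might be attached through the same bperf RPC socket this run uses; the --ddgst flag name and the subsystem NQN are assumptions for illustration, not values taken from this log.

    # Hedged sketch: attach the TCP controller with data digest enabled so the host
    # computes and verifies CRC32C over every data PDU (flag name and NQN assumed).
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 --ddgst

With digest validation active, any mismatch is surfaced exactly as in the trace above: the command completes with a transient transport status instead of corrupting data silently.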
00:34:44.060 [2024-11-02 11:46:44.255233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:44.060 [2024-11-02 11:46:44.255505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.060 [2024-11-02 11:46:44.255533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:44.060 [2024-11-02 11:46:44.263490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:44.060 [2024-11-02 11:46:44.263703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.060 [2024-11-02 11:46:44.263731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:44.060 [2024-11-02 11:46:44.272023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:44.060 [2024-11-02 11:46:44.272226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.060 [2024-11-02 11:46:44.272254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:44.060 [2024-11-02 11:46:44.280781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:44.060 [2024-11-02 11:46:44.280975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.061 [2024-11-02 11:46:44.281003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:44.061 [2024-11-02 11:46:44.289264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:44.061 [2024-11-02 11:46:44.289517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.061 [2024-11-02 11:46:44.289546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:44.061 [2024-11-02 11:46:44.297690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:44.061 [2024-11-02 11:46:44.297871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.061 [2024-11-02 11:46:44.297899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:44.061 [2024-11-02 11:46:44.306039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:44.061 [2024-11-02 11:46:44.306221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.061 [2024-11-02 11:46:44.306251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:44.061 [2024-11-02 11:46:44.315196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:44.061 [2024-11-02 11:46:44.315512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.061 [2024-11-02 11:46:44.315542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:44.061 [2024-11-02 11:46:44.324061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:44.061 [2024-11-02 11:46:44.324397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.061 [2024-11-02 11:46:44.324426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:44.061 3575.50 IOPS, 446.94 MiB/s [2024-11-02T10:46:44.463Z] [2024-11-02 11:46:44.333932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1821600) with pdu=0x2000166fef90 00:34:44.061 [2024-11-02 11:46:44.334184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:44.061 [2024-11-02 11:46:44.334213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:44.061 00:34:44.061 Latency(us) 00:34:44.061 [2024-11-02T10:46:44.463Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:44.061 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:34:44.061 nvme0n1 : 2.01 3573.02 446.63 0.00 0.00 4466.78 3252.53 11845.03 00:34:44.061 [2024-11-02T10:46:44.463Z] =================================================================================================================== 00:34:44.061 [2024-11-02T10:46:44.463Z] Total : 3573.02 446.63 0.00 0.00 4466.78 3252.53 11845.03 00:34:44.061 { 00:34:44.061 "results": [ 00:34:44.061 { 00:34:44.061 "job": "nvme0n1", 00:34:44.061 "core_mask": "0x2", 00:34:44.061 "workload": "randwrite", 00:34:44.061 "status": "finished", 00:34:44.061 "queue_depth": 16, 00:34:44.061 "io_size": 131072, 00:34:44.061 "runtime": 2.005584, 00:34:44.061 "iops": 3573.0241166662677, 00:34:44.061 "mibps": 446.62801458328346, 00:34:44.061 "io_failed": 0, 00:34:44.061 "io_timeout": 0, 00:34:44.061 "avg_latency_us": 4466.7836669044145, 00:34:44.061 "min_latency_us": 3252.5274074074073, 00:34:44.061 "max_latency_us": 11845.025185185184 00:34:44.061 } 00:34:44.061 ], 00:34:44.061 "core_count": 1 00:34:44.061 } 00:34:44.061 11:46:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:34:44.061 11:46:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:34:44.061 11:46:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:34:44.061 11:46:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:34:44.061 | .driver_specific 00:34:44.061 | .nvme_error 00:34:44.061 | .status_code 
00:34:44.061 | .command_transient_transport_error' 00:34:44.320 11:46:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 231 > 0 )) 00:34:44.320 11:46:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3975105 00:34:44.320 11:46:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 3975105 ']' 00:34:44.320 11:46:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 3975105 00:34:44.320 11:46:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:34:44.320 11:46:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:44.320 11:46:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3975105 00:34:44.320 11:46:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:34:44.320 11:46:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:34:44.320 11:46:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3975105' 00:34:44.320 killing process with pid 3975105 00:34:44.320 11:46:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 3975105 00:34:44.320 Received shutdown signal, test time was about 2.000000 seconds 00:34:44.320 00:34:44.320 Latency(us) 00:34:44.320 [2024-11-02T10:46:44.722Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:44.320 [2024-11-02T10:46:44.722Z] =================================================================================================================== 00:34:44.320 [2024-11-02T10:46:44.722Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:44.320 11:46:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 3975105 00:34:44.579 11:46:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3973743 00:34:44.579 11:46:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 3973743 ']' 00:34:44.579 11:46:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 3973743 00:34:44.579 11:46:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:34:44.579 11:46:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:44.579 11:46:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3973743 00:34:44.579 11:46:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:34:44.579 11:46:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:34:44.579 11:46:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3973743' 00:34:44.579 killing process with pid 3973743 00:34:44.579 11:46:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 3973743 00:34:44.579 11:46:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 3973743 
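The trace above is the pass/fail check for this digest-error run: get_transient_errcount pulls per-bdev NVMe error statistics over the bperf RPC socket and asserts that the transient-transport-error counter is non-zero (here 231). A minimal standalone version of that query, assuming a bdevperf instance is still listening on /var/tmp/bperf.sock with nvme0n1 attached, could look like:

    # Read the transient transport error counter for nvme0n1 via bdev_get_iostat,
    # then require at least one digest-induced error, mirroring the (( 231 > 0 )) check.
    errcount=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    (( errcount > 0 )) || echo "expected transient transport errors, got ${errcount:-none}"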
00:34:44.838 00:34:44.838 real 0m15.530s 00:34:44.838 user 0m31.009s 00:34:44.838 sys 0m4.186s 00:34:44.838 11:46:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:44.838 11:46:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:44.838 ************************************ 00:34:44.838 END TEST nvmf_digest_error 00:34:44.838 ************************************ 00:34:44.838 11:46:45 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:34:44.838 11:46:45 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:34:44.838 11:46:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:44.838 11:46:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:34:44.838 11:46:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:44.838 11:46:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:34:44.838 11:46:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:44.838 11:46:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:44.838 rmmod nvme_tcp 00:34:44.838 rmmod nvme_fabrics 00:34:44.838 rmmod nvme_keyring 00:34:44.838 11:46:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:44.838 11:46:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:34:44.838 11:46:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:34:44.838 11:46:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 3973743 ']' 00:34:44.838 11:46:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 3973743 00:34:44.838 11:46:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@952 -- # '[' -z 3973743 ']' 00:34:44.838 11:46:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@956 -- # kill -0 3973743 00:34:44.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3973743) - No such process 00:34:44.838 11:46:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@979 -- # echo 'Process with pid 3973743 is not found' 00:34:44.838 Process with pid 3973743 is not found 00:34:44.838 11:46:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:44.838 11:46:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:44.838 11:46:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:44.838 11:46:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:34:44.838 11:46:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:34:44.838 11:46:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:44.838 11:46:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:34:44.838 11:46:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:44.838 11:46:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:44.838 11:46:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:44.838 11:46:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:44.838 11:46:45 nvmf_tcp.nvmf_host.nvmf_digest 
-- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:47.377 11:46:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:47.377 00:34:47.377 real 0m35.348s 00:34:47.377 user 1m2.257s 00:34:47.377 sys 0m10.047s 00:34:47.377 11:46:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:47.377 11:46:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:47.377 ************************************ 00:34:47.377 END TEST nvmf_digest 00:34:47.377 ************************************ 00:34:47.377 11:46:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:34:47.377 11:46:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:34:47.377 11:46:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:34:47.377 11:46:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:34:47.377 11:46:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:34:47.377 11:46:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:47.377 11:46:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.377 ************************************ 00:34:47.377 START TEST nvmf_bdevperf 00:34:47.377 ************************************ 00:34:47.377 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:34:47.377 * Looking for test storage... 00:34:47.377 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:47.377 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:47.377 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lcov --version 00:34:47.377 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:47.377 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:47.377 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:47.377 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:47.377 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:47.377 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:34:47.377 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:34:47.377 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:34:47.377 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:34:47.377 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:34:47.377 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:34:47.377 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:34:47.377 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:47.377 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:34:47.377 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:34:47.377 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:34:47.377 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:47.377 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:34:47.377 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:34:47.377 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:47.377 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:34:47.377 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:34:47.377 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:34:47.377 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:34:47.377 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:47.377 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:34:47.377 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:34:47.377 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:47.378 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:47.378 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:34:47.378 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:47.378 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:47.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:47.378 --rc genhtml_branch_coverage=1 00:34:47.378 --rc genhtml_function_coverage=1 00:34:47.378 --rc genhtml_legend=1 00:34:47.378 --rc geninfo_all_blocks=1 00:34:47.378 --rc geninfo_unexecuted_blocks=1 00:34:47.378 00:34:47.378 ' 00:34:47.378 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:47.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:47.378 --rc genhtml_branch_coverage=1 00:34:47.378 --rc genhtml_function_coverage=1 00:34:47.378 --rc genhtml_legend=1 00:34:47.378 --rc geninfo_all_blocks=1 00:34:47.378 --rc geninfo_unexecuted_blocks=1 00:34:47.378 00:34:47.378 ' 00:34:47.378 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:47.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:47.378 --rc genhtml_branch_coverage=1 00:34:47.378 --rc genhtml_function_coverage=1 00:34:47.378 --rc genhtml_legend=1 00:34:47.378 --rc geninfo_all_blocks=1 00:34:47.378 --rc geninfo_unexecuted_blocks=1 00:34:47.378 00:34:47.378 ' 00:34:47.378 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:47.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:47.378 --rc genhtml_branch_coverage=1 00:34:47.378 --rc genhtml_function_coverage=1 00:34:47.378 --rc genhtml_legend=1 00:34:47.378 --rc geninfo_all_blocks=1 00:34:47.378 --rc geninfo_unexecuted_blocks=1 00:34:47.378 00:34:47.378 ' 00:34:47.378 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:47.378 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:34:47.378 11:46:47 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:47.378 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:47.378 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:47.378 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:47.378 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:47.378 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:47.378 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:47.378 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:47.378 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:47.378 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:47.378 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:47.378 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:47.378 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:47.378 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:47.378 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:47.378 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:47.378 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:47.378 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:34:47.378 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:47.378 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:47.378 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:47.378 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:47.378 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:47.378 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:47.378 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:34:47.378 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:47.378 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:34:47.378 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:47.378 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:47.378 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:47.378 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:47.378 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:47.378 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:47.378 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:47.378 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:47.378 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:47.378 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:47.378 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:47.378 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:47.378 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:34:47.378 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:47.378 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:47.378 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:47.378 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:47.378 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:47.378 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:47.378 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:47.378 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:47.378 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:47.378 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:47.378 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:34:47.378 11:46:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:49.284 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:49.284 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:34:49.284 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:49.284 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:49.284 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:49.284 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:49.284 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:49.284 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:34:49.284 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:49.284 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:34:49.284 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:34:49.284 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:34:49.284 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:34:49.284 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:34:49.284 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:34:49.284 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:49.284 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:49.284 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:49.284 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:49.284 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:49.284 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:49.284 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:49.284 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:49.284 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:49.284 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:49.284 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:49.284 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:49.284 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:49.284 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:49.284 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:49.284 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:49.284 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:49.284 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:49.284 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:49.284 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:49.284 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:49.284 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:49.284 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:49.284 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:49.284 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:49.284 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:49.284 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:49.284 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:49.284 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:49.284 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:49.284 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:49.284 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:49.284 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:49.284 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:49.284 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:49.284 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:49.284 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:49.284 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:49.284 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:49.284 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
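The discovery pass above is matching PCI vendor/device IDs against known NVMe-oF-capable NICs (E810 IDs 0x1592/0x159b here, both bound to the ice driver) and then resolving each matched function to its kernel net device. A minimal sketch of that idea, reading sysfs directly instead of the script's pre-built pci_bus_cache (a simplification made here to keep the example self-contained):

intel=0x8086
e810_ids=(0x1592 0x159b)              # E810 device IDs the log matches on
pci_devs=()
for dev in /sys/bus/pci/devices/*; do
    vendor=$(cat "$dev/vendor") device=$(cat "$dev/device")
    [[ $vendor == "$intel" ]] || continue
    for id in "${e810_ids[@]}"; do
        [[ $device == "$id" ]] && pci_devs+=("${dev##*/}")
    done
done
# Resolve each matched function to its net device, as common.sh@411 does
for pci in "${pci_devs[@]}"; do
    net=(/sys/bus/pci/devices/"$pci"/net/*)
    echo "Found net devices under $pci: ${net[@]##*/}"
done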
00:34:49.284 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:49.284 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:49.284 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:49.284 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:49.284 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:49.284 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:49.284 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:49.284 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:49.284 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:49.284 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:49.284 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:49.284 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:49.284 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:49.284 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:49.284 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:49.284 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:49.284 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:49.284 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:49.284 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:34:49.284 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:49.284 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:49.284 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:49.284 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:49.284 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:49.284 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:49.284 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:49.284 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:49.284 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:49.285 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:49.285 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:49.285 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:49.285 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:49.285 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:34:49.285 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:49.285 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:49.285 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:49.285 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:49.285 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:49.285 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:49.285 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:49.285 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:49.285 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:49.285 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:49.285 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:49.285 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:49.285 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:49.285 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.318 ms 00:34:49.285 00:34:49.285 --- 10.0.0.2 ping statistics --- 00:34:49.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:49.285 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:34:49.285 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:49.285 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:49.285 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:34:49.285 00:34:49.285 --- 10.0.0.1 ping statistics --- 00:34:49.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:49.285 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:34:49.285 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:49.285 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:34:49.285 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:49.285 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:49.285 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:49.285 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:49.285 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:49.285 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:49.285 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:49.285 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:34:49.285 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:34:49.285 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:49.285 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:49.285 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:49.285 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3977581 00:34:49.285 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:34:49.285 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3977581 00:34:49.285 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 3977581 ']' 00:34:49.285 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:49.285 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:49.285 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:49.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:49.285 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:49.285 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:49.285 [2024-11-02 11:46:49.590226] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 
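Condensed, the namespace and interface plumbing recorded above boils down to the commands below (device names, addresses and port are taken from the log; the address flushes and the iptables comment are omitted for brevity; run as root):

# The target side lives in its own network namespace; the initiator stays in
# the default namespace and reaches it over the second E810 port.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
ping -c 1 10.0.0.2                                             # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator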
00:34:49.285 [2024-11-02 11:46:49.590330] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:49.285 [2024-11-02 11:46:49.664080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:49.543 [2024-11-02 11:46:49.710442] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:49.543 [2024-11-02 11:46:49.710494] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:49.543 [2024-11-02 11:46:49.710518] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:49.543 [2024-11-02 11:46:49.710544] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:49.543 [2024-11-02 11:46:49.710554] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:49.543 [2024-11-02 11:46:49.712028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:49.543 [2024-11-02 11:46:49.712092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:49.543 [2024-11-02 11:46:49.712095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:49.543 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:49.543 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:34:49.543 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:49.543 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:49.543 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:49.543 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:49.543 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:49.543 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:49.543 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:49.543 [2024-11-02 11:46:49.850367] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:49.543 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:49.543 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:49.543 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:49.543 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:49.543 Malloc0 00:34:49.543 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:49.543 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:49.543 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:49.543 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:49.543 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:34:49.543 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:49.543 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:49.543 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:49.543 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:49.543 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:49.543 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:49.543 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:49.543 [2024-11-02 11:46:49.910213] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:49.543 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:49.543 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:34:49.543 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:34:49.543 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:34:49.543 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:34:49.543 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:49.543 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:49.543 { 00:34:49.543 "params": { 00:34:49.543 "name": "Nvme$subsystem", 00:34:49.543 "trtype": "$TEST_TRANSPORT", 00:34:49.543 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:49.543 "adrfam": "ipv4", 00:34:49.543 "trsvcid": "$NVMF_PORT", 00:34:49.543 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:49.543 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:49.543 "hdgst": ${hdgst:-false}, 00:34:49.543 "ddgst": ${ddgst:-false} 00:34:49.543 }, 00:34:49.543 "method": "bdev_nvme_attach_controller" 00:34:49.543 } 00:34:49.543 EOF 00:34:49.543 )") 00:34:49.543 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:34:49.543 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:34:49.543 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:34:49.543 11:46:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:49.543 "params": { 00:34:49.543 "name": "Nvme1", 00:34:49.543 "trtype": "tcp", 00:34:49.543 "traddr": "10.0.0.2", 00:34:49.543 "adrfam": "ipv4", 00:34:49.543 "trsvcid": "4420", 00:34:49.543 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:49.543 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:49.543 "hdgst": false, 00:34:49.543 "ddgst": false 00:34:49.543 }, 00:34:49.543 "method": "bdev_nvme_attach_controller" 00:34:49.543 }' 00:34:49.801 [2024-11-02 11:46:49.958430] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 
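The rpc_cmd calls above are issued over the target's JSON-RPC socket. Driven by hand from an SPDK checkout, the same target bring-up would look roughly as follows (paths are relative to the SPDK tree and the default /var/tmp/spdk.sock RPC socket is assumed, as in the log; the harness additionally waits for the socket before issuing RPCs):

# Start the target inside the test namespace, then configure it over JSON-RPC.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192      # flags as the harness passes them
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0         # 64 MiB bdev, 512 B blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420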
00:34:49.801 [2024-11-02 11:46:49.958510] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3977606 ] 00:34:49.801 [2024-11-02 11:46:50.030594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:49.801 [2024-11-02 11:46:50.078322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:50.059 Running I/O for 1 seconds... 00:34:50.999 8279.00 IOPS, 32.34 MiB/s 00:34:50.999 Latency(us) 00:34:50.999 [2024-11-02T10:46:51.401Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:50.999 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:50.999 Verification LBA range: start 0x0 length 0x4000 00:34:50.999 Nvme1n1 : 1.02 8362.26 32.67 0.00 0.00 15246.76 3070.48 18155.90 00:34:50.999 [2024-11-02T10:46:51.401Z] =================================================================================================================== 00:34:50.999 [2024-11-02T10:46:51.401Z] Total : 8362.26 32.67 0.00 0.00 15246.76 3070.48 18155.90 00:34:51.258 11:46:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3977784 00:34:51.258 11:46:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:34:51.258 11:46:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:34:51.258 11:46:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:34:51.258 11:46:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:34:51.258 11:46:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:34:51.258 11:46:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:51.258 11:46:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:51.258 { 00:34:51.258 "params": { 00:34:51.258 "name": "Nvme$subsystem", 00:34:51.258 "trtype": "$TEST_TRANSPORT", 00:34:51.258 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:51.258 "adrfam": "ipv4", 00:34:51.258 "trsvcid": "$NVMF_PORT", 00:34:51.258 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:51.258 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:51.258 "hdgst": ${hdgst:-false}, 00:34:51.258 "ddgst": ${ddgst:-false} 00:34:51.258 }, 00:34:51.258 "method": "bdev_nvme_attach_controller" 00:34:51.258 } 00:34:51.258 EOF 00:34:51.258 )") 00:34:51.258 11:46:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:34:51.258 11:46:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
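The 1-second verify pass above (queue depth 128, 4 KiB I/O, roughly 8.3k IOPS against the Malloc-backed namespace) was fed its NVMe-oF attach parameters through /dev/fd/62. To reproduce it outside the harness, the same configuration can go into a plain file; only the "params" object below is copied from the log, while the surrounding subsystems/bdev/config wrapper is the usual SPDK JSON-config layout, reconstructed here rather than taken from the output:

cat > /tmp/nvmf_bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# 128-deep, 4 KiB verify workload for 1 second, as in the first run above:
./build/examples/bdevperf --json /tmp/nvmf_bdevperf.json -q 128 -o 4096 -w verify -t 1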
00:34:51.258 11:46:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:34:51.258 11:46:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:51.258 "params": { 00:34:51.258 "name": "Nvme1", 00:34:51.258 "trtype": "tcp", 00:34:51.258 "traddr": "10.0.0.2", 00:34:51.258 "adrfam": "ipv4", 00:34:51.258 "trsvcid": "4420", 00:34:51.258 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:51.258 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:51.258 "hdgst": false, 00:34:51.258 "ddgst": false 00:34:51.258 }, 00:34:51.258 "method": "bdev_nvme_attach_controller" 00:34:51.258 }' 00:34:51.258 [2024-11-02 11:46:51.564087] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:34:51.258 [2024-11-02 11:46:51.564185] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3977784 ] 00:34:51.258 [2024-11-02 11:46:51.636003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:51.543 [2024-11-02 11:46:51.681948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:51.543 Running I/O for 15 seconds... 00:34:53.881 8260.00 IOPS, 32.27 MiB/s [2024-11-02T10:46:54.545Z] 8330.50 IOPS, 32.54 MiB/s [2024-11-02T10:46:54.545Z] 11:46:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3977581 00:34:54.143 11:46:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:34:54.143 [2024-11-02 11:46:54.525408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:46784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.143 [2024-11-02 11:46:54.525459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.143 [2024-11-02 11:46:54.525490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:46792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.143 [2024-11-02 11:46:54.525508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.143 [2024-11-02 11:46:54.525525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:46800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.143 [2024-11-02 11:46:54.525540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.143 [2024-11-02 11:46:54.525557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:46808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.143 [2024-11-02 11:46:54.525579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.143 [2024-11-02 11:46:54.525619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:46816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.143 [2024-11-02 11:46:54.525634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.143 [2024-11-02 11:46:54.525649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:46824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.143 [2024-11-02 
11:46:54.525663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.143 [2024-11-02 11:46:54.525692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:46832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.143 [2024-11-02 11:46:54.525706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.143 [2024-11-02 11:46:54.525720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:46840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.143 [2024-11-02 11:46:54.525734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.143 [2024-11-02 11:46:54.525750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:46848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.143 [2024-11-02 11:46:54.525764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.143 [2024-11-02 11:46:54.525779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:46856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.143 [2024-11-02 11:46:54.525794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.143 [2024-11-02 11:46:54.525824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:46864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.143 [2024-11-02 11:46:54.525838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.143 [2024-11-02 11:46:54.525853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:46872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.143 [2024-11-02 11:46:54.525866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.143 [2024-11-02 11:46:54.525895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:46880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.143 [2024-11-02 11:46:54.525908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.143 [2024-11-02 11:46:54.525922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:46888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.143 [2024-11-02 11:46:54.525935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.143 [2024-11-02 11:46:54.525949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:46896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.143 [2024-11-02 11:46:54.525962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.143 [2024-11-02 11:46:54.525977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:46904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.143 [2024-11-02 11:46:54.525989] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.143 [2024-11-02 11:46:54.526007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:46912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.143 [2024-11-02 11:46:54.526021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.143 [2024-11-02 11:46:54.526035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:46920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.143 [2024-11-02 11:46:54.526047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.143 [2024-11-02 11:46:54.526061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:46928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.143 [2024-11-02 11:46:54.526074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.143 [2024-11-02 11:46:54.526088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:46936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.143 [2024-11-02 11:46:54.526101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.143 [2024-11-02 11:46:54.526116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:46944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.143 [2024-11-02 11:46:54.526128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.143 [2024-11-02 11:46:54.526142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:46952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.143 [2024-11-02 11:46:54.526155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.143 [2024-11-02 11:46:54.526169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:46960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.143 [2024-11-02 11:46:54.526182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.143 [2024-11-02 11:46:54.526195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:46968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.143 [2024-11-02 11:46:54.526208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.143 [2024-11-02 11:46:54.526221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:46976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.143 [2024-11-02 11:46:54.526234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.143 [2024-11-02 11:46:54.526275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:46984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.143 [2024-11-02 11:46:54.526290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.144 [2024-11-02 11:46:54.526305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:46992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.144 [2024-11-02 11:46:54.526319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.144 [2024-11-02 11:46:54.526334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:47000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.144 [2024-11-02 11:46:54.526348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.144 [2024-11-02 11:46:54.526363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:47008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.144 [2024-11-02 11:46:54.526380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.144 [2024-11-02 11:46:54.526396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:47016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.144 [2024-11-02 11:46:54.526410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.144 [2024-11-02 11:46:54.526425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:47024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.144 [2024-11-02 11:46:54.526438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.144 [2024-11-02 11:46:54.526453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:47032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.144 [2024-11-02 11:46:54.526466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.144 [2024-11-02 11:46:54.526482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:47040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.144 [2024-11-02 11:46:54.526495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.144 [2024-11-02 11:46:54.526512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:47048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.144 [2024-11-02 11:46:54.526526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.144 [2024-11-02 11:46:54.526565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:47056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.144 [2024-11-02 11:46:54.526579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.144 [2024-11-02 11:46:54.526593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:47064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.144 [2024-11-02 11:46:54.526605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:34:54.144 [2024-11-02 11:46:54.526634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:47072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.144 [2024-11-02 11:46:54.526647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.144 [2024-11-02 11:46:54.526660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:47080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.144 [2024-11-02 11:46:54.526672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.144 [2024-11-02 11:46:54.526686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:47088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.144 [2024-11-02 11:46:54.526698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.144 [2024-11-02 11:46:54.526711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:47096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.144 [2024-11-02 11:46:54.526723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.144 [2024-11-02 11:46:54.526737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:47104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.144 [2024-11-02 11:46:54.526749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.144 [2024-11-02 11:46:54.526765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:47112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.144 [2024-11-02 11:46:54.526778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.144 [2024-11-02 11:46:54.526791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:47120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.144 [2024-11-02 11:46:54.526803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.144 [2024-11-02 11:46:54.526817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:47128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.144 [2024-11-02 11:46:54.526828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.144 [2024-11-02 11:46:54.526842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:47136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.144 [2024-11-02 11:46:54.526854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.144 [2024-11-02 11:46:54.526867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:47144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.144 [2024-11-02 11:46:54.526879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.144 [2024-11-02 11:46:54.526892] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:47152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.144 [2024-11-02 11:46:54.526904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.144 [2024-11-02 11:46:54.526917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:47160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.144 [2024-11-02 11:46:54.526928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.144 [2024-11-02 11:46:54.526943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:47168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.144 [2024-11-02 11:46:54.526956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.144 [2024-11-02 11:46:54.526969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:47176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.144 [2024-11-02 11:46:54.526981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.144 [2024-11-02 11:46:54.526994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:47184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.144 [2024-11-02 11:46:54.527006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.144 [2024-11-02 11:46:54.527019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:47192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.144 [2024-11-02 11:46:54.527031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.144 [2024-11-02 11:46:54.527045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:47200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.144 [2024-11-02 11:46:54.527057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.144 [2024-11-02 11:46:54.527070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:47208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.144 [2024-11-02 11:46:54.527082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.144 [2024-11-02 11:46:54.527099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:47216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.144 [2024-11-02 11:46:54.527112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.144 [2024-11-02 11:46:54.527125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:47224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.144 [2024-11-02 11:46:54.527152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.144 [2024-11-02 11:46:54.527166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:43 nsid:1 lba:47232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.144 [2024-11-02 11:46:54.527179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.144 [2024-11-02 11:46:54.527192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:47240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.144 [2024-11-02 11:46:54.527204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.144 [2024-11-02 11:46:54.527218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:47248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.144 [2024-11-02 11:46:54.527230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.144 [2024-11-02 11:46:54.527266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:47256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.144 [2024-11-02 11:46:54.527282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.144 [2024-11-02 11:46:54.527297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:47264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.144 [2024-11-02 11:46:54.527312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.144 [2024-11-02 11:46:54.527327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:47272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.144 [2024-11-02 11:46:54.527341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.144 [2024-11-02 11:46:54.527356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:47280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.144 [2024-11-02 11:46:54.527370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.144 [2024-11-02 11:46:54.527385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:47288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.144 [2024-11-02 11:46:54.527399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.144 [2024-11-02 11:46:54.527414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:47296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.144 [2024-11-02 11:46:54.527428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.144 [2024-11-02 11:46:54.527443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:47304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.144 [2024-11-02 11:46:54.527457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.145 [2024-11-02 11:46:54.527473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:46600 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.145 [2024-11-02 11:46:54.527491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.145 [2024-11-02 11:46:54.527508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:46608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.145 [2024-11-02 11:46:54.527522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.145 [2024-11-02 11:46:54.527552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:46616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.145 [2024-11-02 11:46:54.527566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.145 [2024-11-02 11:46:54.527580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:46624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.145 [2024-11-02 11:46:54.527593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.145 [2024-11-02 11:46:54.527622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:46632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.145 [2024-11-02 11:46:54.527634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.145 [2024-11-02 11:46:54.527647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:46640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.145 [2024-11-02 11:46:54.527659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.145 [2024-11-02 11:46:54.527672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:46648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.145 [2024-11-02 11:46:54.527684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.145 [2024-11-02 11:46:54.527698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:47312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.145 [2024-11-02 11:46:54.527709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.145 [2024-11-02 11:46:54.527723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:47320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.145 [2024-11-02 11:46:54.527734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.145 [2024-11-02 11:46:54.527748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:47328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.145 [2024-11-02 11:46:54.527759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.145 [2024-11-02 11:46:54.527773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:47336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.145 
[2024-11-02 11:46:54.527785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.145 [2024-11-02 11:46:54.527798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:47344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.145 [2024-11-02 11:46:54.527810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.145 [2024-11-02 11:46:54.527823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:47352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.145 [2024-11-02 11:46:54.527835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.145 [2024-11-02 11:46:54.527852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:47360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.145 [2024-11-02 11:46:54.527865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.145 [2024-11-02 11:46:54.527878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:47368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.145 [2024-11-02 11:46:54.527891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.145 [2024-11-02 11:46:54.527904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:47376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.145 [2024-11-02 11:46:54.527916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.145 [2024-11-02 11:46:54.527929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:47384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.145 [2024-11-02 11:46:54.527941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.145 [2024-11-02 11:46:54.527954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:47392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.145 [2024-11-02 11:46:54.527966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.145 [2024-11-02 11:46:54.527979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:47400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.145 [2024-11-02 11:46:54.527992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.145 [2024-11-02 11:46:54.528005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:47408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.145 [2024-11-02 11:46:54.528017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.145 [2024-11-02 11:46:54.528030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:47416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.145 [2024-11-02 11:46:54.528042] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.145 [2024-11-02 11:46:54.528056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:47424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.145 [2024-11-02 11:46:54.528067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.145 [2024-11-02 11:46:54.528080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:47432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.145 [2024-11-02 11:46:54.528092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.145 [2024-11-02 11:46:54.528105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:47440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.145 [2024-11-02 11:46:54.528117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.145 [2024-11-02 11:46:54.528145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:47448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.145 [2024-11-02 11:46:54.528158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.145 [2024-11-02 11:46:54.528171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:47456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.145 [2024-11-02 11:46:54.528187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.145 [2024-11-02 11:46:54.528201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:47464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.145 [2024-11-02 11:46:54.528214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.145 [2024-11-02 11:46:54.528229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:47472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.145 [2024-11-02 11:46:54.528264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.145 [2024-11-02 11:46:54.528282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:47480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.145 [2024-11-02 11:46:54.528295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.145 [2024-11-02 11:46:54.528311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:47488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.145 [2024-11-02 11:46:54.528325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.145 [2024-11-02 11:46:54.528339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:47496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.145 [2024-11-02 11:46:54.528353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.145 [2024-11-02 11:46:54.528367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:47504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.145 [2024-11-02 11:46:54.528381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.145 [2024-11-02 11:46:54.528396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:47512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.145 [2024-11-02 11:46:54.528409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.145 [2024-11-02 11:46:54.528425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:47520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.145 [2024-11-02 11:46:54.528438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.145 [2024-11-02 11:46:54.528454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:47528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.145 [2024-11-02 11:46:54.528468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.145 [2024-11-02 11:46:54.528483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:47536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.145 [2024-11-02 11:46:54.528496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.145 [2024-11-02 11:46:54.528511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:47544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.145 [2024-11-02 11:46:54.528525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.145 [2024-11-02 11:46:54.528553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:47552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.145 [2024-11-02 11:46:54.528572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.145 [2024-11-02 11:46:54.528586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:47560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.145 [2024-11-02 11:46:54.528604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.145 [2024-11-02 11:46:54.528618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:47568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.146 [2024-11-02 11:46:54.528630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.146 [2024-11-02 11:46:54.528643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:47576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.146 [2024-11-02 11:46:54.528655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:34:54.146 [2024-11-02 11:46:54.528668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:47584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.146 [2024-11-02 11:46:54.528680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.146 [2024-11-02 11:46:54.528693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:47592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.146 [2024-11-02 11:46:54.528706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.146 [2024-11-02 11:46:54.528719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:47600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.146 [2024-11-02 11:46:54.528731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.146 [2024-11-02 11:46:54.528745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:47608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.146 [2024-11-02 11:46:54.528757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.146 [2024-11-02 11:46:54.528770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:46656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.146 [2024-11-02 11:46:54.528782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.146 [2024-11-02 11:46:54.528795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:46664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.146 [2024-11-02 11:46:54.528807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.146 [2024-11-02 11:46:54.528820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:46672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.146 [2024-11-02 11:46:54.528832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.146 [2024-11-02 11:46:54.528845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:46680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.146 [2024-11-02 11:46:54.528857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.146 [2024-11-02 11:46:54.528870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:46688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.146 [2024-11-02 11:46:54.528882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.146 [2024-11-02 11:46:54.528895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:46696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.146 [2024-11-02 11:46:54.528907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.146 [2024-11-02 
11:46:54.528924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:46704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.146 [2024-11-02 11:46:54.528937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.146 [2024-11-02 11:46:54.528950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:46712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.146 [2024-11-02 11:46:54.528962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.146 [2024-11-02 11:46:54.528975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:46720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.146 [2024-11-02 11:46:54.528986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.146 [2024-11-02 11:46:54.528999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:46728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.146 [2024-11-02 11:46:54.529011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.146 [2024-11-02 11:46:54.529024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:46736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.146 [2024-11-02 11:46:54.529035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.146 [2024-11-02 11:46:54.529048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:46744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.146 [2024-11-02 11:46:54.529060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.146 [2024-11-02 11:46:54.529073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:46752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.146 [2024-11-02 11:46:54.529085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.146 [2024-11-02 11:46:54.529098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.146 [2024-11-02 11:46:54.529110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.146 [2024-11-02 11:46:54.529122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:46768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.146 [2024-11-02 11:46:54.529148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.146 [2024-11-02 11:46:54.529162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:47616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.146 [2024-11-02 11:46:54.529175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.146 [2024-11-02 11:46:54.529187] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x221d9f0 is same with the state(6) to be set 00:34:54.146 [2024-11-02 11:46:54.529201] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:54.146 [2024-11-02 11:46:54.529211] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:54.146 [2024-11-02 11:46:54.529221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46776 len:8 PRP1 0x0 PRP2 0x0 00:34:54.146 [2024-11-02 11:46:54.529233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.146 [2024-11-02 11:46:54.529375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:54.146 [2024-11-02 11:46:54.529402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.146 [2024-11-02 11:46:54.529417] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:54.146 [2024-11-02 11:46:54.529431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.146 [2024-11-02 11:46:54.529445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:54.146 [2024-11-02 11:46:54.529458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.146 [2024-11-02 11:46:54.529471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:54.146 [2024-11-02 11:46:54.529484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.146 [2024-11-02 11:46:54.529497] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:54.146 [2024-11-02 11:46:54.532647] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:54.146 [2024-11-02 11:46:54.532680] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:54.146 [2024-11-02 11:46:54.533535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.146 [2024-11-02 11:46:54.533588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:54.146 [2024-11-02 11:46:54.533608] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:54.146 [2024-11-02 11:46:54.533848] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:54.146 [2024-11-02 11:46:54.534089] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:54.146 [2024-11-02 11:46:54.534112] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:54.146 [2024-11-02 11:46:54.534128] 
nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:54.146 [2024-11-02 11:46:54.537919] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:54.406 [2024-11-02 11:46:54.546738] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:54.406 [2024-11-02 11:46:54.547286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.406 [2024-11-02 11:46:54.547320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:54.406 [2024-11-02 11:46:54.547339] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:54.406 [2024-11-02 11:46:54.547576] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:54.406 [2024-11-02 11:46:54.547825] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:54.406 [2024-11-02 11:46:54.547848] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:54.406 [2024-11-02 11:46:54.547863] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:54.406 [2024-11-02 11:46:54.551515] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:54.406 [2024-11-02 11:46:54.560789] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:54.406 [2024-11-02 11:46:54.561288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.406 [2024-11-02 11:46:54.561321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:54.406 [2024-11-02 11:46:54.561339] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:54.406 [2024-11-02 11:46:54.561577] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:54.406 [2024-11-02 11:46:54.561818] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:54.406 [2024-11-02 11:46:54.561841] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:54.406 [2024-11-02 11:46:54.561856] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:54.406 [2024-11-02 11:46:54.565416] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:54.406 [2024-11-02 11:46:54.574652] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:54.406 [2024-11-02 11:46:54.575059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.406 [2024-11-02 11:46:54.575090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:54.406 [2024-11-02 11:46:54.575108] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:54.406 [2024-11-02 11:46:54.575358] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:54.406 [2024-11-02 11:46:54.575601] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:54.406 [2024-11-02 11:46:54.575624] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:54.406 [2024-11-02 11:46:54.575638] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:54.406 [2024-11-02 11:46:54.579188] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:54.406 [2024-11-02 11:46:54.588619] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:54.406 [2024-11-02 11:46:54.589093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.406 [2024-11-02 11:46:54.589125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:54.406 [2024-11-02 11:46:54.589142] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:54.406 [2024-11-02 11:46:54.589420] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:54.406 [2024-11-02 11:46:54.589677] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:54.406 [2024-11-02 11:46:54.589701] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:54.406 [2024-11-02 11:46:54.589715] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:54.406 [2024-11-02 11:46:54.593273] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:54.406 [2024-11-02 11:46:54.602481] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:54.406 [2024-11-02 11:46:54.602908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.407 [2024-11-02 11:46:54.602940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:54.407 [2024-11-02 11:46:54.602964] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:54.407 [2024-11-02 11:46:54.603203] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:54.407 [2024-11-02 11:46:54.603457] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:54.407 [2024-11-02 11:46:54.603481] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:54.407 [2024-11-02 11:46:54.603496] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:54.407 [2024-11-02 11:46:54.607047] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:54.407 [2024-11-02 11:46:54.616482] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:54.407 [2024-11-02 11:46:54.616934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.407 [2024-11-02 11:46:54.616960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:54.407 [2024-11-02 11:46:54.616991] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:54.407 [2024-11-02 11:46:54.617245] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:54.407 [2024-11-02 11:46:54.617496] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:54.407 [2024-11-02 11:46:54.617519] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:54.407 [2024-11-02 11:46:54.617535] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:54.407 [2024-11-02 11:46:54.621079] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:54.407 [2024-11-02 11:46:54.630296] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:54.407 [2024-11-02 11:46:54.630720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.407 [2024-11-02 11:46:54.630751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:54.407 [2024-11-02 11:46:54.630769] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:54.407 [2024-11-02 11:46:54.631005] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:54.407 [2024-11-02 11:46:54.631246] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:54.407 [2024-11-02 11:46:54.631292] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:54.407 [2024-11-02 11:46:54.631308] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:54.407 [2024-11-02 11:46:54.634853] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:54.407 [2024-11-02 11:46:54.644263] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:54.407 [2024-11-02 11:46:54.644701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.407 [2024-11-02 11:46:54.644743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:54.407 [2024-11-02 11:46:54.644759] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:54.407 [2024-11-02 11:46:54.645030] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:54.407 [2024-11-02 11:46:54.645289] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:54.407 [2024-11-02 11:46:54.645313] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:54.407 [2024-11-02 11:46:54.645328] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:54.407 [2024-11-02 11:46:54.648870] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:54.407 [2024-11-02 11:46:54.658103] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:54.407 [2024-11-02 11:46:54.658538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.407 [2024-11-02 11:46:54.658570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:54.407 [2024-11-02 11:46:54.658588] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:54.407 [2024-11-02 11:46:54.658824] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:54.407 [2024-11-02 11:46:54.659065] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:54.407 [2024-11-02 11:46:54.659088] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:54.407 [2024-11-02 11:46:54.659103] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:54.407 [2024-11-02 11:46:54.662674] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:54.407 [2024-11-02 11:46:54.672094] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:54.407 [2024-11-02 11:46:54.672519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.407 [2024-11-02 11:46:54.672547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:54.407 [2024-11-02 11:46:54.672563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:54.407 [2024-11-02 11:46:54.672807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:54.407 [2024-11-02 11:46:54.673049] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:54.407 [2024-11-02 11:46:54.673073] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:54.407 [2024-11-02 11:46:54.673087] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:54.407 [2024-11-02 11:46:54.676649] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:54.407 [2024-11-02 11:46:54.686060] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:54.407 [2024-11-02 11:46:54.686502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.407 [2024-11-02 11:46:54.686533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:54.407 [2024-11-02 11:46:54.686551] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:54.407 [2024-11-02 11:46:54.686787] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:54.407 [2024-11-02 11:46:54.687028] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:54.407 [2024-11-02 11:46:54.687051] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:54.407 [2024-11-02 11:46:54.687066] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:54.407 [2024-11-02 11:46:54.690631] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:54.407 [2024-11-02 11:46:54.700048] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:54.407 [2024-11-02 11:46:54.700489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.407 [2024-11-02 11:46:54.700521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:54.407 [2024-11-02 11:46:54.700539] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:54.407 [2024-11-02 11:46:54.700776] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:54.407 [2024-11-02 11:46:54.701018] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:54.407 [2024-11-02 11:46:54.701040] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:54.407 [2024-11-02 11:46:54.701055] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:54.407 [2024-11-02 11:46:54.704612] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:54.407 [2024-11-02 11:46:54.714036] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:54.407 [2024-11-02 11:46:54.714475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.407 [2024-11-02 11:46:54.714507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:54.407 [2024-11-02 11:46:54.714524] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:54.407 [2024-11-02 11:46:54.714761] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:54.407 [2024-11-02 11:46:54.715004] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:54.407 [2024-11-02 11:46:54.715026] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:54.407 [2024-11-02 11:46:54.715041] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:54.407 [2024-11-02 11:46:54.718625] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:54.407 [2024-11-02 11:46:54.728041] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:54.407 [2024-11-02 11:46:54.728480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.407 [2024-11-02 11:46:54.728512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:54.407 [2024-11-02 11:46:54.728530] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:54.407 [2024-11-02 11:46:54.728767] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:54.407 [2024-11-02 11:46:54.729008] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:54.407 [2024-11-02 11:46:54.729031] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:54.407 [2024-11-02 11:46:54.729045] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:54.407 [2024-11-02 11:46:54.732602] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:54.408 [2024-11-02 11:46:54.742018] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:54.408 [2024-11-02 11:46:54.742453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.408 [2024-11-02 11:46:54.742484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:54.408 [2024-11-02 11:46:54.742502] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:54.408 [2024-11-02 11:46:54.742739] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:54.408 [2024-11-02 11:46:54.742979] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:54.408 [2024-11-02 11:46:54.743002] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:54.408 [2024-11-02 11:46:54.743017] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:54.408 [2024-11-02 11:46:54.746573] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:54.408 [2024-11-02 11:46:54.755992] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:54.408 [2024-11-02 11:46:54.756419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.408 [2024-11-02 11:46:54.756451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:54.408 [2024-11-02 11:46:54.756468] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:54.408 [2024-11-02 11:46:54.756705] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:54.408 [2024-11-02 11:46:54.756946] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:54.408 [2024-11-02 11:46:54.756969] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:54.408 [2024-11-02 11:46:54.756984] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:54.408 [2024-11-02 11:46:54.760569] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:54.408 [2024-11-02 11:46:54.769995] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:54.408 [2024-11-02 11:46:54.770399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.408 [2024-11-02 11:46:54.770432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:54.408 [2024-11-02 11:46:54.770450] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:54.408 [2024-11-02 11:46:54.770687] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:54.408 [2024-11-02 11:46:54.770928] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:54.408 [2024-11-02 11:46:54.770951] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:54.408 [2024-11-02 11:46:54.770966] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:54.408 [2024-11-02 11:46:54.774527] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:54.408 [2024-11-02 11:46:54.783947] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:54.408 [2024-11-02 11:46:54.784332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.408 [2024-11-02 11:46:54.784365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:54.408 [2024-11-02 11:46:54.784388] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:54.408 [2024-11-02 11:46:54.784626] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:54.408 [2024-11-02 11:46:54.784869] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:54.408 [2024-11-02 11:46:54.784892] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:54.408 [2024-11-02 11:46:54.784908] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:54.408 [2024-11-02 11:46:54.788463] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:54.408 [2024-11-02 11:46:54.797883] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:54.408 [2024-11-02 11:46:54.798299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.408 [2024-11-02 11:46:54.798331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:54.408 [2024-11-02 11:46:54.798349] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:54.408 [2024-11-02 11:46:54.798586] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:54.408 [2024-11-02 11:46:54.798827] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:54.408 [2024-11-02 11:46:54.798851] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:54.408 [2024-11-02 11:46:54.798866] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:54.408 [2024-11-02 11:46:54.802418] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:54.669 [2024-11-02 11:46:54.811865] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:54.669 [2024-11-02 11:46:54.812278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.669 [2024-11-02 11:46:54.812311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:54.669 [2024-11-02 11:46:54.812329] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:54.669 [2024-11-02 11:46:54.812567] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:54.669 [2024-11-02 11:46:54.812813] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:54.669 [2024-11-02 11:46:54.812836] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:54.669 [2024-11-02 11:46:54.812851] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:54.669 [2024-11-02 11:46:54.816427] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:54.669 [2024-11-02 11:46:54.825856] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:54.669 [2024-11-02 11:46:54.826263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.669 [2024-11-02 11:46:54.826295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:54.670 [2024-11-02 11:46:54.826313] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:54.670 [2024-11-02 11:46:54.826551] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:54.670 [2024-11-02 11:46:54.826798] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:54.670 [2024-11-02 11:46:54.826821] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:54.670 [2024-11-02 11:46:54.826836] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:54.670 [2024-11-02 11:46:54.830391] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:54.670 [2024-11-02 11:46:54.839808] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:54.670 [2024-11-02 11:46:54.840229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.670 [2024-11-02 11:46:54.840268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:54.670 [2024-11-02 11:46:54.840288] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:54.670 [2024-11-02 11:46:54.840525] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:54.670 [2024-11-02 11:46:54.840767] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:54.670 [2024-11-02 11:46:54.840790] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:54.670 [2024-11-02 11:46:54.840804] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:54.670 [2024-11-02 11:46:54.844356] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:54.670 [2024-11-02 11:46:54.853773] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:54.670 [2024-11-02 11:46:54.854172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.670 [2024-11-02 11:46:54.854203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:54.670 [2024-11-02 11:46:54.854221] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:54.670 [2024-11-02 11:46:54.854469] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:54.670 [2024-11-02 11:46:54.854713] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:54.670 [2024-11-02 11:46:54.854736] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:54.670 [2024-11-02 11:46:54.854751] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:54.670 [2024-11-02 11:46:54.858305] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:54.670 7403.00 IOPS, 28.92 MiB/s [2024-11-02T10:46:55.072Z] [2024-11-02 11:46:54.869497] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:54.670 [2024-11-02 11:46:54.869935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.670 [2024-11-02 11:46:54.869967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:54.670 [2024-11-02 11:46:54.869985] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:54.670 [2024-11-02 11:46:54.870222] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:54.670 [2024-11-02 11:46:54.870479] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:54.670 [2024-11-02 11:46:54.870503] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:54.670 [2024-11-02 11:46:54.870524] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:54.670 [2024-11-02 11:46:54.874073] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:54.670 [2024-11-02 11:46:54.883548] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:54.670 [2024-11-02 11:46:54.884018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.670 [2024-11-02 11:46:54.884050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:54.670 [2024-11-02 11:46:54.884068] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:54.670 [2024-11-02 11:46:54.884315] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:54.670 [2024-11-02 11:46:54.884562] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:54.670 [2024-11-02 11:46:54.884585] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:54.670 [2024-11-02 11:46:54.884604] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:54.670 [2024-11-02 11:46:54.888165] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:54.670 [2024-11-02 11:46:54.897418] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:54.670 [2024-11-02 11:46:54.897820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.670 [2024-11-02 11:46:54.897852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:54.670 [2024-11-02 11:46:54.897870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:54.670 [2024-11-02 11:46:54.898107] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:54.670 [2024-11-02 11:46:54.898360] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:54.670 [2024-11-02 11:46:54.898384] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:54.670 [2024-11-02 11:46:54.898399] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:54.670 [2024-11-02 11:46:54.901955] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:54.670 [2024-11-02 11:46:54.911407] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:54.670 [2024-11-02 11:46:54.911837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.670 [2024-11-02 11:46:54.911868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:54.670 [2024-11-02 11:46:54.911886] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:54.670 [2024-11-02 11:46:54.912123] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:54.670 [2024-11-02 11:46:54.912376] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:54.670 [2024-11-02 11:46:54.912400] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:54.670 [2024-11-02 11:46:54.912415] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:54.670 [2024-11-02 11:46:54.915971] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:54.670 [2024-11-02 11:46:54.925430] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:54.670 [2024-11-02 11:46:54.925853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.670 [2024-11-02 11:46:54.925886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:54.670 [2024-11-02 11:46:54.925904] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:54.670 [2024-11-02 11:46:54.926140] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:54.670 [2024-11-02 11:46:54.926393] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:54.670 [2024-11-02 11:46:54.926417] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:54.670 [2024-11-02 11:46:54.926431] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:54.670 [2024-11-02 11:46:54.929984] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:54.670 [2024-11-02 11:46:54.939432] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:54.670 [2024-11-02 11:46:54.939827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.670 [2024-11-02 11:46:54.939857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:54.670 [2024-11-02 11:46:54.939875] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:54.670 [2024-11-02 11:46:54.940111] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:54.670 [2024-11-02 11:46:54.940364] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:54.670 [2024-11-02 11:46:54.940389] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:54.670 [2024-11-02 11:46:54.940404] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:54.670 [2024-11-02 11:46:54.943961] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:54.670 [2024-11-02 11:46:54.953407] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:54.670 [2024-11-02 11:46:54.953804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.670 [2024-11-02 11:46:54.953835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:54.670 [2024-11-02 11:46:54.953853] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:54.670 [2024-11-02 11:46:54.954089] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:54.670 [2024-11-02 11:46:54.954344] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:54.670 [2024-11-02 11:46:54.954368] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:54.670 [2024-11-02 11:46:54.954383] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:54.670 [2024-11-02 11:46:54.957934] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:54.670 [2024-11-02 11:46:54.967408] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:54.670 [2024-11-02 11:46:54.967831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.671 [2024-11-02 11:46:54.967862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:54.671 [2024-11-02 11:46:54.967886] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:54.671 [2024-11-02 11:46:54.968123] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:54.671 [2024-11-02 11:46:54.968377] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:54.671 [2024-11-02 11:46:54.968401] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:54.671 [2024-11-02 11:46:54.968416] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:54.671 [2024-11-02 11:46:54.971970] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:54.671 [2024-11-02 11:46:54.981429] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:54.671 [2024-11-02 11:46:54.981828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.671 [2024-11-02 11:46:54.981860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:54.671 [2024-11-02 11:46:54.981878] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:54.671 [2024-11-02 11:46:54.982114] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:54.671 [2024-11-02 11:46:54.982370] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:54.671 [2024-11-02 11:46:54.982394] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:54.671 [2024-11-02 11:46:54.982409] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:54.671 [2024-11-02 11:46:54.985960] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:54.671 [2024-11-02 11:46:54.995414] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:54.671 [2024-11-02 11:46:54.995836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.671 [2024-11-02 11:46:54.995866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:54.671 [2024-11-02 11:46:54.995883] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:54.671 [2024-11-02 11:46:54.996120] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:54.671 [2024-11-02 11:46:54.996373] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:54.671 [2024-11-02 11:46:54.996397] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:54.671 [2024-11-02 11:46:54.996412] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:54.671 [2024-11-02 11:46:54.999959] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:54.671 [2024-11-02 11:46:55.009401] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:54.671 [2024-11-02 11:46:55.009823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.671 [2024-11-02 11:46:55.009854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:54.671 [2024-11-02 11:46:55.009872] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:54.671 [2024-11-02 11:46:55.010109] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:54.671 [2024-11-02 11:46:55.010373] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:54.671 [2024-11-02 11:46:55.010397] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:54.671 [2024-11-02 11:46:55.010412] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:54.671 [2024-11-02 11:46:55.013966] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:54.671 [2024-11-02 11:46:55.023403] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:54.671 [2024-11-02 11:46:55.023821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.671 [2024-11-02 11:46:55.023852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:54.671 [2024-11-02 11:46:55.023870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:54.671 [2024-11-02 11:46:55.024106] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:54.671 [2024-11-02 11:46:55.024360] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:54.671 [2024-11-02 11:46:55.024385] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:54.671 [2024-11-02 11:46:55.024399] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:54.671 [2024-11-02 11:46:55.027948] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:54.671 [2024-11-02 11:46:55.037416] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:54.671 [2024-11-02 11:46:55.037842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.671 [2024-11-02 11:46:55.037874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:54.671 [2024-11-02 11:46:55.037891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:54.671 [2024-11-02 11:46:55.038128] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:54.671 [2024-11-02 11:46:55.038384] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:54.671 [2024-11-02 11:46:55.038409] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:54.671 [2024-11-02 11:46:55.038424] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:54.671 [2024-11-02 11:46:55.041975] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:54.671 [2024-11-02 11:46:55.051412] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:54.671 [2024-11-02 11:46:55.051840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.671 [2024-11-02 11:46:55.051872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:54.671 [2024-11-02 11:46:55.051890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:54.671 [2024-11-02 11:46:55.052127] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:54.671 [2024-11-02 11:46:55.052380] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:54.671 [2024-11-02 11:46:55.052405] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:54.671 [2024-11-02 11:46:55.052425] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:54.671 [2024-11-02 11:46:55.055973] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:54.671 [2024-11-02 11:46:55.065467] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:54.671 [2024-11-02 11:46:55.065902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.671 [2024-11-02 11:46:55.065934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:54.671 [2024-11-02 11:46:55.065952] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:54.671 [2024-11-02 11:46:55.066189] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:54.671 [2024-11-02 11:46:55.066443] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:54.671 [2024-11-02 11:46:55.066467] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:54.671 [2024-11-02 11:46:55.066483] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:54.671 [2024-11-02 11:46:55.070137] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:54.933 [2024-11-02 11:46:55.079486] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:54.933 [2024-11-02 11:46:55.079912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.933 [2024-11-02 11:46:55.079943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:54.933 [2024-11-02 11:46:55.079962] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:54.933 [2024-11-02 11:46:55.080199] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:54.933 [2024-11-02 11:46:55.080453] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:54.933 [2024-11-02 11:46:55.080477] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:54.933 [2024-11-02 11:46:55.080492] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:54.933 [2024-11-02 11:46:55.084041] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:54.933 [2024-11-02 11:46:55.093469] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:54.933 [2024-11-02 11:46:55.093890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.933 [2024-11-02 11:46:55.093921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:54.933 [2024-11-02 11:46:55.093939] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:54.933 [2024-11-02 11:46:55.094176] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:54.933 [2024-11-02 11:46:55.094430] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:54.933 [2024-11-02 11:46:55.094454] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:54.933 [2024-11-02 11:46:55.094468] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:54.933 [2024-11-02 11:46:55.098018] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:54.933 [2024-11-02 11:46:55.107376] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:54.933 [2024-11-02 11:46:55.107802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.933 [2024-11-02 11:46:55.107833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:54.933 [2024-11-02 11:46:55.107851] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:54.933 [2024-11-02 11:46:55.108088] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:54.933 [2024-11-02 11:46:55.108344] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:54.933 [2024-11-02 11:46:55.108369] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:54.933 [2024-11-02 11:46:55.108384] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:54.933 [2024-11-02 11:46:55.111933] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:54.933 [2024-11-02 11:46:55.121373] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:54.933 [2024-11-02 11:46:55.121808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.933 [2024-11-02 11:46:55.121839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:54.933 [2024-11-02 11:46:55.121856] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:54.933 [2024-11-02 11:46:55.122093] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:54.933 [2024-11-02 11:46:55.122347] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:54.933 [2024-11-02 11:46:55.122371] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:54.933 [2024-11-02 11:46:55.122386] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:54.934 [2024-11-02 11:46:55.125932] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:54.934 [2024-11-02 11:46:55.135365] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:54.934 [2024-11-02 11:46:55.135786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.934 [2024-11-02 11:46:55.135818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:54.934 [2024-11-02 11:46:55.135836] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:54.934 [2024-11-02 11:46:55.136073] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:54.934 [2024-11-02 11:46:55.136326] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:54.934 [2024-11-02 11:46:55.136350] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:54.934 [2024-11-02 11:46:55.136365] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:54.934 [2024-11-02 11:46:55.139910] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:54.934 [2024-11-02 11:46:55.149336] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:54.934 [2024-11-02 11:46:55.149765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.934 [2024-11-02 11:46:55.149796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:54.934 [2024-11-02 11:46:55.149819] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:54.934 [2024-11-02 11:46:55.150057] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:54.934 [2024-11-02 11:46:55.150311] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:54.934 [2024-11-02 11:46:55.150335] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:54.934 [2024-11-02 11:46:55.150350] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:54.934 [2024-11-02 11:46:55.153900] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:54.934 [2024-11-02 11:46:55.163348] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:54.934 [2024-11-02 11:46:55.163742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.934 [2024-11-02 11:46:55.163773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:54.934 [2024-11-02 11:46:55.163791] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:54.934 [2024-11-02 11:46:55.164028] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:54.934 [2024-11-02 11:46:55.164281] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:54.934 [2024-11-02 11:46:55.164305] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:54.934 [2024-11-02 11:46:55.164320] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:54.934 [2024-11-02 11:46:55.167888] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:54.934 [2024-11-02 11:46:55.177324] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:54.934 [2024-11-02 11:46:55.177747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.934 [2024-11-02 11:46:55.177778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:54.934 [2024-11-02 11:46:55.177795] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:54.934 [2024-11-02 11:46:55.178032] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:54.934 [2024-11-02 11:46:55.178286] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:54.934 [2024-11-02 11:46:55.178310] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:54.934 [2024-11-02 11:46:55.178325] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:54.934 [2024-11-02 11:46:55.181869] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:54.934 [2024-11-02 11:46:55.191306] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:54.934 [2024-11-02 11:46:55.191702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.934 [2024-11-02 11:46:55.191732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:54.934 [2024-11-02 11:46:55.191750] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:54.934 [2024-11-02 11:46:55.191986] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:54.934 [2024-11-02 11:46:55.192234] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:54.934 [2024-11-02 11:46:55.192267] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:54.934 [2024-11-02 11:46:55.192284] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:54.934 [2024-11-02 11:46:55.195835] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:54.934 [2024-11-02 11:46:55.205250] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:54.934 [2024-11-02 11:46:55.205678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.934 [2024-11-02 11:46:55.205708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:54.934 [2024-11-02 11:46:55.205725] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:54.934 [2024-11-02 11:46:55.205962] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:54.934 [2024-11-02 11:46:55.206203] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:54.934 [2024-11-02 11:46:55.206226] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:54.934 [2024-11-02 11:46:55.206241] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:54.934 [2024-11-02 11:46:55.209804] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:54.934 [2024-11-02 11:46:55.219225] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:54.934 [2024-11-02 11:46:55.219648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.934 [2024-11-02 11:46:55.219680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:54.934 [2024-11-02 11:46:55.219698] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:54.934 [2024-11-02 11:46:55.219934] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:54.934 [2024-11-02 11:46:55.220175] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:54.934 [2024-11-02 11:46:55.220198] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:54.934 [2024-11-02 11:46:55.220213] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:54.934 [2024-11-02 11:46:55.223773] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:54.934 [2024-11-02 11:46:55.233198] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:54.934 [2024-11-02 11:46:55.233601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.934 [2024-11-02 11:46:55.233634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:54.934 [2024-11-02 11:46:55.233652] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:54.934 [2024-11-02 11:46:55.233889] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:54.934 [2024-11-02 11:46:55.234130] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:54.934 [2024-11-02 11:46:55.234153] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:54.934 [2024-11-02 11:46:55.234174] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:54.934 [2024-11-02 11:46:55.237735] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:54.934 [2024-11-02 11:46:55.247161] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:54.934 [2024-11-02 11:46:55.247589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.934 [2024-11-02 11:46:55.247620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:54.934 [2024-11-02 11:46:55.247637] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:54.934 [2024-11-02 11:46:55.247874] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:54.934 [2024-11-02 11:46:55.248116] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:54.934 [2024-11-02 11:46:55.248139] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:54.934 [2024-11-02 11:46:55.248153] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:54.934 [2024-11-02 11:46:55.251710] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:54.934 [2024-11-02 11:46:55.261141] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:54.934 [2024-11-02 11:46:55.261588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.934 [2024-11-02 11:46:55.261619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:54.934 [2024-11-02 11:46:55.261638] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:54.934 [2024-11-02 11:46:55.261875] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:54.934 [2024-11-02 11:46:55.262128] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:54.934 [2024-11-02 11:46:55.262152] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:54.934 [2024-11-02 11:46:55.262167] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:54.935 [2024-11-02 11:46:55.265727] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:54.935 [2024-11-02 11:46:55.274969] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:54.935 [2024-11-02 11:46:55.275357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.935 [2024-11-02 11:46:55.275389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:54.935 [2024-11-02 11:46:55.275407] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:54.935 [2024-11-02 11:46:55.275644] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:54.935 [2024-11-02 11:46:55.275885] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:54.935 [2024-11-02 11:46:55.275908] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:54.935 [2024-11-02 11:46:55.275923] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:54.935 [2024-11-02 11:46:55.279488] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:54.935 [2024-11-02 11:46:55.288917] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:54.935 [2024-11-02 11:46:55.289349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.935 [2024-11-02 11:46:55.289380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:54.935 [2024-11-02 11:46:55.289399] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:54.935 [2024-11-02 11:46:55.289636] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:54.935 [2024-11-02 11:46:55.289878] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:54.935 [2024-11-02 11:46:55.289901] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:54.935 [2024-11-02 11:46:55.289917] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:54.935 [2024-11-02 11:46:55.293508] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:54.935 [2024-11-02 11:46:55.302940] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:54.935 [2024-11-02 11:46:55.303344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.935 [2024-11-02 11:46:55.303375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:54.935 [2024-11-02 11:46:55.303393] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:54.935 [2024-11-02 11:46:55.303630] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:54.935 [2024-11-02 11:46:55.303872] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:54.935 [2024-11-02 11:46:55.303895] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:54.935 [2024-11-02 11:46:55.303910] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:54.935 [2024-11-02 11:46:55.307466] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:54.935 [2024-11-02 11:46:55.316891] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:54.935 [2024-11-02 11:46:55.317286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.935 [2024-11-02 11:46:55.317318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:54.935 [2024-11-02 11:46:55.317346] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:54.935 [2024-11-02 11:46:55.317587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:54.935 [2024-11-02 11:46:55.317829] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:54.935 [2024-11-02 11:46:55.317852] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:54.935 [2024-11-02 11:46:55.317867] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:54.935 [2024-11-02 11:46:55.321436] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:54.935 [2024-11-02 11:46:55.330971] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:54.935 [2024-11-02 11:46:55.331353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.935 [2024-11-02 11:46:55.331385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:54.935 [2024-11-02 11:46:55.331410] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:54.935 [2024-11-02 11:46:55.331649] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:54.935 [2024-11-02 11:46:55.331890] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:54.935 [2024-11-02 11:46:55.331913] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:54.935 [2024-11-02 11:46:55.331928] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:55.195 [2024-11-02 11:46:55.335583] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:55.195 [2024-11-02 11:46:55.344911] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:55.195 [2024-11-02 11:46:55.345322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.195 [2024-11-02 11:46:55.345354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:55.195 [2024-11-02 11:46:55.345372] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:55.195 [2024-11-02 11:46:55.345610] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:55.195 [2024-11-02 11:46:55.345852] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:55.195 [2024-11-02 11:46:55.345875] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:55.195 [2024-11-02 11:46:55.345890] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:55.195 [2024-11-02 11:46:55.349453] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:55.195 [2024-11-02 11:46:55.358887] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:55.195 [2024-11-02 11:46:55.359317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.195 [2024-11-02 11:46:55.359349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:55.195 [2024-11-02 11:46:55.359367] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:55.195 [2024-11-02 11:46:55.359605] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:55.195 [2024-11-02 11:46:55.359846] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:55.195 [2024-11-02 11:46:55.359869] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:55.195 [2024-11-02 11:46:55.359884] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:55.195 [2024-11-02 11:46:55.363457] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:55.195 [2024-11-02 11:46:55.372879] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:55.195 [2024-11-02 11:46:55.373308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.195 [2024-11-02 11:46:55.373341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:55.195 [2024-11-02 11:46:55.373359] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:55.195 [2024-11-02 11:46:55.373596] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:55.195 [2024-11-02 11:46:55.373846] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:55.195 [2024-11-02 11:46:55.373869] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:55.195 [2024-11-02 11:46:55.373884] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:55.195 [2024-11-02 11:46:55.377449] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:55.195 [2024-11-02 11:46:55.386873] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:55.195 [2024-11-02 11:46:55.387281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.195 [2024-11-02 11:46:55.387313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:55.195 [2024-11-02 11:46:55.387331] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:55.195 [2024-11-02 11:46:55.387568] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:55.195 [2024-11-02 11:46:55.387809] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:55.195 [2024-11-02 11:46:55.387832] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:55.195 [2024-11-02 11:46:55.387848] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:55.195 [2024-11-02 11:46:55.391407] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:55.195 [2024-11-02 11:46:55.400826] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:55.195 [2024-11-02 11:46:55.401220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.195 [2024-11-02 11:46:55.401251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:55.195 [2024-11-02 11:46:55.401281] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:55.195 [2024-11-02 11:46:55.401519] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:55.195 [2024-11-02 11:46:55.401759] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:55.195 [2024-11-02 11:46:55.401782] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:55.195 [2024-11-02 11:46:55.401797] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:55.195 [2024-11-02 11:46:55.405353] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:55.195 [2024-11-02 11:46:55.414773] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:55.195 [2024-11-02 11:46:55.415190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.195 [2024-11-02 11:46:55.415221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:55.195 [2024-11-02 11:46:55.415239] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:55.195 [2024-11-02 11:46:55.415486] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:55.195 [2024-11-02 11:46:55.415728] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:55.195 [2024-11-02 11:46:55.415751] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:55.195 [2024-11-02 11:46:55.415772] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:55.195 [2024-11-02 11:46:55.419329] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:55.195 [2024-11-02 11:46:55.428758] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:55.195 [2024-11-02 11:46:55.429190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.195 [2024-11-02 11:46:55.429222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:55.196 [2024-11-02 11:46:55.429239] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:55.196 [2024-11-02 11:46:55.429486] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:55.196 [2024-11-02 11:46:55.429728] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:55.196 [2024-11-02 11:46:55.429751] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:55.196 [2024-11-02 11:46:55.429766] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:55.196 [2024-11-02 11:46:55.433324] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:55.196 [2024-11-02 11:46:55.442752] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:55.196 [2024-11-02 11:46:55.443146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.196 [2024-11-02 11:46:55.443178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:55.196 [2024-11-02 11:46:55.443196] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:55.196 [2024-11-02 11:46:55.443444] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:55.196 [2024-11-02 11:46:55.443687] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:55.196 [2024-11-02 11:46:55.443710] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:55.196 [2024-11-02 11:46:55.443725] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:55.196 [2024-11-02 11:46:55.447283] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:55.196 [2024-11-02 11:46:55.456707] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:55.196 [2024-11-02 11:46:55.457125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.196 [2024-11-02 11:46:55.457156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:55.196 [2024-11-02 11:46:55.457174] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:55.196 [2024-11-02 11:46:55.457423] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:55.196 [2024-11-02 11:46:55.457665] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:55.196 [2024-11-02 11:46:55.457688] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:55.196 [2024-11-02 11:46:55.457703] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:55.196 [2024-11-02 11:46:55.461250] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:55.196 [2024-11-02 11:46:55.470708] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:55.196 [2024-11-02 11:46:55.471132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.196 [2024-11-02 11:46:55.471162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:55.196 [2024-11-02 11:46:55.471180] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:55.196 [2024-11-02 11:46:55.471430] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:55.196 [2024-11-02 11:46:55.471673] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:55.196 [2024-11-02 11:46:55.471696] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:55.196 [2024-11-02 11:46:55.471711] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:55.196 [2024-11-02 11:46:55.475274] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:55.196 [2024-11-02 11:46:55.484704] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:55.196 [2024-11-02 11:46:55.485110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.196 [2024-11-02 11:46:55.485142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:55.196 [2024-11-02 11:46:55.485159] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:55.196 [2024-11-02 11:46:55.485405] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:55.196 [2024-11-02 11:46:55.485647] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:55.196 [2024-11-02 11:46:55.485670] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:55.196 [2024-11-02 11:46:55.485684] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:55.196 [2024-11-02 11:46:55.489241] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:55.196 [2024-11-02 11:46:55.498682] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:55.196 [2024-11-02 11:46:55.499182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.196 [2024-11-02 11:46:55.499237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:55.196 [2024-11-02 11:46:55.499266] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:55.196 [2024-11-02 11:46:55.499508] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:55.196 [2024-11-02 11:46:55.499750] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:55.196 [2024-11-02 11:46:55.499773] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:55.196 [2024-11-02 11:46:55.499787] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:55.196 [2024-11-02 11:46:55.503344] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:55.196 [2024-11-02 11:46:55.512566] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:55.196 [2024-11-02 11:46:55.512989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.196 [2024-11-02 11:46:55.513020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:55.196 [2024-11-02 11:46:55.513043] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:55.196 [2024-11-02 11:46:55.513294] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:55.196 [2024-11-02 11:46:55.513536] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:55.196 [2024-11-02 11:46:55.513559] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:55.196 [2024-11-02 11:46:55.513574] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:55.196 [2024-11-02 11:46:55.517121] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:55.196 [2024-11-02 11:46:55.526551] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:55.196 [2024-11-02 11:46:55.526984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.196 [2024-11-02 11:46:55.527015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:55.196 [2024-11-02 11:46:55.527032] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:55.196 [2024-11-02 11:46:55.527279] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:55.196 [2024-11-02 11:46:55.527521] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:55.196 [2024-11-02 11:46:55.527544] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:55.196 [2024-11-02 11:46:55.527559] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:55.196 [2024-11-02 11:46:55.531109] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:55.196 [2024-11-02 11:46:55.540545] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:55.196 [2024-11-02 11:46:55.540941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.196 [2024-11-02 11:46:55.540972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:55.196 [2024-11-02 11:46:55.540991] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:55.196 [2024-11-02 11:46:55.541228] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:55.196 [2024-11-02 11:46:55.541480] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:55.196 [2024-11-02 11:46:55.541504] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:55.196 [2024-11-02 11:46:55.541519] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:55.196 [2024-11-02 11:46:55.545069] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:55.196 [2024-11-02 11:46:55.554586] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:55.196 [2024-11-02 11:46:55.554990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.196 [2024-11-02 11:46:55.555021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:55.196 [2024-11-02 11:46:55.555039] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:55.196 [2024-11-02 11:46:55.555287] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:55.196 [2024-11-02 11:46:55.555535] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:55.196 [2024-11-02 11:46:55.555559] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:55.196 [2024-11-02 11:46:55.555573] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:55.196 [2024-11-02 11:46:55.559122] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:55.196 [2024-11-02 11:46:55.568583] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:55.196 [2024-11-02 11:46:55.568977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.196 [2024-11-02 11:46:55.569009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:55.197 [2024-11-02 11:46:55.569027] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:55.197 [2024-11-02 11:46:55.569274] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:55.197 [2024-11-02 11:46:55.569528] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:55.197 [2024-11-02 11:46:55.569552] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:55.197 [2024-11-02 11:46:55.569568] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:55.197 [2024-11-02 11:46:55.573119] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:55.197 [2024-11-02 11:46:55.582564] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:55.197 [2024-11-02 11:46:55.582986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.197 [2024-11-02 11:46:55.583017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:55.197 [2024-11-02 11:46:55.583034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:55.197 [2024-11-02 11:46:55.583282] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:55.197 [2024-11-02 11:46:55.583524] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:55.197 [2024-11-02 11:46:55.583548] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:55.197 [2024-11-02 11:46:55.583563] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:55.197 [2024-11-02 11:46:55.587113] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:55.458 [2024-11-02 11:46:55.596502] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:55.458 [2024-11-02 11:46:55.597055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.458 [2024-11-02 11:46:55.597094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:55.458 [2024-11-02 11:46:55.597114] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:55.458 [2024-11-02 11:46:55.597367] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:55.458 [2024-11-02 11:46:55.597610] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:55.458 [2024-11-02 11:46:55.597634] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:55.458 [2024-11-02 11:46:55.597655] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:55.458 [2024-11-02 11:46:55.601298] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:55.458 [2024-11-02 11:46:55.610330] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:55.458 [2024-11-02 11:46:55.610765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.458 [2024-11-02 11:46:55.610796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:55.458 [2024-11-02 11:46:55.610814] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:55.458 [2024-11-02 11:46:55.611052] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:55.458 [2024-11-02 11:46:55.611305] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:55.458 [2024-11-02 11:46:55.611329] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:55.458 [2024-11-02 11:46:55.611344] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:55.458 [2024-11-02 11:46:55.614896] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:55.458 [2024-11-02 11:46:55.624335] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:55.458 [2024-11-02 11:46:55.624765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.458 [2024-11-02 11:46:55.624796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:55.458 [2024-11-02 11:46:55.624813] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:55.458 [2024-11-02 11:46:55.625050] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:55.458 [2024-11-02 11:46:55.625308] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:55.458 [2024-11-02 11:46:55.625333] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:55.458 [2024-11-02 11:46:55.625347] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:55.458 [2024-11-02 11:46:55.628920] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:55.458 [2024-11-02 11:46:55.638151] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:55.458 [2024-11-02 11:46:55.638571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.458 [2024-11-02 11:46:55.638603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:55.458 [2024-11-02 11:46:55.638621] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:55.458 [2024-11-02 11:46:55.638858] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:55.458 [2024-11-02 11:46:55.639099] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:55.458 [2024-11-02 11:46:55.639123] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:55.458 [2024-11-02 11:46:55.639137] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:55.458 [2024-11-02 11:46:55.642704] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:55.458 [2024-11-02 11:46:55.652143] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:55.458 [2024-11-02 11:46:55.652547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.458 [2024-11-02 11:46:55.652579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:55.458 [2024-11-02 11:46:55.652597] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:55.458 [2024-11-02 11:46:55.652835] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:55.458 [2024-11-02 11:46:55.653077] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:55.458 [2024-11-02 11:46:55.653100] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:55.458 [2024-11-02 11:46:55.653115] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:55.458 [2024-11-02 11:46:55.656679] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:55.458 [2024-11-02 11:46:55.666110] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:55.458 [2024-11-02 11:46:55.666548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.458 [2024-11-02 11:46:55.666579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:55.458 [2024-11-02 11:46:55.666597] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:55.458 [2024-11-02 11:46:55.666834] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:55.458 [2024-11-02 11:46:55.667075] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:55.458 [2024-11-02 11:46:55.667098] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:55.458 [2024-11-02 11:46:55.667113] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:55.458 [2024-11-02 11:46:55.670666] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:55.458 [2024-11-02 11:46:55.680079] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:55.458 [2024-11-02 11:46:55.680517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.458 [2024-11-02 11:46:55.680549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:55.458 [2024-11-02 11:46:55.680566] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:55.458 [2024-11-02 11:46:55.680803] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:55.458 [2024-11-02 11:46:55.681045] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:55.458 [2024-11-02 11:46:55.681068] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:55.458 [2024-11-02 11:46:55.681083] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:55.458 [2024-11-02 11:46:55.684646] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:55.458 [2024-11-02 11:46:55.694089] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:55.458 [2024-11-02 11:46:55.694526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.458 [2024-11-02 11:46:55.694557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:55.458 [2024-11-02 11:46:55.694586] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:55.458 [2024-11-02 11:46:55.694824] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:55.458 [2024-11-02 11:46:55.695066] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:55.458 [2024-11-02 11:46:55.695089] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:55.458 [2024-11-02 11:46:55.695104] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:55.459 [2024-11-02 11:46:55.698662] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:55.459 [2024-11-02 11:46:55.708105] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:55.459 [2024-11-02 11:46:55.708516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.459 [2024-11-02 11:46:55.708558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:55.459 [2024-11-02 11:46:55.708576] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:55.459 [2024-11-02 11:46:55.708813] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:55.459 [2024-11-02 11:46:55.709055] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:55.459 [2024-11-02 11:46:55.709078] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:55.459 [2024-11-02 11:46:55.709093] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:55.459 [2024-11-02 11:46:55.712651] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:55.459 [2024-11-02 11:46:55.722073] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:55.459 [2024-11-02 11:46:55.722479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.459 [2024-11-02 11:46:55.722510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:55.459 [2024-11-02 11:46:55.722528] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:55.459 [2024-11-02 11:46:55.722765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:55.459 [2024-11-02 11:46:55.723006] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:55.459 [2024-11-02 11:46:55.723029] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:55.459 [2024-11-02 11:46:55.723045] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:55.459 [2024-11-02 11:46:55.726607] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:55.459 [2024-11-02 11:46:55.736026] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:55.459 [2024-11-02 11:46:55.736430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.459 [2024-11-02 11:46:55.736461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:55.459 [2024-11-02 11:46:55.736479] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:55.459 [2024-11-02 11:46:55.736716] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:55.459 [2024-11-02 11:46:55.736964] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:55.459 [2024-11-02 11:46:55.736987] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:55.459 [2024-11-02 11:46:55.737002] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:55.459 [2024-11-02 11:46:55.740561] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:55.459 [2024-11-02 11:46:55.749978] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:55.459 [2024-11-02 11:46:55.750406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.459 [2024-11-02 11:46:55.750438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:55.459 [2024-11-02 11:46:55.750455] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:55.459 [2024-11-02 11:46:55.750692] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:55.459 [2024-11-02 11:46:55.750934] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:55.459 [2024-11-02 11:46:55.750957] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:55.459 [2024-11-02 11:46:55.750972] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:55.459 [2024-11-02 11:46:55.754532] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:55.459 [2024-11-02 11:46:55.763964] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:55.459 [2024-11-02 11:46:55.764383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.459 [2024-11-02 11:46:55.764415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:55.459 [2024-11-02 11:46:55.764433] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:55.459 [2024-11-02 11:46:55.764670] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:55.459 [2024-11-02 11:46:55.764911] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:55.459 [2024-11-02 11:46:55.764934] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:55.459 [2024-11-02 11:46:55.764949] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:55.459 [2024-11-02 11:46:55.768507] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:55.459 [2024-11-02 11:46:55.777937] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:55.459 [2024-11-02 11:46:55.778359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.459 [2024-11-02 11:46:55.778390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:55.459 [2024-11-02 11:46:55.778407] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:55.459 [2024-11-02 11:46:55.778644] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:55.459 [2024-11-02 11:46:55.778886] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:55.459 [2024-11-02 11:46:55.778909] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:55.459 [2024-11-02 11:46:55.778929] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:55.459 [2024-11-02 11:46:55.782490] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:55.459 [2024-11-02 11:46:55.791911] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:55.459 [2024-11-02 11:46:55.792317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.459 [2024-11-02 11:46:55.792348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:55.459 [2024-11-02 11:46:55.792366] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:55.459 [2024-11-02 11:46:55.792603] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:55.459 [2024-11-02 11:46:55.792845] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:55.459 [2024-11-02 11:46:55.792869] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:55.459 [2024-11-02 11:46:55.792883] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:55.459 [2024-11-02 11:46:55.796438] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:55.459 [2024-11-02 11:46:55.805861] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:55.459 [2024-11-02 11:46:55.806265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.459 [2024-11-02 11:46:55.806297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:55.459 [2024-11-02 11:46:55.806314] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:55.459 [2024-11-02 11:46:55.806552] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:55.459 [2024-11-02 11:46:55.806793] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:55.459 [2024-11-02 11:46:55.806816] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:55.459 [2024-11-02 11:46:55.806831] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:55.459 [2024-11-02 11:46:55.810385] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:55.459 [2024-11-02 11:46:55.819806] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:55.459 [2024-11-02 11:46:55.820228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.459 [2024-11-02 11:46:55.820266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:55.459 [2024-11-02 11:46:55.820286] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:55.459 [2024-11-02 11:46:55.820523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:55.459 [2024-11-02 11:46:55.820764] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:55.459 [2024-11-02 11:46:55.820788] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:55.459 [2024-11-02 11:46:55.820802] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:55.459 [2024-11-02 11:46:55.824358] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:55.459 [2024-11-02 11:46:55.833798] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:55.459 [2024-11-02 11:46:55.834194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.459 [2024-11-02 11:46:55.834225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:55.459 [2024-11-02 11:46:55.834243] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:55.459 [2024-11-02 11:46:55.834492] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:55.459 [2024-11-02 11:46:55.834734] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:55.459 [2024-11-02 11:46:55.834757] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:55.459 [2024-11-02 11:46:55.834772] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:55.459 [2024-11-02 11:46:55.838326] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:55.460 [2024-11-02 11:46:55.847757] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:55.460 [2024-11-02 11:46:55.848180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.460 [2024-11-02 11:46:55.848211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:55.460 [2024-11-02 11:46:55.848229] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:55.460 [2024-11-02 11:46:55.848478] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:55.460 [2024-11-02 11:46:55.848721] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:55.460 [2024-11-02 11:46:55.848744] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:55.460 [2024-11-02 11:46:55.848759] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:55.460 [2024-11-02 11:46:55.852317] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:55.720 [2024-11-02 11:46:55.861728] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:55.720 [2024-11-02 11:46:55.862127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.720 [2024-11-02 11:46:55.862159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:55.720 [2024-11-02 11:46:55.862177] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:55.720 [2024-11-02 11:46:55.862428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:55.720 [2024-11-02 11:46:55.862671] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:55.720 [2024-11-02 11:46:55.862694] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:55.720 [2024-11-02 11:46:55.862709] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:55.720 [2024-11-02 11:46:55.866371] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:55.720 5552.25 IOPS, 21.69 MiB/s [2024-11-02T10:46:56.122Z] [2024-11-02 11:46:55.875632] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:55.720 [2024-11-02 11:46:55.876060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.720 [2024-11-02 11:46:55.876092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:55.720 [2024-11-02 11:46:55.876117] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:55.720 [2024-11-02 11:46:55.876369] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:55.720 [2024-11-02 11:46:55.876611] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:55.720 [2024-11-02 11:46:55.876634] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:55.720 [2024-11-02 11:46:55.876649] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:55.720 [2024-11-02 11:46:55.880195] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
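On the throughput sample interleaved above ("5552.25 IOPS, 21.69 MiB/s"): the two figures are consistent with a 4 KiB I/O size, since 5552.25 IOPS x 4096 B = 22,742,016 B/s ≈ 21.69 MiB/s. That sample also carries its own ISO-8601 timestamp with a Z (UTC) suffix, which is likely why it reads 10:46:56Z alongside log entries stamped 11:46:55 by a UTC+1 local clock.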
00:34:55.720 [2024-11-02 11:46:55.889628] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:55.720 [2024-11-02 11:46:55.890049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.720 [2024-11-02 11:46:55.890081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:55.720 [2024-11-02 11:46:55.890099] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:55.720 [2024-11-02 11:46:55.890351] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:55.720 [2024-11-02 11:46:55.890594] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:55.720 [2024-11-02 11:46:55.890617] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:55.721 [2024-11-02 11:46:55.890632] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:55.721 [2024-11-02 11:46:55.894181] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:55.721 [2024-11-02 11:46:55.903620] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:55.721 [2024-11-02 11:46:55.904012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.721 [2024-11-02 11:46:55.904043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:55.721 [2024-11-02 11:46:55.904061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:55.721 [2024-11-02 11:46:55.904310] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:55.721 [2024-11-02 11:46:55.904555] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:55.721 [2024-11-02 11:46:55.904579] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:55.721 [2024-11-02 11:46:55.904594] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:55.721 [2024-11-02 11:46:55.908142] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:55.721 [2024-11-02 11:46:55.917581] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:55.721 [2024-11-02 11:46:55.918005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.721 [2024-11-02 11:46:55.918035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:55.721 [2024-11-02 11:46:55.918053] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:55.721 [2024-11-02 11:46:55.918301] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:55.721 [2024-11-02 11:46:55.918554] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:55.721 [2024-11-02 11:46:55.918577] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:55.721 [2024-11-02 11:46:55.918593] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:55.721 [2024-11-02 11:46:55.922140] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:55.721 [2024-11-02 11:46:55.931580] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:55.721 [2024-11-02 11:46:55.932002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.721 [2024-11-02 11:46:55.932032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:55.721 [2024-11-02 11:46:55.932050] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:55.721 [2024-11-02 11:46:55.932300] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:55.721 [2024-11-02 11:46:55.932545] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:55.721 [2024-11-02 11:46:55.932568] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:55.721 [2024-11-02 11:46:55.932583] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:55.721 [2024-11-02 11:46:55.936160] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:55.721 [2024-11-02 11:46:55.945592] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:55.721 [2024-11-02 11:46:55.946024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.721 [2024-11-02 11:46:55.946055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:55.721 [2024-11-02 11:46:55.946073] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:55.721 [2024-11-02 11:46:55.946323] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:55.721 [2024-11-02 11:46:55.946565] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:55.721 [2024-11-02 11:46:55.946587] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:55.721 [2024-11-02 11:46:55.946602] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:55.721 [2024-11-02 11:46:55.950145] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:55.721 [2024-11-02 11:46:55.959578] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:55.721 [2024-11-02 11:46:55.960001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.721 [2024-11-02 11:46:55.960033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:55.721 [2024-11-02 11:46:55.960051] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:55.721 [2024-11-02 11:46:55.960299] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:55.721 [2024-11-02 11:46:55.960543] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:55.721 [2024-11-02 11:46:55.960566] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:55.721 [2024-11-02 11:46:55.960587] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:55.721 [2024-11-02 11:46:55.964137] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:55.721 [2024-11-02 11:46:55.973582] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:55.721 [2024-11-02 11:46:55.973958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.721 [2024-11-02 11:46:55.973990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:55.721 [2024-11-02 11:46:55.974008] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:55.721 [2024-11-02 11:46:55.974246] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:55.721 [2024-11-02 11:46:55.974504] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:55.721 [2024-11-02 11:46:55.974527] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:55.721 [2024-11-02 11:46:55.974543] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:55.721 [2024-11-02 11:46:55.978089] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:55.721 [2024-11-02 11:46:55.987520] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:55.721 [2024-11-02 11:46:55.987928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.721 [2024-11-02 11:46:55.987960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:55.721 [2024-11-02 11:46:55.987978] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:55.721 [2024-11-02 11:46:55.988215] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:55.721 [2024-11-02 11:46:55.988467] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:55.721 [2024-11-02 11:46:55.988491] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:55.721 [2024-11-02 11:46:55.988506] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:55.721 [2024-11-02 11:46:55.992055] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:55.721 [2024-11-02 11:46:56.001496] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:55.721 [2024-11-02 11:46:56.001927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.721 [2024-11-02 11:46:56.001966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:55.721 [2024-11-02 11:46:56.001984] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:55.721 [2024-11-02 11:46:56.002220] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:55.721 [2024-11-02 11:46:56.002472] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:55.721 [2024-11-02 11:46:56.002496] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:55.721 [2024-11-02 11:46:56.002511] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:55.721 [2024-11-02 11:46:56.006056] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:55.721 [2024-11-02 11:46:56.015494] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:55.721 [2024-11-02 11:46:56.015923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.721 [2024-11-02 11:46:56.015956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:55.721 [2024-11-02 11:46:56.015974] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:55.721 [2024-11-02 11:46:56.016211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:55.721 [2024-11-02 11:46:56.016464] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:55.721 [2024-11-02 11:46:56.016488] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:55.721 [2024-11-02 11:46:56.016503] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:55.721 [2024-11-02 11:46:56.020051] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:55.721 [2024-11-02 11:46:56.029484] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:55.721 [2024-11-02 11:46:56.029892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.721 [2024-11-02 11:46:56.029923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:55.721 [2024-11-02 11:46:56.029940] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:55.721 [2024-11-02 11:46:56.030177] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:55.721 [2024-11-02 11:46:56.030427] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:55.722 [2024-11-02 11:46:56.030451] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:55.722 [2024-11-02 11:46:56.030466] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:55.722 [2024-11-02 11:46:56.034008] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:55.722 [2024-11-02 11:46:56.043456] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:55.722 [2024-11-02 11:46:56.043888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.722 [2024-11-02 11:46:56.043923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:55.722 [2024-11-02 11:46:56.043941] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:55.722 [2024-11-02 11:46:56.044177] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:55.722 [2024-11-02 11:46:56.044428] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:55.722 [2024-11-02 11:46:56.044452] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:55.722 [2024-11-02 11:46:56.044467] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:55.722 [2024-11-02 11:46:56.048021] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:55.722 [2024-11-02 11:46:56.057451] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:55.722 [2024-11-02 11:46:56.057846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.722 [2024-11-02 11:46:56.057882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:55.722 [2024-11-02 11:46:56.057901] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:55.722 [2024-11-02 11:46:56.058138] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:55.722 [2024-11-02 11:46:56.058390] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:55.722 [2024-11-02 11:46:56.058414] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:55.722 [2024-11-02 11:46:56.058430] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:55.722 [2024-11-02 11:46:56.061981] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:55.722 [2024-11-02 11:46:56.071489] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:55.722 [2024-11-02 11:46:56.071912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.722 [2024-11-02 11:46:56.071945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:55.722 [2024-11-02 11:46:56.071962] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:55.722 [2024-11-02 11:46:56.072200] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:55.722 [2024-11-02 11:46:56.072450] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:55.722 [2024-11-02 11:46:56.072474] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:55.722 [2024-11-02 11:46:56.072489] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:55.722 [2024-11-02 11:46:56.076055] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:55.722 [2024-11-02 11:46:56.085492] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:55.722 [2024-11-02 11:46:56.085898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.722 [2024-11-02 11:46:56.085929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:55.722 [2024-11-02 11:46:56.085947] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:55.722 [2024-11-02 11:46:56.086183] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:55.722 [2024-11-02 11:46:56.086435] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:55.722 [2024-11-02 11:46:56.086459] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:55.722 [2024-11-02 11:46:56.086474] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:55.722 [2024-11-02 11:46:56.090022] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:55.722 [2024-11-02 11:46:56.099458] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:55.722 [2024-11-02 11:46:56.099853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.722 [2024-11-02 11:46:56.099884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:55.722 [2024-11-02 11:46:56.099902] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:55.722 [2024-11-02 11:46:56.100139] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:55.722 [2024-11-02 11:46:56.100400] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:55.722 [2024-11-02 11:46:56.100425] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:55.722 [2024-11-02 11:46:56.100439] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:55.722 [2024-11-02 11:46:56.103985] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:55.722 [2024-11-02 11:46:56.113426] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:55.722 [2024-11-02 11:46:56.113849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.722 [2024-11-02 11:46:56.113879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:55.722 [2024-11-02 11:46:56.113897] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:55.722 [2024-11-02 11:46:56.114133] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:55.722 [2024-11-02 11:46:56.114384] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:55.722 [2024-11-02 11:46:56.114409] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:55.722 [2024-11-02 11:46:56.114424] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:55.722 [2024-11-02 11:46:56.118029] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:55.984 [2024-11-02 11:46:56.127473] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:55.984 [2024-11-02 11:46:56.127870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.984 [2024-11-02 11:46:56.127902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:55.984 [2024-11-02 11:46:56.127920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:55.984 [2024-11-02 11:46:56.128157] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:55.984 [2024-11-02 11:46:56.128408] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:55.984 [2024-11-02 11:46:56.128432] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:55.984 [2024-11-02 11:46:56.128448] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:55.984 [2024-11-02 11:46:56.131995] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:55.984 [2024-11-02 11:46:56.141436] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:55.984 [2024-11-02 11:46:56.141855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.984 [2024-11-02 11:46:56.141886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:55.984 [2024-11-02 11:46:56.141904] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:55.984 [2024-11-02 11:46:56.142141] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:55.984 [2024-11-02 11:46:56.142394] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:55.984 [2024-11-02 11:46:56.142418] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:55.984 [2024-11-02 11:46:56.142439] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:55.984 [2024-11-02 11:46:56.145988] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:55.984 [2024-11-02 11:46:56.155423] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:55.984 [2024-11-02 11:46:56.155818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.984 [2024-11-02 11:46:56.155849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:55.984 [2024-11-02 11:46:56.155867] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:55.984 [2024-11-02 11:46:56.156103] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:55.984 [2024-11-02 11:46:56.156356] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:55.984 [2024-11-02 11:46:56.156380] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:55.984 [2024-11-02 11:46:56.156395] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:55.984 [2024-11-02 11:46:56.159939] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:55.984 [2024-11-02 11:46:56.169393] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:55.984 [2024-11-02 11:46:56.169769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.984 [2024-11-02 11:46:56.169800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:55.984 [2024-11-02 11:46:56.169817] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:55.984 [2024-11-02 11:46:56.170054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:55.984 [2024-11-02 11:46:56.170305] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:55.984 [2024-11-02 11:46:56.170329] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:55.984 [2024-11-02 11:46:56.170345] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:55.984 [2024-11-02 11:46:56.173892] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:55.984 [2024-11-02 11:46:56.183329] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:55.984 [2024-11-02 11:46:56.183751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.984 [2024-11-02 11:46:56.183782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:55.984 [2024-11-02 11:46:56.183799] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:55.984 [2024-11-02 11:46:56.184036] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:55.984 [2024-11-02 11:46:56.184290] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:55.984 [2024-11-02 11:46:56.184314] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:55.984 [2024-11-02 11:46:56.184329] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:55.984 [2024-11-02 11:46:56.187873] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:55.984 [2024-11-02 11:46:56.197299] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:55.984 [2024-11-02 11:46:56.197689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.984 [2024-11-02 11:46:56.197720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:55.984 [2024-11-02 11:46:56.197737] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:55.985 [2024-11-02 11:46:56.197974] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:55.985 [2024-11-02 11:46:56.198215] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:55.985 [2024-11-02 11:46:56.198238] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:55.985 [2024-11-02 11:46:56.198253] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:55.985 [2024-11-02 11:46:56.201814] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:55.985 [2024-11-02 11:46:56.211237] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:55.985 [2024-11-02 11:46:56.211686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.985 [2024-11-02 11:46:56.211718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:55.985 [2024-11-02 11:46:56.211735] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:55.985 [2024-11-02 11:46:56.211972] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:55.985 [2024-11-02 11:46:56.212213] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:55.985 [2024-11-02 11:46:56.212236] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:55.985 [2024-11-02 11:46:56.212251] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:55.985 [2024-11-02 11:46:56.215811] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:55.985 [2024-11-02 11:46:56.225226] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:55.985 [2024-11-02 11:46:56.225661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.985 [2024-11-02 11:46:56.225692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:55.985 [2024-11-02 11:46:56.225710] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:55.985 [2024-11-02 11:46:56.225946] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:55.985 [2024-11-02 11:46:56.226187] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:55.985 [2024-11-02 11:46:56.226211] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:55.985 [2024-11-02 11:46:56.226226] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:55.985 [2024-11-02 11:46:56.229779] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:55.985 [2024-11-02 11:46:56.239193] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:55.985 [2024-11-02 11:46:56.239621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.985 [2024-11-02 11:46:56.239657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:55.985 [2024-11-02 11:46:56.239676] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:55.985 [2024-11-02 11:46:56.239912] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:55.985 [2024-11-02 11:46:56.240152] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:55.985 [2024-11-02 11:46:56.240176] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:55.985 [2024-11-02 11:46:56.240191] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:55.985 [2024-11-02 11:46:56.243747] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:55.985 [2024-11-02 11:46:56.253165] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:55.985 [2024-11-02 11:46:56.253615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.985 [2024-11-02 11:46:56.253647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:55.985 [2024-11-02 11:46:56.253665] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:55.985 [2024-11-02 11:46:56.253902] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:55.985 [2024-11-02 11:46:56.254143] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:55.985 [2024-11-02 11:46:56.254167] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:55.985 [2024-11-02 11:46:56.254181] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:55.985 [2024-11-02 11:46:56.257739] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:55.985 [2024-11-02 11:46:56.267183] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:55.985 [2024-11-02 11:46:56.267614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.985 [2024-11-02 11:46:56.267646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:55.985 [2024-11-02 11:46:56.267663] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:55.985 [2024-11-02 11:46:56.267900] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:55.985 [2024-11-02 11:46:56.268141] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:55.985 [2024-11-02 11:46:56.268164] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:55.985 [2024-11-02 11:46:56.268178] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:55.985 [2024-11-02 11:46:56.271734] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:55.985 [2024-11-02 11:46:56.281157] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:55.985 [2024-11-02 11:46:56.281589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.985 [2024-11-02 11:46:56.281621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:55.985 [2024-11-02 11:46:56.281639] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:55.985 [2024-11-02 11:46:56.281881] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:55.985 [2024-11-02 11:46:56.282123] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:55.985 [2024-11-02 11:46:56.282146] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:55.985 [2024-11-02 11:46:56.282161] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:55.985 [2024-11-02 11:46:56.285712] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:55.985 [2024-11-02 11:46:56.295119] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:55.985 [2024-11-02 11:46:56.295525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.985 [2024-11-02 11:46:56.295556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:55.985 [2024-11-02 11:46:56.295574] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:55.985 [2024-11-02 11:46:56.295811] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:55.985 [2024-11-02 11:46:56.296052] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:55.985 [2024-11-02 11:46:56.296076] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:55.985 [2024-11-02 11:46:56.296091] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:55.985 [2024-11-02 11:46:56.299641] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:55.985 [2024-11-02 11:46:56.309061] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:55.985 [2024-11-02 11:46:56.309468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.985 [2024-11-02 11:46:56.309499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:55.985 [2024-11-02 11:46:56.309516] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:55.985 [2024-11-02 11:46:56.309752] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:55.985 [2024-11-02 11:46:56.309994] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:55.985 [2024-11-02 11:46:56.310017] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:55.985 [2024-11-02 11:46:56.310032] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:55.985 [2024-11-02 11:46:56.313584] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:55.985 [2024-11-02 11:46:56.323030] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:55.985 [2024-11-02 11:46:56.323467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.985 [2024-11-02 11:46:56.323498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:55.985 [2024-11-02 11:46:56.323516] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:55.985 [2024-11-02 11:46:56.323753] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:55.985 [2024-11-02 11:46:56.323994] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:55.985 [2024-11-02 11:46:56.324017] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:55.985 [2024-11-02 11:46:56.324038] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:55.985 [2024-11-02 11:46:56.327597] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:55.985 [2024-11-02 11:46:56.337014] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:55.985 [2024-11-02 11:46:56.337393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.985 [2024-11-02 11:46:56.337424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:55.985 [2024-11-02 11:46:56.337441] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:55.986 [2024-11-02 11:46:56.337679] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:55.986 [2024-11-02 11:46:56.337920] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:55.986 [2024-11-02 11:46:56.337943] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:55.986 [2024-11-02 11:46:56.337957] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:55.986 [2024-11-02 11:46:56.341513] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:55.986 [2024-11-02 11:46:56.350928] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:55.986 [2024-11-02 11:46:56.351349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.986 [2024-11-02 11:46:56.351381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:55.986 [2024-11-02 11:46:56.351399] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:55.986 [2024-11-02 11:46:56.351635] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:55.986 [2024-11-02 11:46:56.351876] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:55.986 [2024-11-02 11:46:56.351899] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:55.986 [2024-11-02 11:46:56.351914] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:55.986 [2024-11-02 11:46:56.355470] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:55.986 [2024-11-02 11:46:56.364885] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:55.986 [2024-11-02 11:46:56.365323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.986 [2024-11-02 11:46:56.365355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:55.986 [2024-11-02 11:46:56.365372] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:55.986 [2024-11-02 11:46:56.365609] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:55.986 [2024-11-02 11:46:56.365863] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:55.986 [2024-11-02 11:46:56.365888] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:55.986 [2024-11-02 11:46:56.365903] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:55.986 [2024-11-02 11:46:56.369461] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:55.986 [2024-11-02 11:46:56.378939] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:55.986 [2024-11-02 11:46:56.379345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.986 [2024-11-02 11:46:56.379378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:55.986 [2024-11-02 11:46:56.379396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:55.986 [2024-11-02 11:46:56.379634] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:55.986 [2024-11-02 11:46:56.379876] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:55.986 [2024-11-02 11:46:56.379899] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:55.986 [2024-11-02 11:46:56.379914] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:55.986 [2024-11-02 11:46:56.383565] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:56.248 [2024-11-02 11:46:56.392923] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:56.248 [2024-11-02 11:46:56.393334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.248 [2024-11-02 11:46:56.393366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:56.248 [2024-11-02 11:46:56.393385] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:56.248 [2024-11-02 11:46:56.393622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:56.248 [2024-11-02 11:46:56.393864] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:56.248 [2024-11-02 11:46:56.393887] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:56.248 [2024-11-02 11:46:56.393901] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:56.248 [2024-11-02 11:46:56.397458] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:56.248 [2024-11-02 11:46:56.406876] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:56.248 [2024-11-02 11:46:56.407286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.248 [2024-11-02 11:46:56.407318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:56.248 [2024-11-02 11:46:56.407336] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:56.248 [2024-11-02 11:46:56.407574] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:56.248 [2024-11-02 11:46:56.407815] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:56.248 [2024-11-02 11:46:56.407838] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:56.248 [2024-11-02 11:46:56.407852] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:56.248 [2024-11-02 11:46:56.411408] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:56.248 [2024-11-02 11:46:56.420818] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:56.248 [2024-11-02 11:46:56.421244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.248 [2024-11-02 11:46:56.421289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:56.248 [2024-11-02 11:46:56.421308] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:56.248 [2024-11-02 11:46:56.421545] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:56.248 [2024-11-02 11:46:56.421786] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:56.248 [2024-11-02 11:46:56.421810] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:56.248 [2024-11-02 11:46:56.421824] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:56.248 [2024-11-02 11:46:56.425380] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:56.248 [2024-11-02 11:46:56.434798] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:56.248 [2024-11-02 11:46:56.435221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.248 [2024-11-02 11:46:56.435252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:56.248 [2024-11-02 11:46:56.435281] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:56.248 [2024-11-02 11:46:56.435519] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:56.248 [2024-11-02 11:46:56.435760] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:56.248 [2024-11-02 11:46:56.435783] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:56.248 [2024-11-02 11:46:56.435799] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:56.248 [2024-11-02 11:46:56.439348] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:56.248 [2024-11-02 11:46:56.448760] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:56.248 [2024-11-02 11:46:56.449186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.248 [2024-11-02 11:46:56.449217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:56.248 [2024-11-02 11:46:56.449235] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:56.248 [2024-11-02 11:46:56.449482] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:56.248 [2024-11-02 11:46:56.449724] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:56.248 [2024-11-02 11:46:56.449747] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:56.248 [2024-11-02 11:46:56.449762] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:56.248 [2024-11-02 11:46:56.453315] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:56.248 [2024-11-02 11:46:56.462729] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:56.248 [2024-11-02 11:46:56.463139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.248 [2024-11-02 11:46:56.463171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:56.248 [2024-11-02 11:46:56.463190] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:56.248 [2024-11-02 11:46:56.463446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:56.248 [2024-11-02 11:46:56.463688] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:56.248 [2024-11-02 11:46:56.463711] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:56.248 [2024-11-02 11:46:56.463726] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:56.248 [2024-11-02 11:46:56.467292] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:56.248 [2024-11-02 11:46:56.476718] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:56.248 [2024-11-02 11:46:56.477150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.248 [2024-11-02 11:46:56.477181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:56.248 [2024-11-02 11:46:56.477199] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:56.248 [2024-11-02 11:46:56.477448] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:56.248 [2024-11-02 11:46:56.477690] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:56.248 [2024-11-02 11:46:56.477713] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:56.248 [2024-11-02 11:46:56.477728] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:56.248 [2024-11-02 11:46:56.481278] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:56.248 [2024-11-02 11:46:56.490704] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:56.248 [2024-11-02 11:46:56.491127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.248 [2024-11-02 11:46:56.491159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:56.248 [2024-11-02 11:46:56.491179] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:56.248 [2024-11-02 11:46:56.491426] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:56.248 [2024-11-02 11:46:56.491668] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:56.248 [2024-11-02 11:46:56.491691] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:56.248 [2024-11-02 11:46:56.491706] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:56.248 [2024-11-02 11:46:56.495251] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:56.248 [2024-11-02 11:46:56.504670] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:56.248 [2024-11-02 11:46:56.505093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.248 [2024-11-02 11:46:56.505123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:56.248 [2024-11-02 11:46:56.505140] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:56.248 [2024-11-02 11:46:56.505389] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:56.248 [2024-11-02 11:46:56.505630] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:56.248 [2024-11-02 11:46:56.505654] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:56.248 [2024-11-02 11:46:56.505676] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:56.248 [2024-11-02 11:46:56.509220] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:56.249 [2024-11-02 11:46:56.518653] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:56.249 [2024-11-02 11:46:56.519080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.249 [2024-11-02 11:46:56.519111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:56.249 [2024-11-02 11:46:56.519129] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:56.249 [2024-11-02 11:46:56.519377] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:56.249 [2024-11-02 11:46:56.519619] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:56.249 [2024-11-02 11:46:56.519642] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:56.249 [2024-11-02 11:46:56.519657] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:56.249 [2024-11-02 11:46:56.523206] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:56.249 [2024-11-02 11:46:56.532624] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:56.249 [2024-11-02 11:46:56.533025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.249 [2024-11-02 11:46:56.533055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:56.249 [2024-11-02 11:46:56.533072] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:56.249 [2024-11-02 11:46:56.533320] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:56.249 [2024-11-02 11:46:56.533561] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:56.249 [2024-11-02 11:46:56.533584] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:56.249 [2024-11-02 11:46:56.533599] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:56.249 [2024-11-02 11:46:56.537141] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:56.249 [2024-11-02 11:46:56.546553] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:56.249 [2024-11-02 11:46:56.546956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.249 [2024-11-02 11:46:56.546987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:56.249 [2024-11-02 11:46:56.547005] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:56.249 [2024-11-02 11:46:56.547242] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:56.249 [2024-11-02 11:46:56.547495] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:56.249 [2024-11-02 11:46:56.547518] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:56.249 [2024-11-02 11:46:56.547534] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:56.249 [2024-11-02 11:46:56.551077] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:56.249 [2024-11-02 11:46:56.560510] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:56.249 [2024-11-02 11:46:56.560914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.249 [2024-11-02 11:46:56.560946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:56.249 [2024-11-02 11:46:56.560963] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:56.249 [2024-11-02 11:46:56.561200] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:56.249 [2024-11-02 11:46:56.561451] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:56.249 [2024-11-02 11:46:56.561475] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:56.249 [2024-11-02 11:46:56.561490] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:56.249 [2024-11-02 11:46:56.565032] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:56.249 [2024-11-02 11:46:56.574543] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:56.249 [2024-11-02 11:46:56.574952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.249 [2024-11-02 11:46:56.574983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:56.249 [2024-11-02 11:46:56.575001] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:56.249 [2024-11-02 11:46:56.575237] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:56.249 [2024-11-02 11:46:56.575490] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:56.249 [2024-11-02 11:46:56.575514] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:56.249 [2024-11-02 11:46:56.575529] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:56.249 [2024-11-02 11:46:56.579074] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:56.249 [2024-11-02 11:46:56.588496] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:56.249 [2024-11-02 11:46:56.588892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.249 [2024-11-02 11:46:56.588923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:56.249 [2024-11-02 11:46:56.588941] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:56.249 [2024-11-02 11:46:56.589179] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:56.249 [2024-11-02 11:46:56.589431] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:56.249 [2024-11-02 11:46:56.589455] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:56.249 [2024-11-02 11:46:56.589470] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:56.249 [2024-11-02 11:46:56.593015] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:56.249 [2024-11-02 11:46:56.602437] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:56.249 [2024-11-02 11:46:56.602857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.249 [2024-11-02 11:46:56.602894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:56.249 [2024-11-02 11:46:56.602912] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:56.249 [2024-11-02 11:46:56.603148] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:56.249 [2024-11-02 11:46:56.603401] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:56.249 [2024-11-02 11:46:56.603425] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:56.249 [2024-11-02 11:46:56.603440] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:56.249 [2024-11-02 11:46:56.606986] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:56.249 [2024-11-02 11:46:56.616404] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:56.249 [2024-11-02 11:46:56.616829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.249 [2024-11-02 11:46:56.616861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:56.249 [2024-11-02 11:46:56.616879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:56.249 [2024-11-02 11:46:56.617116] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:56.249 [2024-11-02 11:46:56.617371] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:56.249 [2024-11-02 11:46:56.617395] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:56.249 [2024-11-02 11:46:56.617410] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:56.249 [2024-11-02 11:46:56.620953] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:56.249 [2024-11-02 11:46:56.630377] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:56.249 [2024-11-02 11:46:56.630807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.249 [2024-11-02 11:46:56.630838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:56.249 [2024-11-02 11:46:56.630856] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:56.249 [2024-11-02 11:46:56.631093] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:56.249 [2024-11-02 11:46:56.631348] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:56.249 [2024-11-02 11:46:56.631372] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:56.249 [2024-11-02 11:46:56.631387] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:56.249 [2024-11-02 11:46:56.634934] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:56.249 [2024-11-02 11:46:56.644416] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:56.249 [2024-11-02 11:46:56.644821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.249 [2024-11-02 11:46:56.644852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:56.249 [2024-11-02 11:46:56.644871] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:56.249 [2024-11-02 11:46:56.645148] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:56.249 [2024-11-02 11:46:56.645415] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:56.249 [2024-11-02 11:46:56.645440] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:56.249 [2024-11-02 11:46:56.645455] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:56.511 [2024-11-02 11:46:56.649059] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:56.511 [2024-11-02 11:46:56.658394] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:56.511 [2024-11-02 11:46:56.658796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.511 [2024-11-02 11:46:56.658828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:56.511 [2024-11-02 11:46:56.658846] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:56.511 [2024-11-02 11:46:56.659083] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:56.511 [2024-11-02 11:46:56.659337] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:56.511 [2024-11-02 11:46:56.659361] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:56.511 [2024-11-02 11:46:56.659376] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:56.511 [2024-11-02 11:46:56.662920] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:56.511 [2024-11-02 11:46:56.672357] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:56.511 [2024-11-02 11:46:56.672793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.511 [2024-11-02 11:46:56.672824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:56.511 [2024-11-02 11:46:56.672842] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:56.511 [2024-11-02 11:46:56.673079] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:56.511 [2024-11-02 11:46:56.673333] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:56.511 [2024-11-02 11:46:56.673357] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:56.511 [2024-11-02 11:46:56.673372] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:56.511 [2024-11-02 11:46:56.676921] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:56.511 [2024-11-02 11:46:56.686345] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:56.511 [2024-11-02 11:46:56.686768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.512 [2024-11-02 11:46:56.686800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:56.512 [2024-11-02 11:46:56.686818] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:56.512 [2024-11-02 11:46:56.687054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:56.512 [2024-11-02 11:46:56.687308] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:56.512 [2024-11-02 11:46:56.687332] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:56.512 [2024-11-02 11:46:56.687354] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:56.512 [2024-11-02 11:46:56.690902] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:56.512 [2024-11-02 11:46:56.700326] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:56.512 [2024-11-02 11:46:56.700723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.512 [2024-11-02 11:46:56.700754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:56.512 [2024-11-02 11:46:56.700771] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:56.512 [2024-11-02 11:46:56.701008] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:56.512 [2024-11-02 11:46:56.701249] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:56.512 [2024-11-02 11:46:56.701311] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:56.512 [2024-11-02 11:46:56.701328] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:56.512 [2024-11-02 11:46:56.704875] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:56.512 [2024-11-02 11:46:56.714302] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:56.512 [2024-11-02 11:46:56.714722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.512 [2024-11-02 11:46:56.714754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:56.512 [2024-11-02 11:46:56.714772] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:56.512 [2024-11-02 11:46:56.715010] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:56.512 [2024-11-02 11:46:56.715252] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:56.512 [2024-11-02 11:46:56.715286] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:56.512 [2024-11-02 11:46:56.715301] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:56.512 [2024-11-02 11:46:56.718847] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:56.512 [2024-11-02 11:46:56.728269] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:56.512 [2024-11-02 11:46:56.728693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.512 [2024-11-02 11:46:56.728724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:56.512 [2024-11-02 11:46:56.728742] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:56.512 [2024-11-02 11:46:56.728979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:56.512 [2024-11-02 11:46:56.729220] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:56.512 [2024-11-02 11:46:56.729243] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:56.512 [2024-11-02 11:46:56.729267] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:56.512 [2024-11-02 11:46:56.732819] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:56.512 [2024-11-02 11:46:56.742236] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:56.512 [2024-11-02 11:46:56.742668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.512 [2024-11-02 11:46:56.742700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:56.512 [2024-11-02 11:46:56.742717] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:56.512 [2024-11-02 11:46:56.742954] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:56.512 [2024-11-02 11:46:56.743196] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:56.512 [2024-11-02 11:46:56.743220] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:56.512 [2024-11-02 11:46:56.743235] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:56.512 [2024-11-02 11:46:56.746790] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:56.512 [2024-11-02 11:46:56.756206] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:56.512 [2024-11-02 11:46:56.756636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.512 [2024-11-02 11:46:56.756667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:56.512 [2024-11-02 11:46:56.756684] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:56.512 [2024-11-02 11:46:56.756921] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:56.512 [2024-11-02 11:46:56.757162] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:56.512 [2024-11-02 11:46:56.757186] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:56.512 [2024-11-02 11:46:56.757200] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:56.512 [2024-11-02 11:46:56.760757] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:56.512 [2024-11-02 11:46:56.770268] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:56.512 [2024-11-02 11:46:56.770695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.512 [2024-11-02 11:46:56.770726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:56.512 [2024-11-02 11:46:56.770744] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:56.512 [2024-11-02 11:46:56.770981] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:56.512 [2024-11-02 11:46:56.771222] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:56.512 [2024-11-02 11:46:56.771245] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:56.512 [2024-11-02 11:46:56.771272] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:56.512 [2024-11-02 11:46:56.774849] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:56.512 [2024-11-02 11:46:56.784075] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:56.512 [2024-11-02 11:46:56.784501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.512 [2024-11-02 11:46:56.784548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:56.512 [2024-11-02 11:46:56.784567] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:56.512 [2024-11-02 11:46:56.784804] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:56.512 [2024-11-02 11:46:56.785046] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:56.512 [2024-11-02 11:46:56.785069] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:56.512 [2024-11-02 11:46:56.785084] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:56.512 [2024-11-02 11:46:56.788638] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:56.512 [2024-11-02 11:46:56.798056] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:56.512 [2024-11-02 11:46:56.798477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.512 [2024-11-02 11:46:56.798508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:56.512 [2024-11-02 11:46:56.798527] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:56.512 [2024-11-02 11:46:56.798764] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:56.512 [2024-11-02 11:46:56.799006] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:56.512 [2024-11-02 11:46:56.799029] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:56.512 [2024-11-02 11:46:56.799044] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:56.512 [2024-11-02 11:46:56.802597] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:56.512 [2024-11-02 11:46:56.812015] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:56.512 [2024-11-02 11:46:56.812452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.512 [2024-11-02 11:46:56.812483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:56.512 [2024-11-02 11:46:56.812502] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:56.512 [2024-11-02 11:46:56.812739] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:56.512 [2024-11-02 11:46:56.812980] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:56.512 [2024-11-02 11:46:56.813003] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:56.512 [2024-11-02 11:46:56.813018] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:56.512 [2024-11-02 11:46:56.816575] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:56.512 [2024-11-02 11:46:56.825995] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:56.512 [2024-11-02 11:46:56.826409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.512 [2024-11-02 11:46:56.826441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:56.513 [2024-11-02 11:46:56.826459] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:56.513 [2024-11-02 11:46:56.826702] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:56.513 [2024-11-02 11:46:56.826944] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:56.513 [2024-11-02 11:46:56.826967] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:56.513 [2024-11-02 11:46:56.826982] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:56.513 [2024-11-02 11:46:56.830543] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:56.513 [2024-11-02 11:46:56.839967] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:56.513 [2024-11-02 11:46:56.840399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.513 [2024-11-02 11:46:56.840431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:56.513 [2024-11-02 11:46:56.840449] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:56.513 [2024-11-02 11:46:56.840686] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:56.513 [2024-11-02 11:46:56.840928] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:56.513 [2024-11-02 11:46:56.840950] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:56.513 [2024-11-02 11:46:56.840965] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:56.513 [2024-11-02 11:46:56.844544] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:56.513 [2024-11-02 11:46:56.853969] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:56.513 [2024-11-02 11:46:56.854393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.513 [2024-11-02 11:46:56.854426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:56.513 [2024-11-02 11:46:56.854444] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:56.513 [2024-11-02 11:46:56.854681] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:56.513 [2024-11-02 11:46:56.854923] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:56.513 [2024-11-02 11:46:56.854946] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:56.513 [2024-11-02 11:46:56.854960] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:56.513 [2024-11-02 11:46:56.858516] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:56.513 [2024-11-02 11:46:56.867957] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:56.513 [2024-11-02 11:46:56.868382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.513 [2024-11-02 11:46:56.868414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:56.513 [2024-11-02 11:46:56.868432] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:56.513 [2024-11-02 11:46:56.868678] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:56.513 [2024-11-02 11:46:56.868920] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:56.513 [2024-11-02 11:46:56.868949] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:56.513 [2024-11-02 11:46:56.868965] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:56.513 4441.80 IOPS, 17.35 MiB/s [2024-11-02T10:46:56.915Z] [2024-11-02 11:46:56.874253] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:56.513 [2024-11-02 11:46:56.881806] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:56.513 [2024-11-02 11:46:56.882248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.513 [2024-11-02 11:46:56.882288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:56.513 [2024-11-02 11:46:56.882306] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:56.513 [2024-11-02 11:46:56.882544] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:56.513 [2024-11-02 11:46:56.882785] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:56.513 [2024-11-02 11:46:56.882808] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:56.513 [2024-11-02 11:46:56.882823] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:56.513 [2024-11-02 11:46:56.886383] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:56.513 [2024-11-02 11:46:56.895809] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:56.513 [2024-11-02 11:46:56.896227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.513 [2024-11-02 11:46:56.896265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:56.513 [2024-11-02 11:46:56.896285] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:56.513 [2024-11-02 11:46:56.896522] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:56.513 [2024-11-02 11:46:56.896763] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:56.513 [2024-11-02 11:46:56.896787] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:56.513 [2024-11-02 11:46:56.896802] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:56.513 [2024-11-02 11:46:56.900356] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:56.513 [2024-11-02 11:46:56.909874] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:56.513 [2024-11-02 11:46:56.910289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.513 [2024-11-02 11:46:56.910322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:56.513 [2024-11-02 11:46:56.910340] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:56.513 [2024-11-02 11:46:56.910578] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:56.513 [2024-11-02 11:46:56.910820] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:56.513 [2024-11-02 11:46:56.910842] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:56.513 [2024-11-02 11:46:56.910857] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:56.773 [2024-11-02 11:46:56.914496] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:56.773 [2024-11-02 11:46:56.923792] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:56.773 [2024-11-02 11:46:56.924200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.773 [2024-11-02 11:46:56.924232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:56.773 [2024-11-02 11:46:56.924250] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:56.773 [2024-11-02 11:46:56.924501] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:56.773 [2024-11-02 11:46:56.924743] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:56.773 [2024-11-02 11:46:56.924766] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:56.773 [2024-11-02 11:46:56.924780] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:56.773 [2024-11-02 11:46:56.928331] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:56.773 [2024-11-02 11:46:56.937746] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:56.773 [2024-11-02 11:46:56.938156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.773 [2024-11-02 11:46:56.938188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:56.773 [2024-11-02 11:46:56.938206] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:56.773 [2024-11-02 11:46:56.938454] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:56.773 [2024-11-02 11:46:56.938696] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:56.773 [2024-11-02 11:46:56.938720] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:56.773 [2024-11-02 11:46:56.938734] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:56.773 [2024-11-02 11:46:56.942285] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:56.773 [2024-11-02 11:46:56.951700] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:56.773 [2024-11-02 11:46:56.952127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.773 [2024-11-02 11:46:56.952157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:56.773 [2024-11-02 11:46:56.952174] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:56.773 [2024-11-02 11:46:56.952422] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:56.773 [2024-11-02 11:46:56.952664] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:56.773 [2024-11-02 11:46:56.952687] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:56.773 [2024-11-02 11:46:56.952703] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:56.773 [2024-11-02 11:46:56.956248] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:56.773 [2024-11-02 11:46:56.965672] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:56.773 [2024-11-02 11:46:56.966064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.773 [2024-11-02 11:46:56.966101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:56.773 [2024-11-02 11:46:56.966119] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:56.773 [2024-11-02 11:46:56.966368] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:56.773 [2024-11-02 11:46:56.966610] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:56.773 [2024-11-02 11:46:56.966633] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:56.773 [2024-11-02 11:46:56.966648] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:56.773 [2024-11-02 11:46:56.970211] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:56.773 [2024-11-02 11:46:56.979651] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:56.773 [2024-11-02 11:46:56.980076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.773 [2024-11-02 11:46:56.980107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:56.773 [2024-11-02 11:46:56.980124] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:56.773 [2024-11-02 11:46:56.980373] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:56.773 [2024-11-02 11:46:56.980615] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:56.773 [2024-11-02 11:46:56.980638] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:56.773 [2024-11-02 11:46:56.980653] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:56.773 [2024-11-02 11:46:56.984197] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:56.773 [2024-11-02 11:46:56.993616] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:56.773 [2024-11-02 11:46:56.994043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.773 [2024-11-02 11:46:56.994074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:56.773 [2024-11-02 11:46:56.994092] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:56.773 [2024-11-02 11:46:56.994341] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:56.773 [2024-11-02 11:46:56.994582] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:56.773 [2024-11-02 11:46:56.994605] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:56.773 [2024-11-02 11:46:56.994620] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:56.773 [2024-11-02 11:46:56.998162] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:56.773 [2024-11-02 11:46:57.007581] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:56.773 [2024-11-02 11:46:57.007981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.774 [2024-11-02 11:46:57.008013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:56.774 [2024-11-02 11:46:57.008032] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:56.774 [2024-11-02 11:46:57.008287] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:56.774 [2024-11-02 11:46:57.008530] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:56.774 [2024-11-02 11:46:57.008554] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:56.774 [2024-11-02 11:46:57.008569] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:56.774 [2024-11-02 11:46:57.012113] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:56.774 [2024-11-02 11:46:57.021535] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:56.774 [2024-11-02 11:46:57.021953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.774 [2024-11-02 11:46:57.021984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:56.774 [2024-11-02 11:46:57.022002] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:56.774 [2024-11-02 11:46:57.022239] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:56.774 [2024-11-02 11:46:57.022490] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:56.774 [2024-11-02 11:46:57.022514] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:56.774 [2024-11-02 11:46:57.022529] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:56.774 [2024-11-02 11:46:57.026072] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:56.774 [2024-11-02 11:46:57.035491] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:56.774 [2024-11-02 11:46:57.035910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.774 [2024-11-02 11:46:57.035941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:56.774 [2024-11-02 11:46:57.035959] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:56.774 [2024-11-02 11:46:57.036195] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:56.774 [2024-11-02 11:46:57.036449] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:56.774 [2024-11-02 11:46:57.036472] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:56.774 [2024-11-02 11:46:57.036488] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:56.774 [2024-11-02 11:46:57.040031] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:56.774 [2024-11-02 11:46:57.049449] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:56.774 [2024-11-02 11:46:57.049869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.774 [2024-11-02 11:46:57.049900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:56.774 [2024-11-02 11:46:57.049917] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:56.774 [2024-11-02 11:46:57.050154] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:56.774 [2024-11-02 11:46:57.050407] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:56.774 [2024-11-02 11:46:57.050442] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:56.774 [2024-11-02 11:46:57.050458] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:56.774 [2024-11-02 11:46:57.054004] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:56.774 [2024-11-02 11:46:57.063431] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:56.774 [2024-11-02 11:46:57.063828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.774 [2024-11-02 11:46:57.063859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:56.774 [2024-11-02 11:46:57.063877] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:56.774 [2024-11-02 11:46:57.064113] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:56.774 [2024-11-02 11:46:57.064366] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:56.774 [2024-11-02 11:46:57.064390] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:56.774 [2024-11-02 11:46:57.064405] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:56.774 [2024-11-02 11:46:57.067947] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:56.774 [2024-11-02 11:46:57.077393] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:56.774 [2024-11-02 11:46:57.077820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.774 [2024-11-02 11:46:57.077851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:56.774 [2024-11-02 11:46:57.077868] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:56.774 [2024-11-02 11:46:57.078105] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:56.774 [2024-11-02 11:46:57.078359] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:56.774 [2024-11-02 11:46:57.078383] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:56.774 [2024-11-02 11:46:57.078398] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:56.774 [2024-11-02 11:46:57.081942] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:56.774 [2024-11-02 11:46:57.091327] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:56.774 [2024-11-02 11:46:57.091758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.774 [2024-11-02 11:46:57.091789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:56.774 [2024-11-02 11:46:57.091807] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:56.774 [2024-11-02 11:46:57.092043] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:56.774 [2024-11-02 11:46:57.092296] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:56.774 [2024-11-02 11:46:57.092320] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:56.774 [2024-11-02 11:46:57.092336] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:56.774 [2024-11-02 11:46:57.095890] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:56.774 [2024-11-02 11:46:57.105319] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:56.774 [2024-11-02 11:46:57.105719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.774 [2024-11-02 11:46:57.105752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:56.774 [2024-11-02 11:46:57.105770] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:56.774 [2024-11-02 11:46:57.106008] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:56.774 [2024-11-02 11:46:57.106249] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:56.774 [2024-11-02 11:46:57.106287] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:56.774 [2024-11-02 11:46:57.106310] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:56.774 [2024-11-02 11:46:57.109863] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:56.774 [2024-11-02 11:46:57.119286] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:56.774 [2024-11-02 11:46:57.119684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.774 [2024-11-02 11:46:57.119715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:56.774 [2024-11-02 11:46:57.119733] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:56.774 [2024-11-02 11:46:57.119970] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:56.774 [2024-11-02 11:46:57.120212] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:56.774 [2024-11-02 11:46:57.120234] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:56.774 [2024-11-02 11:46:57.120249] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:56.774 [2024-11-02 11:46:57.123808] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:56.774 [2024-11-02 11:46:57.133219] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:56.774 [2024-11-02 11:46:57.133629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.774 [2024-11-02 11:46:57.133661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:56.774 [2024-11-02 11:46:57.133679] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:56.774 [2024-11-02 11:46:57.133916] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:56.774 [2024-11-02 11:46:57.134157] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:56.774 [2024-11-02 11:46:57.134180] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:56.774 [2024-11-02 11:46:57.134195] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:56.774 [2024-11-02 11:46:57.137752] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:56.774 [2024-11-02 11:46:57.147172] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:56.774 [2024-11-02 11:46:57.147601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.775 [2024-11-02 11:46:57.147638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:56.775 [2024-11-02 11:46:57.147656] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:56.775 [2024-11-02 11:46:57.147894] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:56.775 [2024-11-02 11:46:57.148135] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:56.775 [2024-11-02 11:46:57.148157] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:56.775 [2024-11-02 11:46:57.148173] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:56.775 [2024-11-02 11:46:57.151728] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:56.775 [2024-11-02 11:46:57.161148] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:56.775 [2024-11-02 11:46:57.161578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.775 [2024-11-02 11:46:57.161609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:56.775 [2024-11-02 11:46:57.161627] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:56.775 [2024-11-02 11:46:57.161864] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:56.775 [2024-11-02 11:46:57.162106] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:56.775 [2024-11-02 11:46:57.162129] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:56.775 [2024-11-02 11:46:57.162143] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:56.775 [2024-11-02 11:46:57.165704] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:57.036 [2024-11-02 11:46:57.175129] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:57.036 [2024-11-02 11:46:57.175540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.036 [2024-11-02 11:46:57.175572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:57.036 [2024-11-02 11:46:57.175590] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:57.036 [2024-11-02 11:46:57.175828] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:57.036 [2024-11-02 11:46:57.176070] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:57.036 [2024-11-02 11:46:57.176093] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:57.036 [2024-11-02 11:46:57.176108] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:57.036 [2024-11-02 11:46:57.179751] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:57.036 [2024-11-02 11:46:57.189004] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:57.036 [2024-11-02 11:46:57.189416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.036 [2024-11-02 11:46:57.189448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:57.036 [2024-11-02 11:46:57.189465] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:57.036 [2024-11-02 11:46:57.189709] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:57.036 [2024-11-02 11:46:57.189951] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:57.036 [2024-11-02 11:46:57.189974] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:57.036 [2024-11-02 11:46:57.189989] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:57.036 [2024-11-02 11:46:57.193541] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:57.036 [2024-11-02 11:46:57.202960] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:57.036 [2024-11-02 11:46:57.203348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.036 [2024-11-02 11:46:57.203380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:57.036 [2024-11-02 11:46:57.203397] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:57.036 [2024-11-02 11:46:57.203635] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:57.036 [2024-11-02 11:46:57.203877] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:57.036 [2024-11-02 11:46:57.203900] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:57.036 [2024-11-02 11:46:57.203915] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:57.036 [2024-11-02 11:46:57.207467] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:57.036 [2024-11-02 11:46:57.216911] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:57.036 [2024-11-02 11:46:57.217319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.036 [2024-11-02 11:46:57.217351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:57.036 [2024-11-02 11:46:57.217370] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:57.037 [2024-11-02 11:46:57.217607] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:57.037 [2024-11-02 11:46:57.217849] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:57.037 [2024-11-02 11:46:57.217873] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:57.037 [2024-11-02 11:46:57.217888] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:57.037 [2024-11-02 11:46:57.221449] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:57.037 [2024-11-02 11:46:57.230872] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:57.037 [2024-11-02 11:46:57.231311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.037 [2024-11-02 11:46:57.231342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:57.037 [2024-11-02 11:46:57.231360] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:57.037 [2024-11-02 11:46:57.231596] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:57.037 [2024-11-02 11:46:57.231838] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:57.037 [2024-11-02 11:46:57.231867] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:57.037 [2024-11-02 11:46:57.231883] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:57.037 [2024-11-02 11:46:57.235435] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:57.037 [2024-11-02 11:46:57.244862] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:57.037 [2024-11-02 11:46:57.245299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.037 [2024-11-02 11:46:57.245331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:57.037 [2024-11-02 11:46:57.245349] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:57.037 [2024-11-02 11:46:57.245586] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:57.037 [2024-11-02 11:46:57.245828] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:57.037 [2024-11-02 11:46:57.245851] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:57.037 [2024-11-02 11:46:57.245865] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:57.037 [2024-11-02 11:46:57.249421] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:57.037 [2024-11-02 11:46:57.258855] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:57.037 [2024-11-02 11:46:57.259279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.037 [2024-11-02 11:46:57.259322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:57.037 [2024-11-02 11:46:57.259340] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:57.037 [2024-11-02 11:46:57.259576] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:57.037 [2024-11-02 11:46:57.259818] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:57.037 [2024-11-02 11:46:57.259841] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:57.037 [2024-11-02 11:46:57.259855] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:57.037 [2024-11-02 11:46:57.263414] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:57.037 [2024-11-02 11:46:57.272860] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:57.037 [2024-11-02 11:46:57.273329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.037 [2024-11-02 11:46:57.273361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:57.037 [2024-11-02 11:46:57.273379] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:57.037 [2024-11-02 11:46:57.273616] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:57.037 [2024-11-02 11:46:57.273856] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:57.037 [2024-11-02 11:46:57.273879] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:57.037 [2024-11-02 11:46:57.273894] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:57.037 [2024-11-02 11:46:57.277465] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:57.037 [2024-11-02 11:46:57.286679] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:57.037 [2024-11-02 11:46:57.287104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.037 [2024-11-02 11:46:57.287152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:57.037 [2024-11-02 11:46:57.287169] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:57.037 [2024-11-02 11:46:57.287415] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:57.037 [2024-11-02 11:46:57.287657] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:57.037 [2024-11-02 11:46:57.287680] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:57.037 [2024-11-02 11:46:57.287695] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:57.037 [2024-11-02 11:46:57.291245] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:57.037 [2024-11-02 11:46:57.300666] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:57.037 [2024-11-02 11:46:57.301162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.037 [2024-11-02 11:46:57.301210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:57.037 [2024-11-02 11:46:57.301228] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:57.037 [2024-11-02 11:46:57.301472] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:57.037 [2024-11-02 11:46:57.301714] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:57.037 [2024-11-02 11:46:57.301737] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:57.037 [2024-11-02 11:46:57.301752] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:57.037 [2024-11-02 11:46:57.305307] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:57.037 [2024-11-02 11:46:57.314520] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:57.037 [2024-11-02 11:46:57.314982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.037 [2024-11-02 11:46:57.315030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:57.037 [2024-11-02 11:46:57.315048] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:57.037 [2024-11-02 11:46:57.315294] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:57.037 [2024-11-02 11:46:57.315536] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:57.037 [2024-11-02 11:46:57.315559] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:57.037 [2024-11-02 11:46:57.315574] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:57.037 [2024-11-02 11:46:57.319116] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:57.037 [2024-11-02 11:46:57.328332] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:57.037 [2024-11-02 11:46:57.328823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.037 [2024-11-02 11:46:57.328859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:57.037 [2024-11-02 11:46:57.328877] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:57.037 [2024-11-02 11:46:57.329114] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:57.037 [2024-11-02 11:46:57.329367] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:57.037 [2024-11-02 11:46:57.329399] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:57.037 [2024-11-02 11:46:57.329413] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:57.037 [2024-11-02 11:46:57.332959] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:57.037 [2024-11-02 11:46:57.342189] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:57.037 [2024-11-02 11:46:57.342646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.037 [2024-11-02 11:46:57.342677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:57.037 [2024-11-02 11:46:57.342694] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:57.037 [2024-11-02 11:46:57.342931] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:57.037 [2024-11-02 11:46:57.343172] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:57.037 [2024-11-02 11:46:57.343195] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:57.037 [2024-11-02 11:46:57.343210] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:57.037 [2024-11-02 11:46:57.346764] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:57.037 [2024-11-02 11:46:57.356183] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:57.037 [2024-11-02 11:46:57.356588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.037 [2024-11-02 11:46:57.356619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:57.038 [2024-11-02 11:46:57.356636] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:57.038 [2024-11-02 11:46:57.356873] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:57.038 [2024-11-02 11:46:57.357114] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:57.038 [2024-11-02 11:46:57.357137] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:57.038 [2024-11-02 11:46:57.357152] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:57.038 [2024-11-02 11:46:57.360711] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:57.038 [2024-11-02 11:46:57.370153] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:57.038 [2024-11-02 11:46:57.370558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.038 [2024-11-02 11:46:57.370589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:57.038 [2024-11-02 11:46:57.370607] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:57.038 [2024-11-02 11:46:57.370850] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:57.038 [2024-11-02 11:46:57.371091] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:57.038 [2024-11-02 11:46:57.371114] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:57.038 [2024-11-02 11:46:57.371128] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:57.038 [2024-11-02 11:46:57.374696] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:57.038 [2024-11-02 11:46:57.384122] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:57.038 [2024-11-02 11:46:57.384550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.038 [2024-11-02 11:46:57.384582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:57.038 [2024-11-02 11:46:57.384599] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:57.038 [2024-11-02 11:46:57.384836] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:57.038 [2024-11-02 11:46:57.385078] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:57.038 [2024-11-02 11:46:57.385101] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:57.038 [2024-11-02 11:46:57.385115] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:57.038 [2024-11-02 11:46:57.388671] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:57.038 [2024-11-02 11:46:57.398097] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:57.038 [2024-11-02 11:46:57.398542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.038 [2024-11-02 11:46:57.398574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:57.038 [2024-11-02 11:46:57.398591] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:57.038 [2024-11-02 11:46:57.398829] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:57.038 [2024-11-02 11:46:57.399070] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:57.038 [2024-11-02 11:46:57.399094] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:57.038 [2024-11-02 11:46:57.399108] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:57.038 [2024-11-02 11:46:57.402665] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:57.038 [2024-11-02 11:46:57.412087] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:57.038 [2024-11-02 11:46:57.412471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.038 [2024-11-02 11:46:57.412502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:57.038 [2024-11-02 11:46:57.412520] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:57.038 [2024-11-02 11:46:57.412757] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:57.038 [2024-11-02 11:46:57.412999] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:57.038 [2024-11-02 11:46:57.413027] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:57.038 [2024-11-02 11:46:57.413043] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:57.038 [2024-11-02 11:46:57.416600] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:57.038 [2024-11-02 11:46:57.426034] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:57.038 [2024-11-02 11:46:57.426469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.038 [2024-11-02 11:46:57.426500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:57.038 [2024-11-02 11:46:57.426518] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:57.038 [2024-11-02 11:46:57.426755] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:57.038 [2024-11-02 11:46:57.426996] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:57.038 [2024-11-02 11:46:57.427018] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:57.038 [2024-11-02 11:46:57.427033] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:57.038 [2024-11-02 11:46:57.430595] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:57.298 [2024-11-02 11:46:57.440027] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:57.298 [2024-11-02 11:46:57.440462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.298 [2024-11-02 11:46:57.440495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:57.298 [2024-11-02 11:46:57.440513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:57.298 [2024-11-02 11:46:57.440751] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:57.298 [2024-11-02 11:46:57.440992] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:57.298 [2024-11-02 11:46:57.441015] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:57.298 [2024-11-02 11:46:57.441030] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:57.298 [2024-11-02 11:46:57.444685] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:57.298 [2024-11-02 11:46:57.453903] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:57.298 [2024-11-02 11:46:57.454331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.298 [2024-11-02 11:46:57.454363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:57.298 [2024-11-02 11:46:57.454381] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:57.298 [2024-11-02 11:46:57.454618] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:57.298 [2024-11-02 11:46:57.454860] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:57.298 [2024-11-02 11:46:57.454883] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:57.298 [2024-11-02 11:46:57.454897] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:57.298 [2024-11-02 11:46:57.458466] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:57.298 [2024-11-02 11:46:57.467891] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:57.298 [2024-11-02 11:46:57.468292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.298 [2024-11-02 11:46:57.468323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:57.298 [2024-11-02 11:46:57.468341] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:57.298 [2024-11-02 11:46:57.468578] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:57.298 [2024-11-02 11:46:57.468819] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:57.298 [2024-11-02 11:46:57.468842] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:57.299 [2024-11-02 11:46:57.468857] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:57.299 [2024-11-02 11:46:57.472430] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:57.299 [2024-11-02 11:46:57.481858] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:57.299 [2024-11-02 11:46:57.482286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.299 [2024-11-02 11:46:57.482318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:57.299 [2024-11-02 11:46:57.482335] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:57.299 [2024-11-02 11:46:57.482572] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:57.299 [2024-11-02 11:46:57.482813] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:57.299 [2024-11-02 11:46:57.482836] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:57.299 [2024-11-02 11:46:57.482851] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:57.299 [2024-11-02 11:46:57.486411] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:57.299 [2024-11-02 11:46:57.495834] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:57.299 [2024-11-02 11:46:57.496269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.299 [2024-11-02 11:46:57.496301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:57.299 [2024-11-02 11:46:57.496319] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:57.299 [2024-11-02 11:46:57.496556] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:57.299 [2024-11-02 11:46:57.496797] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:57.299 [2024-11-02 11:46:57.496820] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:57.299 [2024-11-02 11:46:57.496835] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:57.299 [2024-11-02 11:46:57.500397] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
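The repeated blocks above are the host's bdev_nvme reconnect path: each attempt calls connect() on 10.0.0.2:4420, gets errno 111 (ECONNREFUSED) because nothing is accepting NVMe/TCP connections on that port while the target is being restarted in the trace that follows, and the controller reset is marked failed before the next retry. The same condition can be observed from the shell with a small probe loop; this is an illustrative sketch only (not part of the test scripts), assuming bash with /dev/tcp support and that it is run from the same network namespace as the initiator:

wait_for_nvme_tcp_listener() {
    local addr=$1 port=$2 tries=${3:-50}
    local i
    for ((i = 0; i < tries; i++)); do
        # A failed connect() here is the same ECONNREFUSED (errno 111) that
        # posix_sock_create reports in the log entries above.
        if (exec 3<>"/dev/tcp/${addr}/${port}") 2>/dev/null; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}
wait_for_nvme_tcp_listener 10.0.0.2 4420 || echo "no NVMe/TCP listener on 10.0.0.2:4420 yet"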
00:34:57.299 [2024-11-02 11:46:57.509834] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:57.299 [2024-11-02 11:46:57.510241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.299 [2024-11-02 11:46:57.510283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:57.299 [2024-11-02 11:46:57.510301] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:57.299 [2024-11-02 11:46:57.510539] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:57.299 [2024-11-02 11:46:57.510785] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:57.299 [2024-11-02 11:46:57.510808] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:57.299 [2024-11-02 11:46:57.510822] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:57.299 [2024-11-02 11:46:57.514386] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:57.299 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3977581 Killed "${NVMF_APP[@]}" "$@" 00:34:57.299 11:46:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:34:57.299 11:46:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:34:57.299 11:46:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:57.299 11:46:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:57.299 11:46:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:57.299 11:46:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3978536 00:34:57.299 11:46:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:34:57.299 11:46:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3978536 00:34:57.299 11:46:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 3978536 ']' 00:34:57.299 11:46:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:57.299 11:46:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:57.299 [2024-11-02 11:46:57.523830] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:57.299 11:46:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:57.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:34:57.299 11:46:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:57.299 11:46:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:57.299 [2024-11-02 11:46:57.524250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.299 [2024-11-02 11:46:57.524290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:57.299 [2024-11-02 11:46:57.524308] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:57.299 [2024-11-02 11:46:57.524545] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:57.299 [2024-11-02 11:46:57.524786] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:57.299 [2024-11-02 11:46:57.524810] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:57.299 [2024-11-02 11:46:57.524826] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:57.299 [2024-11-02 11:46:57.528388] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:57.299 [2024-11-02 11:46:57.537819] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:57.299 [2024-11-02 11:46:57.538245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.299 [2024-11-02 11:46:57.538286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:57.299 [2024-11-02 11:46:57.538304] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:57.299 [2024-11-02 11:46:57.538541] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:57.299 [2024-11-02 11:46:57.538782] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:57.299 [2024-11-02 11:46:57.538806] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:57.299 [2024-11-02 11:46:57.538821] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:57.299 [2024-11-02 11:46:57.542381] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
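Interleaved with the reconnect errors, the shell trace above shows the test's tgt_init step: the previous nvmf_tgt instance (pid 3977581) was killed, a new one is launched inside the cvl_0_0_ns_spdk network namespace with core mask 0xE, and waitforlisten blocks until the new pid (3978536) answers on the default RPC socket. A rough sketch of what such a readiness wait boils down to, assuming scripts/rpc.py and the default /var/tmp/spdk.sock socket (the real waitforlisten helper lives in the autotest common scripts and does more than this):

pid=3978536   # pid printed by nvmfappstart above
for ((i = 0; i < 100; i++)); do
    # Give up early if the target died instead of coming up.
    kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt (pid $pid) is gone"; break; }
    # rpc_get_methods succeeds only once the RPC server is accepting requests.
    if scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
        echo "nvmf_tgt is listening on /var/tmp/spdk.sock"
        break
    fi
    sleep 0.1
done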
00:34:57.299 [2024-11-02 11:46:57.551215] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:57.299 [2024-11-02 11:46:57.551592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.299 [2024-11-02 11:46:57.551621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:57.299 [2024-11-02 11:46:57.551636] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:57.299 [2024-11-02 11:46:57.551864] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:57.299 [2024-11-02 11:46:57.552120] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:57.299 [2024-11-02 11:46:57.552141] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:57.299 [2024-11-02 11:46:57.552155] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:57.299 [2024-11-02 11:46:57.555567] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:57.299 [2024-11-02 11:46:57.564666] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:57.299 [2024-11-02 11:46:57.565105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.299 [2024-11-02 11:46:57.565148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:57.299 [2024-11-02 11:46:57.565164] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:57.299 [2024-11-02 11:46:57.565404] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:57.299 [2024-11-02 11:46:57.565651] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:57.299 [2024-11-02 11:46:57.565670] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:57.299 [2024-11-02 11:46:57.565683] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:57.299 [2024-11-02 11:46:57.568706] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:57.299 [2024-11-02 11:46:57.572985] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 
00:34:57.299 [2024-11-02 11:46:57.573042] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:57.299 [2024-11-02 11:46:57.578053] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:57.299 [2024-11-02 11:46:57.578436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.299 [2024-11-02 11:46:57.578465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:57.299 [2024-11-02 11:46:57.578481] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:57.299 [2024-11-02 11:46:57.578707] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:57.299 [2024-11-02 11:46:57.578921] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:57.299 [2024-11-02 11:46:57.578940] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:57.299 [2024-11-02 11:46:57.578952] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:57.299 [2024-11-02 11:46:57.582010] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:57.299 [2024-11-02 11:46:57.591213] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:57.300 [2024-11-02 11:46:57.591604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.300 [2024-11-02 11:46:57.591646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:57.300 [2024-11-02 11:46:57.591661] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:57.300 [2024-11-02 11:46:57.591893] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:57.300 [2024-11-02 11:46:57.592091] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:57.300 [2024-11-02 11:46:57.592110] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:57.300 [2024-11-02 11:46:57.592122] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:57.300 [2024-11-02 11:46:57.595123] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
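The DPDK EAL parameters line above shows the core mask the restarted target was given (-c 0xE, mirroring the -m 0xE passed to nvmfappstart). 0xE is binary 1110, i.e. cores 1 through 3, which is why the app later reports three available cores and starts reactors on cores 1, 2 and 3. A quick way to decode such a mask, shown purely as an illustration:

# 0xE == 0b1110; bit n set means core n runs an SPDK reactor.
python3 -c 'mask = 0xE; print([core for core in range(16) if mask >> core & 1])'
# prints: [1, 2, 3]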
00:34:57.300 [2024-11-02 11:46:57.604494] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:57.300 [2024-11-02 11:46:57.604975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.300 [2024-11-02 11:46:57.605017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:57.300 [2024-11-02 11:46:57.605034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:57.300 [2024-11-02 11:46:57.605304] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:57.300 [2024-11-02 11:46:57.605515] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:57.300 [2024-11-02 11:46:57.605536] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:57.300 [2024-11-02 11:46:57.605549] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:57.300 [2024-11-02 11:46:57.608535] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:57.300 [2024-11-02 11:46:57.618388] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:57.300 [2024-11-02 11:46:57.618833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.300 [2024-11-02 11:46:57.618880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:57.300 [2024-11-02 11:46:57.618897] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:57.300 [2024-11-02 11:46:57.619141] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:57.300 [2024-11-02 11:46:57.619399] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:57.300 [2024-11-02 11:46:57.619421] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:57.300 [2024-11-02 11:46:57.619434] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:57.300 [2024-11-02 11:46:57.622972] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:57.300 [2024-11-02 11:46:57.632237] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:57.300 [2024-11-02 11:46:57.632693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.300 [2024-11-02 11:46:57.632725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:57.300 [2024-11-02 11:46:57.632743] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:57.300 [2024-11-02 11:46:57.632981] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:57.300 [2024-11-02 11:46:57.633223] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:57.300 [2024-11-02 11:46:57.633245] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:57.300 [2024-11-02 11:46:57.633271] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:57.300 [2024-11-02 11:46:57.636740] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:57.300 [2024-11-02 11:46:57.645968] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:57.300 [2024-11-02 11:46:57.646390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.300 [2024-11-02 11:46:57.646422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:57.300 [2024-11-02 11:46:57.646440] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:57.300 [2024-11-02 11:46:57.646677] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:57.300 [2024-11-02 11:46:57.646919] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:57.300 [2024-11-02 11:46:57.646943] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:57.300 [2024-11-02 11:46:57.646957] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:57.300 [2024-11-02 11:46:57.650442] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:57.300 [2024-11-02 11:46:57.653349] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:57.300 [2024-11-02 11:46:57.659716] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:57.300 [2024-11-02 11:46:57.660190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.300 [2024-11-02 11:46:57.660220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:57.300 [2024-11-02 11:46:57.660237] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:57.300 [2024-11-02 11:46:57.660509] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:57.300 [2024-11-02 11:46:57.660764] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:57.300 [2024-11-02 11:46:57.660789] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:57.300 [2024-11-02 11:46:57.660805] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:57.300 [2024-11-02 11:46:57.664342] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:57.300 [2024-11-02 11:46:57.673681] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:57.300 [2024-11-02 11:46:57.674221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.300 [2024-11-02 11:46:57.674278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:57.300 [2024-11-02 11:46:57.674317] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:57.300 [2024-11-02 11:46:57.674563] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:57.300 [2024-11-02 11:46:57.674823] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:57.300 [2024-11-02 11:46:57.674847] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:57.300 [2024-11-02 11:46:57.674864] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:57.300 [2024-11-02 11:46:57.678380] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:57.300 [2024-11-02 11:46:57.687619] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:57.300 [2024-11-02 11:46:57.688091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.300 [2024-11-02 11:46:57.688133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:57.300 [2024-11-02 11:46:57.688151] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:57.300 [2024-11-02 11:46:57.688410] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:57.300 [2024-11-02 11:46:57.688653] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:57.300 [2024-11-02 11:46:57.688677] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:57.300 [2024-11-02 11:46:57.688692] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:57.300 [2024-11-02 11:46:57.692174] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:57.560 [2024-11-02 11:46:57.701562] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:57.560 [2024-11-02 11:46:57.701994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.560 [2024-11-02 11:46:57.702025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:57.560 [2024-11-02 11:46:57.702041] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:57.560 [2024-11-02 11:46:57.702267] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:57.560 [2024-11-02 11:46:57.702486] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:57.560 [2024-11-02 11:46:57.702515] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:57.560 [2024-11-02 11:46:57.702530] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:57.560 [2024-11-02 11:46:57.702909] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:57.560 [2024-11-02 11:46:57.702946] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:57.560 [2024-11-02 11:46:57.702963] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:57.560 [2024-11-02 11:46:57.702976] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:57.560 [2024-11-02 11:46:57.702987] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:57.560 [2024-11-02 11:46:57.704480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:57.560 [2024-11-02 11:46:57.704537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:57.560 [2024-11-02 11:46:57.704541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:57.560 [2024-11-02 11:46:57.705965] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:57.560 [2024-11-02 11:46:57.715030] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:57.560 [2024-11-02 11:46:57.715607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.560 [2024-11-02 11:46:57.715646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:57.560 [2024-11-02 11:46:57.715665] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:57.560 [2024-11-02 11:46:57.715903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:57.560 [2024-11-02 11:46:57.716118] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:57.560 [2024-11-02 11:46:57.716139] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:57.615 [2024-11-02 11:46:57.716155] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:57.615 [2024-11-02 11:46:57.719336] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:57.615 [2024-11-02 11:46:57.728646] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:57.615 [2024-11-02 11:46:57.729221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.615 [2024-11-02 11:46:57.729265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:57.615 [2024-11-02 11:46:57.729286] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:57.615 [2024-11-02 11:46:57.729508] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:57.615 [2024-11-02 11:46:57.729741] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:57.615 [2024-11-02 11:46:57.729763] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:57.615 [2024-11-02 11:46:57.729778] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:57.615 [2024-11-02 11:46:57.732925] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:57.615 [2024-11-02 11:46:57.742120] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:57.615 [2024-11-02 11:46:57.742689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.615 [2024-11-02 11:46:57.742763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:57.615 [2024-11-02 11:46:57.742785] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:57.615 [2024-11-02 11:46:57.743017] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:57.615 [2024-11-02 11:46:57.743232] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:57.615 [2024-11-02 11:46:57.743253] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:57.615 [2024-11-02 11:46:57.743297] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:57.615 [2024-11-02 11:46:57.746455] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:57.615 [2024-11-02 11:46:57.755752] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:57.615 [2024-11-02 11:46:57.756303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.615 [2024-11-02 11:46:57.756340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:57.615 [2024-11-02 11:46:57.756360] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:57.615 [2024-11-02 11:46:57.756596] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:57.616 [2024-11-02 11:46:57.756810] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:57.616 [2024-11-02 11:46:57.756831] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:57.616 [2024-11-02 11:46:57.756847] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:57.616 [2024-11-02 11:46:57.759999] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:57.616 [2024-11-02 11:46:57.769300] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:57.616 [2024-11-02 11:46:57.769839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.616 [2024-11-02 11:46:57.769877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:57.616 [2024-11-02 11:46:57.769897] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:57.616 [2024-11-02 11:46:57.770133] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:57.616 [2024-11-02 11:46:57.770379] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:57.616 [2024-11-02 11:46:57.770402] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:57.616 [2024-11-02 11:46:57.770418] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:57.616 [2024-11-02 11:46:57.773617] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:57.616 [2024-11-02 11:46:57.782896] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:57.616 [2024-11-02 11:46:57.783449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.616 [2024-11-02 11:46:57.783485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:57.616 [2024-11-02 11:46:57.783505] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:57.616 [2024-11-02 11:46:57.783754] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:57.616 [2024-11-02 11:46:57.783969] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:57.616 [2024-11-02 11:46:57.783990] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:57.616 [2024-11-02 11:46:57.784005] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:57.616 [2024-11-02 11:46:57.787150] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:57.616 [2024-11-02 11:46:57.796441] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:57.616 [2024-11-02 11:46:57.796845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.616 [2024-11-02 11:46:57.796874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:57.616 [2024-11-02 11:46:57.796891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:57.616 [2024-11-02 11:46:57.797120] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:57.616 [2024-11-02 11:46:57.797361] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:57.616 [2024-11-02 11:46:57.797383] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:57.616 [2024-11-02 11:46:57.797397] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:57.616 [2024-11-02 11:46:57.800544] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:57.616 [2024-11-02 11:46:57.809980] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:57.616 [2024-11-02 11:46:57.810359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.616 [2024-11-02 11:46:57.810387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:57.616 [2024-11-02 11:46:57.810403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:57.616 [2024-11-02 11:46:57.810616] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:57.616 [2024-11-02 11:46:57.810834] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:57.616 [2024-11-02 11:46:57.810855] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:57.616 [2024-11-02 11:46:57.810869] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:57.616 [2024-11-02 11:46:57.814146] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:57.616 [2024-11-02 11:46:57.823460] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:57.616 [2024-11-02 11:46:57.823857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.616 [2024-11-02 11:46:57.823887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:57.616 [2024-11-02 11:46:57.823904] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:57.616 [2024-11-02 11:46:57.824134] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:57.616 [2024-11-02 11:46:57.824377] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:57.616 [2024-11-02 11:46:57.824404] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:57.616 [2024-11-02 11:46:57.824419] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:57.616 [2024-11-02 11:46:57.827579] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:57.616 [2024-11-02 11:46:57.837002] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:57.616 [2024-11-02 11:46:57.837372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.616 [2024-11-02 11:46:57.837400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:57.616 [2024-11-02 11:46:57.837417] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:57.616 [2024-11-02 11:46:57.837631] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:57.616 [2024-11-02 11:46:57.837849] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:57.616 [2024-11-02 11:46:57.837870] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:57.616 [2024-11-02 11:46:57.837884] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:57.616 [2024-11-02 11:46:57.841002] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:57.616 [2024-11-02 11:46:57.850341] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:57.616 [2024-11-02 11:46:57.850751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.616 [2024-11-02 11:46:57.850779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:57.616 [2024-11-02 11:46:57.850795] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:57.616 [2024-11-02 11:46:57.851008] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:57.616 [2024-11-02 11:46:57.851226] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:57.616 [2024-11-02 11:46:57.851247] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:57.616 [2024-11-02 11:46:57.851270] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:57.616 11:46:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:57.616 11:46:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:34:57.616 11:46:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:57.616 11:46:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:57.616 11:46:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:57.616 [2024-11-02 11:46:57.854455] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:57.616 [2024-11-02 11:46:57.863915] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:57.616 [2024-11-02 11:46:57.864321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.616 [2024-11-02 11:46:57.864349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:57.616 [2024-11-02 11:46:57.864365] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:57.616 [2024-11-02 11:46:57.864579] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:57.616 [2024-11-02 11:46:57.864815] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:57.616 [2024-11-02 11:46:57.864836] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:57.616 [2024-11-02 11:46:57.864850] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:57.616 [2024-11-02 11:46:57.868041] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:57.616 3701.50 IOPS, 14.46 MiB/s [2024-11-02T10:46:58.018Z] 11:46:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:57.616 11:46:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:57.616 11:46:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:57.616 11:46:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:57.616 [2024-11-02 11:46:57.878937] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:57.616 [2024-11-02 11:46:57.879313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.616 [2024-11-02 11:46:57.879341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:57.616 [2024-11-02 11:46:57.879357] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:57.616 [2024-11-02 11:46:57.879587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:57.616 [2024-11-02 11:46:57.879798] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:57.616 [2024-11-02 11:46:57.879818] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:57.617 [2024-11-02 11:46:57.879832] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:57.617 [2024-11-02 11:46:57.880243] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:57.617 [2024-11-02 11:46:57.883051] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:57.617 11:46:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:57.617 11:46:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:57.617 11:46:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:57.617 11:46:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:57.617 [2024-11-02 11:46:57.892434] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:57.617 [2024-11-02 11:46:57.892837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.617 [2024-11-02 11:46:57.892866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:57.617 [2024-11-02 11:46:57.892883] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:57.617 [2024-11-02 11:46:57.893114] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:57.617 [2024-11-02 11:46:57.893356] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:57.617 [2024-11-02 11:46:57.893378] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:57.617 [2024-11-02 11:46:57.893392] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:57.617 [2024-11-02 11:46:57.896656] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:57.617 [2024-11-02 11:46:57.905838] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:57.617 [2024-11-02 11:46:57.906269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.617 [2024-11-02 11:46:57.906309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:57.617 [2024-11-02 11:46:57.906325] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:57.617 [2024-11-02 11:46:57.906554] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:57.617 [2024-11-02 11:46:57.906759] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:57.617 [2024-11-02 11:46:57.906779] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:57.617 [2024-11-02 11:46:57.906792] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:57.617 [2024-11-02 11:46:57.909932] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:57.617 [2024-11-02 11:46:57.919404] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:57.617 [2024-11-02 11:46:57.919949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.617 [2024-11-02 11:46:57.919983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:57.617 [2024-11-02 11:46:57.920002] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:57.617 [2024-11-02 11:46:57.920238] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:57.617 [2024-11-02 11:46:57.920482] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:57.617 [2024-11-02 11:46:57.920505] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:57.617 [2024-11-02 11:46:57.920521] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:57.617 Malloc0 00:34:57.617 11:46:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:57.617 11:46:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:57.617 11:46:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:57.617 11:46:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:57.617 [2024-11-02 11:46:57.923858] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:57.617 11:46:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:57.617 11:46:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:57.617 11:46:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:57.617 11:46:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:57.617 [2024-11-02 11:46:57.933027] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:57.617 [2024-11-02 11:46:57.933432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.617 [2024-11-02 11:46:57.933461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220a970 with addr=10.0.0.2, port=4420 00:34:57.617 [2024-11-02 11:46:57.933477] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220a970 is same with the state(6) to be set 00:34:57.617 [2024-11-02 11:46:57.933705] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220a970 (9): Bad file descriptor 00:34:57.617 [2024-11-02 11:46:57.933916] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:57.617 [2024-11-02 11:46:57.933944] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:57.617 [2024-11-02 11:46:57.933958] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
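Taken together, the rpc_cmd calls interleaved above (nvmf_create_transport, bdev_malloc_create, nvmf_create_subsystem, nvmf_subsystem_add_ns) and the nvmf_subsystem_add_listener call a few lines further down make up the target bring-up for this bdevperf run. A consolidated sketch of the same sequence, issued directly with scripts/rpc.py against an already-running nvmf_tgt (the rpc.py path and the default RPC socket /var/tmp/spdk.sock are assumptions; the arguments are copied from the log):

    # same RPCs the test issues via rpc_cmd, in order
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0      # 64 MiB malloc bdev, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420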
00:34:57.617 [2024-11-02 11:46:57.937178] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:57.617 11:46:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:57.617 11:46:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:57.617 11:46:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:57.617 11:46:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:57.617 [2024-11-02 11:46:57.941640] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:57.617 11:46:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:57.617 11:46:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3977784 00:34:57.617 [2024-11-02 11:46:57.946596] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:57.876 [2024-11-02 11:46:58.020019] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 00:34:59.765 4246.86 IOPS, 16.59 MiB/s [2024-11-02T10:47:01.101Z] 4800.00 IOPS, 18.75 MiB/s [2024-11-02T10:47:02.038Z] 5217.78 IOPS, 20.38 MiB/s [2024-11-02T10:47:02.973Z] 5554.20 IOPS, 21.70 MiB/s [2024-11-02T10:47:03.910Z] 5838.36 IOPS, 22.81 MiB/s [2024-11-02T10:47:05.287Z] 6080.00 IOPS, 23.75 MiB/s [2024-11-02T10:47:06.225Z] 6285.00 IOPS, 24.55 MiB/s [2024-11-02T10:47:07.162Z] 6454.21 IOPS, 25.21 MiB/s [2024-11-02T10:47:07.162Z] 6610.07 IOPS, 25.82 MiB/s 00:35:06.760 Latency(us) 00:35:06.760 [2024-11-02T10:47:07.162Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:06.760 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:06.760 Verification LBA range: start 0x0 length 0x4000 00:35:06.760 Nvme1n1 : 15.01 6608.92 25.82 8682.98 0.00 8345.01 922.36 16408.27 00:35:06.760 [2024-11-02T10:47:07.162Z] =================================================================================================================== 00:35:06.760 [2024-11-02T10:47:07.162Z] Total : 6608.92 25.82 8682.98 0.00 8345.01 922.36 16408.27 00:35:06.760 11:47:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:35:06.760 11:47:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:06.760 11:47:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:06.760 11:47:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:06.760 11:47:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:06.760 11:47:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:35:06.760 11:47:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:35:06.760 11:47:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:06.760 11:47:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:35:06.760 11:47:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:06.760 11:47:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:35:06.760 11:47:07 nvmf_tcp.nvmf_host.nvmf_bdevperf 
-- nvmf/common.sh@125 -- # for i in {1..20} 00:35:06.760 11:47:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:06.760 rmmod nvme_tcp 00:35:06.760 rmmod nvme_fabrics 00:35:06.760 rmmod nvme_keyring 00:35:06.761 11:47:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:06.761 11:47:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:35:06.761 11:47:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:35:06.761 11:47:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 3978536 ']' 00:35:06.761 11:47:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 3978536 00:35:06.761 11:47:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@952 -- # '[' -z 3978536 ']' 00:35:06.761 11:47:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # kill -0 3978536 00:35:06.761 11:47:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@957 -- # uname 00:35:06.761 11:47:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:06.761 11:47:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3978536 00:35:07.018 11:47:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:35:07.018 11:47:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:35:07.018 11:47:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3978536' 00:35:07.018 killing process with pid 3978536 00:35:07.018 11:47:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@971 -- # kill 3978536 00:35:07.018 11:47:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@976 -- # wait 3978536 00:35:07.018 11:47:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:07.018 11:47:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:07.018 11:47:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:07.018 11:47:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:35:07.018 11:47:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:35:07.018 11:47:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:07.018 11:47:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:35:07.018 11:47:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:07.018 11:47:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:07.018 11:47:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:07.018 11:47:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:07.018 11:47:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:09.554 11:47:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:09.554 00:35:09.554 real 0m22.121s 00:35:09.554 user 0m58.592s 00:35:09.554 sys 0m4.389s 00:35:09.554 11:47:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:09.554 11:47:09 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:09.554 ************************************ 00:35:09.554 END TEST nvmf_bdevperf 00:35:09.554 ************************************ 00:35:09.554 11:47:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:35:09.554 11:47:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:35:09.554 11:47:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:09.554 11:47:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.554 ************************************ 00:35:09.554 START TEST nvmf_target_disconnect 00:35:09.554 ************************************ 00:35:09.554 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:35:09.554 * Looking for test storage... 00:35:09.554 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:09.554 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:09.554 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:35:09.554 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:09.554 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:09.554 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:09.554 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:09.554 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:09.554 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:35:09.554 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:35:09.554 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:35:09.554 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:35:09.554 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:35:09.554 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:35:09.554 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:35:09.554 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:09.554 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:35:09.554 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:35:09.554 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:09.554 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:09.554 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:35:09.554 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:35:09.554 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:09.554 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:35:09.554 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:35:09.554 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:35:09.554 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:35:09.554 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:09.554 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:35:09.554 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:35:09.554 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:09.554 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:09.554 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:35:09.554 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:09.554 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:09.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:09.554 --rc genhtml_branch_coverage=1 00:35:09.554 --rc genhtml_function_coverage=1 00:35:09.554 --rc genhtml_legend=1 00:35:09.554 --rc geninfo_all_blocks=1 00:35:09.554 --rc geninfo_unexecuted_blocks=1 00:35:09.554 00:35:09.554 ' 00:35:09.554 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:09.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:09.554 --rc genhtml_branch_coverage=1 00:35:09.554 --rc genhtml_function_coverage=1 00:35:09.554 --rc genhtml_legend=1 00:35:09.554 --rc geninfo_all_blocks=1 00:35:09.554 --rc geninfo_unexecuted_blocks=1 00:35:09.554 00:35:09.554 ' 00:35:09.554 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:09.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:09.554 --rc genhtml_branch_coverage=1 00:35:09.554 --rc genhtml_function_coverage=1 00:35:09.554 --rc genhtml_legend=1 00:35:09.554 --rc geninfo_all_blocks=1 00:35:09.554 --rc geninfo_unexecuted_blocks=1 00:35:09.554 00:35:09.554 ' 00:35:09.554 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:09.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:09.554 --rc genhtml_branch_coverage=1 00:35:09.554 --rc genhtml_function_coverage=1 00:35:09.554 --rc genhtml_legend=1 00:35:09.554 --rc geninfo_all_blocks=1 00:35:09.554 --rc geninfo_unexecuted_blocks=1 00:35:09.554 00:35:09.554 ' 00:35:09.554 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:09.554 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:35:09.554 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:09.554 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:09.554 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:09.554 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:09.554 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:09.554 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:09.554 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:09.554 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:09.554 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:09.554 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:09.554 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:09.554 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:09.554 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:09.554 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:09.554 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:09.554 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:09.554 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:09.554 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:35:09.554 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:09.554 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:09.554 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:09.554 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:09.555 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:09.555 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:09.555 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:35:09.555 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:09.555 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:35:09.555 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:09.555 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:09.555 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:09.555 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:09.555 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:09.555 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:09.555 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:09.555 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:09.555 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:09.555 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:09.555 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:35:09.555 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:35:09.555 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:35:09.555 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:35:09.555 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:09.555 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:09.555 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:09.555 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:09.555 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:09.555 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:09.555 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:09.555 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:09.555 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:09.555 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:09.555 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:35:09.555 11:47:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:11.457 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:11.457 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:35:11.457 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:11.457 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:11.457 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:11.457 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:11.457 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:11.457 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:35:11.457 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:11.457 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:35:11.457 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:35:11.457 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:35:11.457 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:35:11.457 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:35:11.457 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:35:11.457 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:11.457 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:11.457 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:11.457 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:11.457 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:11.457 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:11.457 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:11.457 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:11.457 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:11.457 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:11.457 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:11.457 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:11.457 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:11.457 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:11.457 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:11.457 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:11.457 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:11.457 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:11.457 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:11.457 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:11.457 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:11.457 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:11.457 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:11.458 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:11.458 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:11.458 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
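The nvmf_tcp_init commands that follow set up the two-namespace test topology: the target interface cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and given 10.0.0.2/24, while the initiator interface cvl_0_1 stays in the root namespace with 10.0.0.1/24, and TCP port 4420 is opened for NVMe/TCP. Condensed into a sketch (interface names and addresses as reported in the log; requires root):

    # condensed from the ip/iptables commands executed below
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side (root namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side (inside the namespace)
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                                   # initiator -> target reachability check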
00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:11.458 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:11.458 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.312 ms 00:35:11.458 00:35:11.458 --- 10.0.0.2 ping statistics --- 00:35:11.458 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:11.458 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:11.458 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:11.458 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.243 ms 00:35:11.458 00:35:11.458 --- 10.0.0.1 ping statistics --- 00:35:11.458 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:11.458 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:11.458 ************************************ 00:35:11.458 START TEST nvmf_target_disconnect_tc1 00:35:11.458 ************************************ 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1127 -- # nvmf_target_disconnect_tc1 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:11.458 11:47:11 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:35:11.458 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:11.717 [2024-11-02 11:47:11.943860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.717 [2024-11-02 11:47:11.943925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abe610 with addr=10.0.0.2, port=4420 00:35:11.717 [2024-11-02 11:47:11.943959] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:35:11.717 [2024-11-02 11:47:11.943978] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:35:11.717 [2024-11-02 11:47:11.943992] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:35:11.717 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:35:11.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:35:11.717 Initializing NVMe Controllers 00:35:11.717 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:35:11.717 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:11.717 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:11.717 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:11.717 00:35:11.717 real 0m0.108s 00:35:11.717 user 0m0.050s 00:35:11.717 sys 0m0.057s 00:35:11.717 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:11.717 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:35:11.717 ************************************ 00:35:11.717 END TEST nvmf_target_disconnect_tc1 00:35:11.717 ************************************ 00:35:11.717 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:35:11.717 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:35:11.717 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1109 -- # 
xtrace_disable 00:35:11.717 11:47:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:11.717 ************************************ 00:35:11.717 START TEST nvmf_target_disconnect_tc2 00:35:11.717 ************************************ 00:35:11.717 11:47:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1127 -- # nvmf_target_disconnect_tc2 00:35:11.717 11:47:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:35:11.717 11:47:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:35:11.717 11:47:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:11.717 11:47:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:11.717 11:47:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:11.717 11:47:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3981589 00:35:11.717 11:47:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:35:11.717 11:47:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3981589 00:35:11.717 11:47:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # '[' -z 3981589 ']' 00:35:11.717 11:47:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:11.717 11:47:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:11.717 11:47:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:11.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:11.717 11:47:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:11.717 11:47:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:11.717 [2024-11-02 11:47:12.063867] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:35:11.717 [2024-11-02 11:47:12.063972] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:11.977 [2024-11-02 11:47:12.139519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:11.977 [2024-11-02 11:47:12.186123] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:11.977 [2024-11-02 11:47:12.186180] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:35:11.977 [2024-11-02 11:47:12.186201] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:11.977 [2024-11-02 11:47:12.186211] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:11.977 [2024-11-02 11:47:12.186220] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:11.977 [2024-11-02 11:47:12.187728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:35:11.977 [2024-11-02 11:47:12.187791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:35:11.977 [2024-11-02 11:47:12.187855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:35:11.977 [2024-11-02 11:47:12.187858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:35:11.977 11:47:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:11.977 11:47:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@866 -- # return 0 00:35:11.977 11:47:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:11.977 11:47:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:11.977 11:47:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:11.977 11:47:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:11.977 11:47:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:11.977 11:47:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:11.977 11:47:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:11.977 Malloc0 00:35:11.977 11:47:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:11.977 11:47:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:35:11.977 11:47:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:11.977 11:47:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:11.977 [2024-11-02 11:47:12.369094] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:11.977 11:47:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:11.977 11:47:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:11.977 11:47:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:11.977 11:47:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:12.237 11:47:12 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:12.237 11:47:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:12.237 11:47:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:12.237 11:47:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:12.237 11:47:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:12.237 11:47:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:12.237 11:47:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:12.237 11:47:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:12.237 [2024-11-02 11:47:12.397405] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:12.237 11:47:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:12.237 11:47:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:12.237 11:47:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:12.237 11:47:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:12.237 11:47:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:12.237 11:47:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3981713 00:35:12.237 11:47:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:35:12.237 11:47:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:14.147 11:47:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3981589 00:35:14.147 11:47:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:35:14.147 Read completed with error (sct=0, sc=8) 00:35:14.147 starting I/O failed 00:35:14.147 Read completed with error (sct=0, sc=8) 00:35:14.147 starting I/O failed 00:35:14.147 Read completed with error (sct=0, sc=8) 00:35:14.147 starting I/O failed 00:35:14.147 Read completed with error (sct=0, sc=8) 00:35:14.147 starting I/O failed 00:35:14.147 Read completed with error (sct=0, sc=8) 00:35:14.147 starting I/O failed 00:35:14.147 Read completed with error (sct=0, sc=8) 00:35:14.147 starting I/O failed 00:35:14.147 Read completed with error 
(sct=0, sc=8) 00:35:14.147 starting I/O failed 00:35:14.147 Read completed with error (sct=0, sc=8) 00:35:14.147 starting I/O failed 00:35:14.147 Read completed with error (sct=0, sc=8) 00:35:14.147 starting I/O failed 00:35:14.147 Read completed with error (sct=0, sc=8) 00:35:14.147 starting I/O failed 00:35:14.147 Read completed with error (sct=0, sc=8) 00:35:14.147 starting I/O failed 00:35:14.147 Read completed with error (sct=0, sc=8) 00:35:14.147 starting I/O failed 00:35:14.147 Read completed with error (sct=0, sc=8) 00:35:14.147 starting I/O failed 00:35:14.147 Read completed with error (sct=0, sc=8) 00:35:14.147 starting I/O failed 00:35:14.147 Write completed with error (sct=0, sc=8) 00:35:14.147 starting I/O failed 00:35:14.147 Write completed with error (sct=0, sc=8) 00:35:14.147 starting I/O failed 00:35:14.147 Write completed with error (sct=0, sc=8) 00:35:14.147 starting I/O failed 00:35:14.147 Write completed with error (sct=0, sc=8) 00:35:14.147 starting I/O failed 00:35:14.147 Write completed with error (sct=0, sc=8) 00:35:14.147 starting I/O failed 00:35:14.147 Read completed with error (sct=0, sc=8) 00:35:14.147 starting I/O failed 00:35:14.147 Write completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Write completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Read completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Write completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Write completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Write completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Read completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Write completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Write completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Write completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Read completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Read completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Read completed with error (sct=0, sc=8) 00:35:14.148 [2024-11-02 11:47:14.422844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:14.148 starting I/O failed 00:35:14.148 Read completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Read completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Read completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Read completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Read completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Read completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Read completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Read completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Read completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Write completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Read completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Write completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Write 
completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Read completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Read completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Write completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Read completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Write completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Read completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Read completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Write completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Write completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Read completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Read completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Write completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Read completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Write completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Read completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Read completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Read completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Write completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 [2024-11-02 11:47:14.423207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:14.148 Read completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Read completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Read completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Read completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Read completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Read completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Read completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Read completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Write completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Read completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Write completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Read completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Write completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Write completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Read completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Read completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Write completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Read completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Write completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Read completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 
00:35:14.148 Read completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Write completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Read completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Write completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Write completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Write completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Write completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Write completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Read completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Write completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Read completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Read completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 [2024-11-02 11:47:14.423553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:14.148 Read completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Read completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Read completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Read completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Read completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Read completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Read completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Read completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Read completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Read completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Read completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Read completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Read completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Read completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Read completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Write completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Read completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Read completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Write completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Read completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Read completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Write completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Write completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Read completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Read completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Write completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Write completed with error (sct=0, sc=8) 00:35:14.148 
starting I/O failed 00:35:14.148 Write completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Read completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Read completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Write completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 Read completed with error (sct=0, sc=8) 00:35:14.148 starting I/O failed 00:35:14.148 [2024-11-02 11:47:14.423881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:14.148 [2024-11-02 11:47:14.424090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.148 [2024-11-02 11:47:14.424134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.148 qpair failed and we were unable to recover it. 00:35:14.149 [2024-11-02 11:47:14.424326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.149 [2024-11-02 11:47:14.424355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.149 qpair failed and we were unable to recover it. 00:35:14.149 [2024-11-02 11:47:14.424501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.149 [2024-11-02 11:47:14.424529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.149 qpair failed and we were unable to recover it. 00:35:14.149 [2024-11-02 11:47:14.424665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.149 [2024-11-02 11:47:14.424694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.149 qpair failed and we were unable to recover it. 00:35:14.149 [2024-11-02 11:47:14.424839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.149 [2024-11-02 11:47:14.424866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.149 qpair failed and we were unable to recover it. 00:35:14.149 [2024-11-02 11:47:14.425037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.149 [2024-11-02 11:47:14.425063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.149 qpair failed and we were unable to recover it. 00:35:14.149 [2024-11-02 11:47:14.425231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.149 [2024-11-02 11:47:14.425272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.149 qpair failed and we were unable to recover it. 00:35:14.149 [2024-11-02 11:47:14.425407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.149 [2024-11-02 11:47:14.425433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.149 qpair failed and we were unable to recover it. 
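The burst of failed completions above is the intended effect of the 'kill -9 3981589' issued against the target a moment earlier: with nvmf_tgt gone, every outstanding request on the reconnect app's I/O qpairs (ids 1 through 4) is completed with an abort status (sct=0 is the NVMe generic status type; sc=8 corresponds to a command aborted because its submission queue was deleted), the completion path then reports CQ transport error -6 (ENXIO, "No such device or address"), and every reconnect attempt that follows fails in posix_sock_create with errno = 111, which is ECONNREFUSED on Linux, because nothing is listening on 10.0.0.2:4420 any more. A quick way to confirm the listener really is gone, sketched with bash's /dev/tcp rather than the harness's own helpers:

  # Requires root for ip netns exec; expected to fail while nvmf_tgt is down.
  if ip netns exec cvl_0_0_ns_spdk timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
      echo "10.0.0.2:4420 is accepting connections"
  else
      echo "connect to 10.0.0.2:4420 refused or timed out, as expected with the target killed"
  fi
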
00:35:14.149 [2024-11-02 11:47:14.425589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.149 [2024-11-02 11:47:14.425616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.149 qpair failed and we were unable to recover it. 00:35:14.149 [2024-11-02 11:47:14.425799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.149 [2024-11-02 11:47:14.425825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.149 qpair failed and we were unable to recover it. 00:35:14.149 [2024-11-02 11:47:14.425969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.149 [2024-11-02 11:47:14.426030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.149 qpair failed and we were unable to recover it. 00:35:14.149 [2024-11-02 11:47:14.426274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.149 [2024-11-02 11:47:14.426321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.149 qpair failed and we were unable to recover it. 00:35:14.149 [2024-11-02 11:47:14.426446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.149 [2024-11-02 11:47:14.426473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.149 qpair failed and we were unable to recover it. 00:35:14.149 [2024-11-02 11:47:14.426619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.149 [2024-11-02 11:47:14.426646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.149 qpair failed and we were unable to recover it. 00:35:14.149 [2024-11-02 11:47:14.426960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.149 [2024-11-02 11:47:14.427015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.149 qpair failed and we were unable to recover it. 00:35:14.149 [2024-11-02 11:47:14.427199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.149 [2024-11-02 11:47:14.427228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.149 qpair failed and we were unable to recover it. 00:35:14.149 [2024-11-02 11:47:14.427389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.149 [2024-11-02 11:47:14.427418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.149 qpair failed and we were unable to recover it. 00:35:14.149 [2024-11-02 11:47:14.427592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.149 [2024-11-02 11:47:14.427621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.149 qpair failed and we were unable to recover it. 
00:35:14.149 [2024-11-02 11:47:14.427746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.149 [2024-11-02 11:47:14.427776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.149 qpair failed and we were unable to recover it. 00:35:14.149 [2024-11-02 11:47:14.427953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.149 [2024-11-02 11:47:14.427984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.149 qpair failed and we were unable to recover it. 00:35:14.149 [2024-11-02 11:47:14.428175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.149 [2024-11-02 11:47:14.428202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.149 qpair failed and we were unable to recover it. 00:35:14.149 [2024-11-02 11:47:14.428408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.149 [2024-11-02 11:47:14.428435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.149 qpair failed and we were unable to recover it. 00:35:14.149 [2024-11-02 11:47:14.428555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.149 [2024-11-02 11:47:14.428583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.149 qpair failed and we were unable to recover it. 00:35:14.149 [2024-11-02 11:47:14.428829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.149 [2024-11-02 11:47:14.428858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.149 qpair failed and we were unable to recover it. 00:35:14.149 [2024-11-02 11:47:14.429007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.149 [2024-11-02 11:47:14.429035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.149 qpair failed and we were unable to recover it. 00:35:14.149 [2024-11-02 11:47:14.429212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.149 [2024-11-02 11:47:14.429240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.149 qpair failed and we were unable to recover it. 00:35:14.149 [2024-11-02 11:47:14.429392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.149 [2024-11-02 11:47:14.429440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.149 qpair failed and we were unable to recover it. 00:35:14.149 [2024-11-02 11:47:14.429623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.149 [2024-11-02 11:47:14.429670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.149 qpair failed and we were unable to recover it. 
00:35:14.149 [2024-11-02 11:47:14.429956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.149 [2024-11-02 11:47:14.430008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.149 qpair failed and we were unable to recover it. 00:35:14.149 [2024-11-02 11:47:14.430207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.149 [2024-11-02 11:47:14.430234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.149 qpair failed and we were unable to recover it. 00:35:14.149 [2024-11-02 11:47:14.430377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.149 [2024-11-02 11:47:14.430405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.149 qpair failed and we were unable to recover it. 00:35:14.149 [2024-11-02 11:47:14.430577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.149 [2024-11-02 11:47:14.430604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.149 qpair failed and we were unable to recover it. 00:35:14.149 [2024-11-02 11:47:14.430801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.149 [2024-11-02 11:47:14.430843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.149 qpair failed and we were unable to recover it. 00:35:14.149 [2024-11-02 11:47:14.431030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.149 [2024-11-02 11:47:14.431057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.149 qpair failed and we were unable to recover it. 00:35:14.149 [2024-11-02 11:47:14.431215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.149 [2024-11-02 11:47:14.431241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.149 qpair failed and we were unable to recover it. 00:35:14.149 [2024-11-02 11:47:14.431419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.149 [2024-11-02 11:47:14.431445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.149 qpair failed and we were unable to recover it. 00:35:14.149 [2024-11-02 11:47:14.431572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.149 [2024-11-02 11:47:14.431599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.149 qpair failed and we were unable to recover it. 00:35:14.149 [2024-11-02 11:47:14.431880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.149 [2024-11-02 11:47:14.431933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.149 qpair failed and we were unable to recover it. 
00:35:14.149 [2024-11-02 11:47:14.432092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.149 [2024-11-02 11:47:14.432118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.149 qpair failed and we were unable to recover it. 00:35:14.149 [2024-11-02 11:47:14.432339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.150 [2024-11-02 11:47:14.432385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.150 qpair failed and we were unable to recover it. 00:35:14.150 [2024-11-02 11:47:14.432510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.150 [2024-11-02 11:47:14.432563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.150 qpair failed and we were unable to recover it. 00:35:14.150 [2024-11-02 11:47:14.432754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.150 [2024-11-02 11:47:14.432779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.150 qpair failed and we were unable to recover it. 00:35:14.150 [2024-11-02 11:47:14.432966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.150 [2024-11-02 11:47:14.432992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.150 qpair failed and we were unable to recover it. 00:35:14.150 [2024-11-02 11:47:14.433164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.150 [2024-11-02 11:47:14.433190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.150 qpair failed and we were unable to recover it. 00:35:14.150 [2024-11-02 11:47:14.433366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.150 [2024-11-02 11:47:14.433394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.150 qpair failed and we were unable to recover it. 00:35:14.150 [2024-11-02 11:47:14.433540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.150 [2024-11-02 11:47:14.433573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.150 qpair failed and we were unable to recover it. 00:35:14.150 [2024-11-02 11:47:14.433702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.150 [2024-11-02 11:47:14.433728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.150 qpair failed and we were unable to recover it. 00:35:14.150 [2024-11-02 11:47:14.433875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.150 [2024-11-02 11:47:14.433901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.150 qpair failed and we were unable to recover it. 
00:35:14.150 [2024-11-02 11:47:14.434037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.150 [2024-11-02 11:47:14.434063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.150 qpair failed and we were unable to recover it. 00:35:14.150 [2024-11-02 11:47:14.434302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.150 [2024-11-02 11:47:14.434332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.150 qpair failed and we were unable to recover it. 00:35:14.150 [2024-11-02 11:47:14.434534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.150 [2024-11-02 11:47:14.434567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.150 qpair failed and we were unable to recover it. 00:35:14.150 [2024-11-02 11:47:14.434724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.150 [2024-11-02 11:47:14.434752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.150 qpair failed and we were unable to recover it. 00:35:14.150 [2024-11-02 11:47:14.434968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.150 [2024-11-02 11:47:14.434995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.150 qpair failed and we were unable to recover it. 00:35:14.150 [2024-11-02 11:47:14.435118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.150 [2024-11-02 11:47:14.435146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.150 qpair failed and we were unable to recover it. 00:35:14.150 [2024-11-02 11:47:14.435351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.150 [2024-11-02 11:47:14.435400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:14.150 qpair failed and we were unable to recover it. 00:35:14.150 [2024-11-02 11:47:14.435589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.150 [2024-11-02 11:47:14.435621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:14.150 qpair failed and we were unable to recover it. 00:35:14.150 [2024-11-02 11:47:14.435767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.150 [2024-11-02 11:47:14.435810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:14.150 qpair failed and we were unable to recover it. 00:35:14.150 [2024-11-02 11:47:14.436019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.150 [2024-11-02 11:47:14.436075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:14.150 qpair failed and we were unable to recover it. 
00:35:14.150 [2024-11-02 11:47:14.436243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.150 [2024-11-02 11:47:14.436281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:14.150 qpair failed and we were unable to recover it. 00:35:14.150 [2024-11-02 11:47:14.436434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.150 [2024-11-02 11:47:14.436468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:14.150 qpair failed and we were unable to recover it. 00:35:14.150 [2024-11-02 11:47:14.436812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.150 [2024-11-02 11:47:14.436863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:14.150 qpair failed and we were unable to recover it. 00:35:14.150 [2024-11-02 11:47:14.437113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.150 [2024-11-02 11:47:14.437167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:14.150 qpair failed and we were unable to recover it. 00:35:14.150 [2024-11-02 11:47:14.437343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.150 [2024-11-02 11:47:14.437370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:14.150 qpair failed and we were unable to recover it. 00:35:14.150 [2024-11-02 11:47:14.437523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.150 [2024-11-02 11:47:14.437555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:14.150 qpair failed and we were unable to recover it. 00:35:14.150 [2024-11-02 11:47:14.437708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.150 [2024-11-02 11:47:14.437735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:14.150 qpair failed and we were unable to recover it. 00:35:14.150 [2024-11-02 11:47:14.437883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.150 [2024-11-02 11:47:14.437927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:14.150 qpair failed and we were unable to recover it. 00:35:14.150 [2024-11-02 11:47:14.438086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.150 [2024-11-02 11:47:14.438117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:14.150 qpair failed and we were unable to recover it. 00:35:14.150 [2024-11-02 11:47:14.438292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.150 [2024-11-02 11:47:14.438320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:14.150 qpair failed and we were unable to recover it. 
00:35:14.150 [2024-11-02 11:47:14.438443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.150 [2024-11-02 11:47:14.438470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:14.150 qpair failed and we were unable to recover it. 00:35:14.150 [2024-11-02 11:47:14.438688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.150 [2024-11-02 11:47:14.438753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:14.150 qpair failed and we were unable to recover it. 00:35:14.150 [2024-11-02 11:47:14.438905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.150 [2024-11-02 11:47:14.438932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:14.150 qpair failed and we were unable to recover it. 00:35:14.150 [2024-11-02 11:47:14.439052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.150 [2024-11-02 11:47:14.439080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:14.150 qpair failed and we were unable to recover it. 00:35:14.150 [2024-11-02 11:47:14.439202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.150 [2024-11-02 11:47:14.439230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:14.150 qpair failed and we were unable to recover it. 00:35:14.150 [2024-11-02 11:47:14.439409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.150 [2024-11-02 11:47:14.439437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:14.150 qpair failed and we were unable to recover it. 00:35:14.150 [2024-11-02 11:47:14.439561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.150 [2024-11-02 11:47:14.439589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:14.150 qpair failed and we were unable to recover it. 00:35:14.150 [2024-11-02 11:47:14.439815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.150 [2024-11-02 11:47:14.439872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.150 qpair failed and we were unable to recover it. 00:35:14.150 [2024-11-02 11:47:14.440093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.150 [2024-11-02 11:47:14.440141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.150 qpair failed and we were unable to recover it. 00:35:14.150 [2024-11-02 11:47:14.440309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.151 [2024-11-02 11:47:14.440337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.151 qpair failed and we were unable to recover it. 
00:35:14.151 [2024-11-02 11:47:14.440454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.151 [2024-11-02 11:47:14.440481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.151 qpair failed and we were unable to recover it. 00:35:14.151 [2024-11-02 11:47:14.440676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.151 [2024-11-02 11:47:14.440722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.151 qpair failed and we were unable to recover it. 00:35:14.151 [2024-11-02 11:47:14.440915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.151 [2024-11-02 11:47:14.440975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.151 qpair failed and we were unable to recover it. 00:35:14.151 [2024-11-02 11:47:14.441125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.151 [2024-11-02 11:47:14.441152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.151 qpair failed and we were unable to recover it. 00:35:14.151 [2024-11-02 11:47:14.441336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.151 [2024-11-02 11:47:14.441364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.151 qpair failed and we were unable to recover it. 00:35:14.151 [2024-11-02 11:47:14.441481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.151 [2024-11-02 11:47:14.441508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.151 qpair failed and we were unable to recover it. 00:35:14.151 [2024-11-02 11:47:14.441655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.151 [2024-11-02 11:47:14.441697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.151 qpair failed and we were unable to recover it. 00:35:14.151 [2024-11-02 11:47:14.441846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.151 [2024-11-02 11:47:14.441872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.151 qpair failed and we were unable to recover it. 00:35:14.151 [2024-11-02 11:47:14.442031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.151 [2024-11-02 11:47:14.442058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.151 qpair failed and we were unable to recover it. 00:35:14.151 [2024-11-02 11:47:14.442178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.151 [2024-11-02 11:47:14.442204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.151 qpair failed and we were unable to recover it. 
00:35:14.151 [2024-11-02 11:47:14.442361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.151 [2024-11-02 11:47:14.442389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.151 qpair failed and we were unable to recover it. 00:35:14.151 [2024-11-02 11:47:14.442523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.151 [2024-11-02 11:47:14.442566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.151 qpair failed and we were unable to recover it. 00:35:14.151 [2024-11-02 11:47:14.442701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.151 [2024-11-02 11:47:14.442745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.151 qpair failed and we were unable to recover it. 00:35:14.151 [2024-11-02 11:47:14.442917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.151 [2024-11-02 11:47:14.442944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.151 qpair failed and we were unable to recover it. 00:35:14.151 [2024-11-02 11:47:14.443090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.151 [2024-11-02 11:47:14.443118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.151 qpair failed and we were unable to recover it. 00:35:14.151 [2024-11-02 11:47:14.443250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.151 [2024-11-02 11:47:14.443315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:14.151 qpair failed and we were unable to recover it. 00:35:14.151 [2024-11-02 11:47:14.443511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.151 [2024-11-02 11:47:14.443543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:14.151 qpair failed and we were unable to recover it. 00:35:14.151 [2024-11-02 11:47:14.443742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.151 [2024-11-02 11:47:14.443770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:14.151 qpair failed and we were unable to recover it. 00:35:14.151 [2024-11-02 11:47:14.443958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.151 [2024-11-02 11:47:14.443987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:14.151 qpair failed and we were unable to recover it. 00:35:14.151 [2024-11-02 11:47:14.444148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.151 [2024-11-02 11:47:14.444178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:14.151 qpair failed and we were unable to recover it. 
00:35:14.151 [2024-11-02 11:47:14.444357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.151 [2024-11-02 11:47:14.444385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:14.151 qpair failed and we were unable to recover it. 00:35:14.151 [2024-11-02 11:47:14.444507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.151 [2024-11-02 11:47:14.444539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:14.151 qpair failed and we were unable to recover it. 00:35:14.151 [2024-11-02 11:47:14.444695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.151 [2024-11-02 11:47:14.444722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:14.151 qpair failed and we were unable to recover it. 00:35:14.151 [2024-11-02 11:47:14.444832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.151 [2024-11-02 11:47:14.444859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:14.151 qpair failed and we were unable to recover it. 00:35:14.151 [2024-11-02 11:47:14.445011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.151 [2024-11-02 11:47:14.445039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.151 qpair failed and we were unable to recover it. 00:35:14.151 [2024-11-02 11:47:14.445202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.151 [2024-11-02 11:47:14.445275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.151 qpair failed and we were unable to recover it. 00:35:14.151 [2024-11-02 11:47:14.445458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.151 [2024-11-02 11:47:14.445487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.151 qpair failed and we were unable to recover it. 00:35:14.151 [2024-11-02 11:47:14.445636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.151 [2024-11-02 11:47:14.445680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.151 qpair failed and we were unable to recover it. 00:35:14.151 [2024-11-02 11:47:14.445900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.151 [2024-11-02 11:47:14.445930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.151 qpair failed and we were unable to recover it. 00:35:14.151 [2024-11-02 11:47:14.446105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.151 [2024-11-02 11:47:14.446131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.151 qpair failed and we were unable to recover it. 
00:35:14.151 [2024-11-02 11:47:14.446291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.151 [2024-11-02 11:47:14.446320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.151 qpair failed and we were unable to recover it. 00:35:14.151 [2024-11-02 11:47:14.446495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.151 [2024-11-02 11:47:14.446522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.151 qpair failed and we were unable to recover it. 00:35:14.151 [2024-11-02 11:47:14.446709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.151 [2024-11-02 11:47:14.446738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.151 qpair failed and we were unable to recover it. 00:35:14.151 [2024-11-02 11:47:14.446927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.151 [2024-11-02 11:47:14.446956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.151 qpair failed and we were unable to recover it. 00:35:14.151 [2024-11-02 11:47:14.447146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.151 [2024-11-02 11:47:14.447173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.151 qpair failed and we were unable to recover it. 00:35:14.151 [2024-11-02 11:47:14.447310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.151 [2024-11-02 11:47:14.447337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.151 qpair failed and we were unable to recover it. 00:35:14.151 [2024-11-02 11:47:14.447486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.151 [2024-11-02 11:47:14.447513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.151 qpair failed and we were unable to recover it. 00:35:14.151 [2024-11-02 11:47:14.447661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.151 [2024-11-02 11:47:14.447691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.152 qpair failed and we were unable to recover it. 00:35:14.152 [2024-11-02 11:47:14.447882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.152 [2024-11-02 11:47:14.447910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.152 qpair failed and we were unable to recover it. 00:35:14.152 [2024-11-02 11:47:14.448044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.152 [2024-11-02 11:47:14.448071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.152 qpair failed and we were unable to recover it. 
00:35:14.152 [2024-11-02 11:47:14.448216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.152 [2024-11-02 11:47:14.448243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.152 qpair failed and we were unable to recover it. 00:35:14.152 [2024-11-02 11:47:14.448412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.152 [2024-11-02 11:47:14.448451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.152 qpair failed and we were unable to recover it. 00:35:14.152 [2024-11-02 11:47:14.448653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.152 [2024-11-02 11:47:14.448711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:14.152 qpair failed and we were unable to recover it. 00:35:14.152 [2024-11-02 11:47:14.449014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.152 [2024-11-02 11:47:14.449042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:14.152 qpair failed and we were unable to recover it. 00:35:14.152 [2024-11-02 11:47:14.449199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.152 [2024-11-02 11:47:14.449228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:14.152 qpair failed and we were unable to recover it. 00:35:14.152 [2024-11-02 11:47:14.449389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.152 [2024-11-02 11:47:14.449416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:14.152 qpair failed and we were unable to recover it. 00:35:14.152 [2024-11-02 11:47:14.449597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.152 [2024-11-02 11:47:14.449624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:14.152 qpair failed and we were unable to recover it. 00:35:14.152 [2024-11-02 11:47:14.449776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.152 [2024-11-02 11:47:14.449803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:14.152 qpair failed and we were unable to recover it. 00:35:14.152 [2024-11-02 11:47:14.449983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.152 [2024-11-02 11:47:14.450011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:14.152 qpair failed and we were unable to recover it. 00:35:14.152 [2024-11-02 11:47:14.450166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.152 [2024-11-02 11:47:14.450193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:14.152 qpair failed and we were unable to recover it. 
00:35:14.152 [2024-11-02 11:47:14.450360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.152 [2024-11-02 11:47:14.450400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.152 qpair failed and we were unable to recover it. 00:35:14.152 [2024-11-02 11:47:14.450606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.152 [2024-11-02 11:47:14.450650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.152 qpair failed and we were unable to recover it. 00:35:14.152 [2024-11-02 11:47:14.450974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.152 [2024-11-02 11:47:14.451026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.152 qpair failed and we were unable to recover it. 00:35:14.152 [2024-11-02 11:47:14.451223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.152 [2024-11-02 11:47:14.451266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.152 qpair failed and we were unable to recover it. 00:35:14.152 [2024-11-02 11:47:14.451418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.152 [2024-11-02 11:47:14.451446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.152 qpair failed and we were unable to recover it. 00:35:14.152 [2024-11-02 11:47:14.451597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.152 [2024-11-02 11:47:14.451624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.152 qpair failed and we were unable to recover it. 00:35:14.152 [2024-11-02 11:47:14.451799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.152 [2024-11-02 11:47:14.451841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.152 qpair failed and we were unable to recover it. 00:35:14.152 [2024-11-02 11:47:14.452115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.152 [2024-11-02 11:47:14.452178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.152 qpair failed and we were unable to recover it. 00:35:14.152 [2024-11-02 11:47:14.452356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.152 [2024-11-02 11:47:14.452383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.152 qpair failed and we were unable to recover it. 00:35:14.152 [2024-11-02 11:47:14.452534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.152 [2024-11-02 11:47:14.452561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.152 qpair failed and we were unable to recover it. 
00:35:14.152 [2024-11-02 11:47:14.452753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.152 [2024-11-02 11:47:14.452782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.152 qpair failed and we were unable to recover it. 00:35:14.152 [2024-11-02 11:47:14.452959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.152 [2024-11-02 11:47:14.453017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.152 qpair failed and we were unable to recover it. 00:35:14.152 [2024-11-02 11:47:14.453189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.152 [2024-11-02 11:47:14.453216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.152 qpair failed and we were unable to recover it. 00:35:14.152 [2024-11-02 11:47:14.453391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.152 [2024-11-02 11:47:14.453418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.152 qpair failed and we were unable to recover it. 00:35:14.152 [2024-11-02 11:47:14.453528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.152 [2024-11-02 11:47:14.453553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.152 qpair failed and we were unable to recover it. 00:35:14.152 [2024-11-02 11:47:14.453705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.152 [2024-11-02 11:47:14.453731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.152 qpair failed and we were unable to recover it. 00:35:14.152 [2024-11-02 11:47:14.453987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.152 [2024-11-02 11:47:14.454040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.152 qpair failed and we were unable to recover it. 00:35:14.152 [2024-11-02 11:47:14.454225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.152 [2024-11-02 11:47:14.454272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.152 qpair failed and we were unable to recover it. 00:35:14.152 [2024-11-02 11:47:14.454431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.152 [2024-11-02 11:47:14.454460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.152 qpair failed and we were unable to recover it. 00:35:14.152 [2024-11-02 11:47:14.454688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.152 [2024-11-02 11:47:14.454739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.152 qpair failed and we were unable to recover it. 
00:35:14.153 [2024-11-02 11:47:14.454943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.153 [2024-11-02 11:47:14.454969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.153 qpair failed and we were unable to recover it. 00:35:14.153 [2024-11-02 11:47:14.455145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.153 [2024-11-02 11:47:14.455171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.153 qpair failed and we were unable to recover it. 00:35:14.153 [2024-11-02 11:47:14.455320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.153 [2024-11-02 11:47:14.455348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.153 qpair failed and we were unable to recover it. 00:35:14.153 [2024-11-02 11:47:14.455496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.153 [2024-11-02 11:47:14.455522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.153 qpair failed and we were unable to recover it. 00:35:14.153 [2024-11-02 11:47:14.455664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.153 [2024-11-02 11:47:14.455690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.153 qpair failed and we were unable to recover it. 00:35:14.153 [2024-11-02 11:47:14.455846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.153 [2024-11-02 11:47:14.455873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.153 qpair failed and we were unable to recover it. 00:35:14.153 [2024-11-02 11:47:14.456043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.153 [2024-11-02 11:47:14.456070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.153 qpair failed and we were unable to recover it. 00:35:14.153 [2024-11-02 11:47:14.456230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.153 [2024-11-02 11:47:14.456276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:14.153 qpair failed and we were unable to recover it. 00:35:14.153 [2024-11-02 11:47:14.456437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.153 [2024-11-02 11:47:14.456465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:14.153 qpair failed and we were unable to recover it. 00:35:14.153 [2024-11-02 11:47:14.456618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.153 [2024-11-02 11:47:14.456645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:14.153 qpair failed and we were unable to recover it. 
00:35:14.153 [2024-11-02 11:47:14.456797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.153 [2024-11-02 11:47:14.456825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:14.153 qpair failed and we were unable to recover it. 00:35:14.153 [2024-11-02 11:47:14.456976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.153 [2024-11-02 11:47:14.457020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:14.153 qpair failed and we were unable to recover it. 00:35:14.153 [2024-11-02 11:47:14.457164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.153 [2024-11-02 11:47:14.457208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.153 qpair failed and we were unable to recover it. 00:35:14.153 [2024-11-02 11:47:14.457395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.153 [2024-11-02 11:47:14.457424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.153 qpair failed and we were unable to recover it. 00:35:14.153 [2024-11-02 11:47:14.457596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.153 [2024-11-02 11:47:14.457640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.153 qpair failed and we were unable to recover it. 00:35:14.153 [2024-11-02 11:47:14.457891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.153 [2024-11-02 11:47:14.457918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.153 qpair failed and we were unable to recover it. 00:35:14.153 [2024-11-02 11:47:14.458092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.153 [2024-11-02 11:47:14.458118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.153 qpair failed and we were unable to recover it. 00:35:14.153 [2024-11-02 11:47:14.458272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.153 [2024-11-02 11:47:14.458298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.153 qpair failed and we were unable to recover it. 00:35:14.153 [2024-11-02 11:47:14.458470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.153 [2024-11-02 11:47:14.458501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.153 qpair failed and we were unable to recover it. 00:35:14.153 [2024-11-02 11:47:14.458631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.153 [2024-11-02 11:47:14.458658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.153 qpair failed and we were unable to recover it. 
00:35:14.153 [2024-11-02 11:47:14.458835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.153 [2024-11-02 11:47:14.458861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.153 qpair failed and we were unable to recover it. 00:35:14.153 [2024-11-02 11:47:14.459015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.153 [2024-11-02 11:47:14.459059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:14.153 qpair failed and we were unable to recover it. 00:35:14.153 [2024-11-02 11:47:14.459244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.153 [2024-11-02 11:47:14.459281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:14.153 qpair failed and we were unable to recover it. 00:35:14.153 [2024-11-02 11:47:14.459402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.153 [2024-11-02 11:47:14.459430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:14.153 qpair failed and we were unable to recover it. 00:35:14.153 [2024-11-02 11:47:14.459600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.153 [2024-11-02 11:47:14.459631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:14.153 qpair failed and we were unable to recover it. 00:35:14.153 [2024-11-02 11:47:14.459831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.153 [2024-11-02 11:47:14.459858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:14.153 qpair failed and we were unable to recover it. 00:35:14.153 [2024-11-02 11:47:14.460030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.153 [2024-11-02 11:47:14.460057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:14.153 qpair failed and we were unable to recover it. 00:35:14.153 [2024-11-02 11:47:14.460183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.153 [2024-11-02 11:47:14.460212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.153 qpair failed and we were unable to recover it. 00:35:14.153 [2024-11-02 11:47:14.460360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.153 [2024-11-02 11:47:14.460419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.153 qpair failed and we were unable to recover it. 00:35:14.153 [2024-11-02 11:47:14.460565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.153 [2024-11-02 11:47:14.460597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.153 qpair failed and we were unable to recover it. 
00:35:14.153 [2024-11-02 11:47:14.460859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.153 [2024-11-02 11:47:14.460886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.153 qpair failed and we were unable to recover it. 00:35:14.153 [2024-11-02 11:47:14.461001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.153 [2024-11-02 11:47:14.461027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.153 qpair failed and we were unable to recover it. 00:35:14.153 [2024-11-02 11:47:14.461149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.153 [2024-11-02 11:47:14.461176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.153 qpair failed and we were unable to recover it. 00:35:14.153 [2024-11-02 11:47:14.461323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.153 [2024-11-02 11:47:14.461350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.153 qpair failed and we were unable to recover it. 00:35:14.153 [2024-11-02 11:47:14.461507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.153 [2024-11-02 11:47:14.461534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.153 qpair failed and we were unable to recover it. 00:35:14.153 [2024-11-02 11:47:14.461683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.153 [2024-11-02 11:47:14.461710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.153 qpair failed and we were unable to recover it. 00:35:14.153 [2024-11-02 11:47:14.461864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.153 [2024-11-02 11:47:14.461907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.153 qpair failed and we were unable to recover it. 00:35:14.153 [2024-11-02 11:47:14.462076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.153 [2024-11-02 11:47:14.462102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.153 qpair failed and we were unable to recover it. 00:35:14.153 [2024-11-02 11:47:14.462262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.154 [2024-11-02 11:47:14.462302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.154 qpair failed and we were unable to recover it. 00:35:14.154 [2024-11-02 11:47:14.462470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.154 [2024-11-02 11:47:14.462510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:14.154 qpair failed and we were unable to recover it. 
00:35:14.154 [2024-11-02 11:47:14.462709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.154 [2024-11-02 11:47:14.462741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:14.154 qpair failed and we were unable to recover it. 00:35:14.154 [2024-11-02 11:47:14.463009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.154 [2024-11-02 11:47:14.463062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:14.154 qpair failed and we were unable to recover it. 00:35:14.154 [2024-11-02 11:47:14.463228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.154 [2024-11-02 11:47:14.463271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:14.154 qpair failed and we were unable to recover it. 00:35:14.154 [2024-11-02 11:47:14.463419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.154 [2024-11-02 11:47:14.463447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:14.154 qpair failed and we were unable to recover it. 00:35:14.154 [2024-11-02 11:47:14.463608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.154 [2024-11-02 11:47:14.463635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:14.154 qpair failed and we were unable to recover it. 00:35:14.154 [2024-11-02 11:47:14.463917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.154 [2024-11-02 11:47:14.463989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.154 qpair failed and we were unable to recover it. 00:35:14.154 [2024-11-02 11:47:14.464253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.154 [2024-11-02 11:47:14.464288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.154 qpair failed and we were unable to recover it. 00:35:14.154 [2024-11-02 11:47:14.464419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.154 [2024-11-02 11:47:14.464447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.154 qpair failed and we were unable to recover it. 00:35:14.154 [2024-11-02 11:47:14.464606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.154 [2024-11-02 11:47:14.464636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.154 qpair failed and we were unable to recover it. 00:35:14.154 [2024-11-02 11:47:14.464786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.154 [2024-11-02 11:47:14.464816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.154 qpair failed and we were unable to recover it. 
00:35:14.154 [2024-11-02 11:47:14.465079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.154 [2024-11-02 11:47:14.465159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.154 qpair failed and we were unable to recover it. 00:35:14.154 [2024-11-02 11:47:14.465351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.154 [2024-11-02 11:47:14.465381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.154 qpair failed and we were unable to recover it. 00:35:14.154 [2024-11-02 11:47:14.465534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.154 [2024-11-02 11:47:14.465561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.154 qpair failed and we were unable to recover it. 00:35:14.154 [2024-11-02 11:47:14.465686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.154 [2024-11-02 11:47:14.465713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.154 qpair failed and we were unable to recover it. 00:35:14.154 [2024-11-02 11:47:14.465934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.154 [2024-11-02 11:47:14.465982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.154 qpair failed and we were unable to recover it. 00:35:14.154 [2024-11-02 11:47:14.466102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.154 [2024-11-02 11:47:14.466131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.154 qpair failed and we were unable to recover it. 00:35:14.154 [2024-11-02 11:47:14.466271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.154 [2024-11-02 11:47:14.466316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.154 qpair failed and we were unable to recover it. 00:35:14.154 [2024-11-02 11:47:14.466442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.154 [2024-11-02 11:47:14.466468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.154 qpair failed and we were unable to recover it. 00:35:14.154 [2024-11-02 11:47:14.466652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.154 [2024-11-02 11:47:14.466710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.154 qpair failed and we were unable to recover it. 00:35:14.154 [2024-11-02 11:47:14.466911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.154 [2024-11-02 11:47:14.466940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.154 qpair failed and we were unable to recover it. 
00:35:14.154 [2024-11-02 11:47:14.467130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.154 [2024-11-02 11:47:14.467160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.154 qpair failed and we were unable to recover it. 00:35:14.154 [2024-11-02 11:47:14.467358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.154 [2024-11-02 11:47:14.467386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.154 qpair failed and we were unable to recover it. 00:35:14.154 [2024-11-02 11:47:14.467504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.154 [2024-11-02 11:47:14.467531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.154 qpair failed and we were unable to recover it. 00:35:14.154 [2024-11-02 11:47:14.467660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.154 [2024-11-02 11:47:14.467685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.154 qpair failed and we were unable to recover it. 00:35:14.154 [2024-11-02 11:47:14.467980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.154 [2024-11-02 11:47:14.468032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.154 qpair failed and we were unable to recover it. 00:35:14.154 [2024-11-02 11:47:14.468188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.154 [2024-11-02 11:47:14.468216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.154 qpair failed and we were unable to recover it. 00:35:14.154 [2024-11-02 11:47:14.468434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.154 [2024-11-02 11:47:14.468462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.154 qpair failed and we were unable to recover it. 00:35:14.154 [2024-11-02 11:47:14.468588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.154 [2024-11-02 11:47:14.468616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.154 qpair failed and we were unable to recover it. 00:35:14.154 [2024-11-02 11:47:14.468791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.154 [2024-11-02 11:47:14.468817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.154 qpair failed and we were unable to recover it. 00:35:14.154 [2024-11-02 11:47:14.469024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.154 [2024-11-02 11:47:14.469051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.154 qpair failed and we were unable to recover it. 
00:35:14.154 [2024-11-02 11:47:14.469226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.154 [2024-11-02 11:47:14.469276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.154 qpair failed and we were unable to recover it. 00:35:14.154 [2024-11-02 11:47:14.469406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.154 [2024-11-02 11:47:14.469433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.154 qpair failed and we were unable to recover it. 00:35:14.154 [2024-11-02 11:47:14.469581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.154 [2024-11-02 11:47:14.469610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.154 qpair failed and we were unable to recover it. 00:35:14.154 [2024-11-02 11:47:14.469831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.154 [2024-11-02 11:47:14.469861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.154 qpair failed and we were unable to recover it. 00:35:14.154 [2024-11-02 11:47:14.470034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.154 [2024-11-02 11:47:14.470061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.154 qpair failed and we were unable to recover it. 00:35:14.154 [2024-11-02 11:47:14.470186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.154 [2024-11-02 11:47:14.470212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.154 qpair failed and we were unable to recover it. 00:35:14.154 [2024-11-02 11:47:14.470373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.154 [2024-11-02 11:47:14.470401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.155 qpair failed and we were unable to recover it. 00:35:14.155 [2024-11-02 11:47:14.470550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.155 [2024-11-02 11:47:14.470578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.155 qpair failed and we were unable to recover it. 00:35:14.155 [2024-11-02 11:47:14.470727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.155 [2024-11-02 11:47:14.470753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.155 qpair failed and we were unable to recover it. 00:35:14.155 [2024-11-02 11:47:14.470994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.155 [2024-11-02 11:47:14.471022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.155 qpair failed and we were unable to recover it. 
00:35:14.155 [2024-11-02 11:47:14.471199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.155 [2024-11-02 11:47:14.471243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.155 qpair failed and we were unable to recover it. 00:35:14.155 [2024-11-02 11:47:14.471431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.155 [2024-11-02 11:47:14.471460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.155 qpair failed and we were unable to recover it. 00:35:14.155 [2024-11-02 11:47:14.471648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.155 [2024-11-02 11:47:14.471675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.155 qpair failed and we were unable to recover it. 00:35:14.155 [2024-11-02 11:47:14.471830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.155 [2024-11-02 11:47:14.471874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.155 qpair failed and we were unable to recover it. 00:35:14.155 [2024-11-02 11:47:14.472126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.155 [2024-11-02 11:47:14.472153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.155 qpair failed and we were unable to recover it. 00:35:14.155 [2024-11-02 11:47:14.472312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.155 [2024-11-02 11:47:14.472345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.155 qpair failed and we were unable to recover it. 00:35:14.155 [2024-11-02 11:47:14.472467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.155 [2024-11-02 11:47:14.472495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.155 qpair failed and we were unable to recover it. 00:35:14.155 [2024-11-02 11:47:14.472667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.155 [2024-11-02 11:47:14.472693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.155 qpair failed and we were unable to recover it. 00:35:14.155 [2024-11-02 11:47:14.472839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.155 [2024-11-02 11:47:14.472869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.155 qpair failed and we were unable to recover it. 00:35:14.155 [2024-11-02 11:47:14.473079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.155 [2024-11-02 11:47:14.473107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.155 qpair failed and we were unable to recover it. 
00:35:14.155 [2024-11-02 11:47:14.473297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.155 [2024-11-02 11:47:14.473338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:14.155 qpair failed and we were unable to recover it. 00:35:14.155 [2024-11-02 11:47:14.473470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.155 [2024-11-02 11:47:14.473499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:14.155 qpair failed and we were unable to recover it. 00:35:14.155 [2024-11-02 11:47:14.473631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.155 [2024-11-02 11:47:14.473662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:14.155 qpair failed and we were unable to recover it. 00:35:14.155 [2024-11-02 11:47:14.473893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.155 [2024-11-02 11:47:14.473920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:14.155 qpair failed and we were unable to recover it. 00:35:14.155 [2024-11-02 11:47:14.474090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.155 [2024-11-02 11:47:14.474131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.155 qpair failed and we were unable to recover it. 00:35:14.155 [2024-11-02 11:47:14.474271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.155 [2024-11-02 11:47:14.474312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.155 qpair failed and we were unable to recover it. 00:35:14.155 [2024-11-02 11:47:14.474465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.155 [2024-11-02 11:47:14.474493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.155 qpair failed and we were unable to recover it. 00:35:14.155 [2024-11-02 11:47:14.474679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.155 [2024-11-02 11:47:14.474722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.155 qpair failed and we were unable to recover it. 00:35:14.155 [2024-11-02 11:47:14.474886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.155 [2024-11-02 11:47:14.474955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.155 qpair failed and we were unable to recover it. 00:35:14.155 [2024-11-02 11:47:14.475107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.155 [2024-11-02 11:47:14.475137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.155 qpair failed and we were unable to recover it. 
00:35:14.155 [2024-11-02 11:47:14.475348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.155 [2024-11-02 11:47:14.475376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.155 qpair failed and we were unable to recover it. 00:35:14.155 [2024-11-02 11:47:14.475499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.155 [2024-11-02 11:47:14.475526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.155 qpair failed and we were unable to recover it. 00:35:14.155 [2024-11-02 11:47:14.475713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.155 [2024-11-02 11:47:14.475740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.155 qpair failed and we were unable to recover it. 00:35:14.155 [2024-11-02 11:47:14.475908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.155 [2024-11-02 11:47:14.475935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.155 qpair failed and we were unable to recover it. 00:35:14.155 [2024-11-02 11:47:14.476082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.155 [2024-11-02 11:47:14.476108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.155 qpair failed and we were unable to recover it. 00:35:14.155 [2024-11-02 11:47:14.476230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.155 [2024-11-02 11:47:14.476261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.155 qpair failed and we were unable to recover it. 00:35:14.155 [2024-11-02 11:47:14.476413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.155 [2024-11-02 11:47:14.476440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.155 qpair failed and we were unable to recover it. 00:35:14.155 [2024-11-02 11:47:14.476608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.155 [2024-11-02 11:47:14.476648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.155 qpair failed and we were unable to recover it. 00:35:14.155 [2024-11-02 11:47:14.476918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.155 [2024-11-02 11:47:14.476972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.155 qpair failed and we were unable to recover it. 00:35:14.155 [2024-11-02 11:47:14.477140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.155 [2024-11-02 11:47:14.477185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.155 qpair failed and we were unable to recover it. 
00:35:14.155 [2024-11-02 11:47:14.477388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.155 [2024-11-02 11:47:14.477416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.155 qpair failed and we were unable to recover it. 00:35:14.155 [2024-11-02 11:47:14.477627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.155 [2024-11-02 11:47:14.477671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.155 qpair failed and we were unable to recover it. 00:35:14.155 [2024-11-02 11:47:14.477882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.155 [2024-11-02 11:47:14.477916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.155 qpair failed and we were unable to recover it. 00:35:14.155 [2024-11-02 11:47:14.478043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.155 [2024-11-02 11:47:14.478069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.155 qpair failed and we were unable to recover it. 00:35:14.155 [2024-11-02 11:47:14.478221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.155 [2024-11-02 11:47:14.478247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.156 qpair failed and we were unable to recover it. 00:35:14.156 [2024-11-02 11:47:14.478428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.156 [2024-11-02 11:47:14.478455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.156 qpair failed and we were unable to recover it. 00:35:14.156 [2024-11-02 11:47:14.478687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.156 [2024-11-02 11:47:14.478718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.156 qpair failed and we were unable to recover it. 00:35:14.156 [2024-11-02 11:47:14.478942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.156 [2024-11-02 11:47:14.479013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.156 qpair failed and we were unable to recover it. 00:35:14.156 [2024-11-02 11:47:14.479188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.156 [2024-11-02 11:47:14.479215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.156 qpair failed and we were unable to recover it. 00:35:14.156 [2024-11-02 11:47:14.479377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.156 [2024-11-02 11:47:14.479404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.156 qpair failed and we were unable to recover it. 
00:35:14.156 [2024-11-02 11:47:14.479522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.156 [2024-11-02 11:47:14.479564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.156 qpair failed and we were unable to recover it. 00:35:14.156 [2024-11-02 11:47:14.479701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.156 [2024-11-02 11:47:14.479732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.156 qpair failed and we were unable to recover it. 00:35:14.156 [2024-11-02 11:47:14.479862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.156 [2024-11-02 11:47:14.479893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.156 qpair failed and we were unable to recover it. 00:35:14.156 [2024-11-02 11:47:14.480083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.156 [2024-11-02 11:47:14.480112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.156 qpair failed and we were unable to recover it. 00:35:14.156 [2024-11-02 11:47:14.480251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.156 [2024-11-02 11:47:14.480289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.156 qpair failed and we were unable to recover it. 00:35:14.156 [2024-11-02 11:47:14.480420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.156 [2024-11-02 11:47:14.480447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.156 qpair failed and we were unable to recover it. 00:35:14.156 [2024-11-02 11:47:14.480630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.156 [2024-11-02 11:47:14.480657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.156 qpair failed and we were unable to recover it. 00:35:14.156 [2024-11-02 11:47:14.480794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.156 [2024-11-02 11:47:14.480823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.156 qpair failed and we were unable to recover it. 00:35:14.156 [2024-11-02 11:47:14.480989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.156 [2024-11-02 11:47:14.481019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.156 qpair failed and we were unable to recover it. 00:35:14.156 [2024-11-02 11:47:14.481155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.156 [2024-11-02 11:47:14.481182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.156 qpair failed and we were unable to recover it. 
00:35:14.156 [2024-11-02 11:47:14.481314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:14.156 [2024-11-02 11:47:14.481341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420
00:35:14.156 qpair failed and we were unable to recover it.
[... the same three-line failure pattern (posix.c:1055:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it.") repeats back-to-back from 00:35:14.156 through 00:35:14.162 (2024-11-02 11:47:14.481 to 11:47:14.523), cycling over tqpair values 0x1ddc690, 0x7f9ccc000b90, 0x7f9cc4000b90 and 0x7f9cc0000b90, with every attempt targeting addr=10.0.0.2, port=4420 ...]
00:35:14.162 [2024-11-02 11:47:14.523654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:14.162 [2024-11-02 11:47:14.523698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420
00:35:14.162 qpair failed and we were unable to recover it.
00:35:14.162 [2024-11-02 11:47:14.523931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.162 [2024-11-02 11:47:14.523984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.162 qpair failed and we were unable to recover it. 00:35:14.162 [2024-11-02 11:47:14.524127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.162 [2024-11-02 11:47:14.524154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.162 qpair failed and we were unable to recover it. 00:35:14.162 [2024-11-02 11:47:14.524318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.162 [2024-11-02 11:47:14.524350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.162 qpair failed and we were unable to recover it. 00:35:14.162 [2024-11-02 11:47:14.524516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.162 [2024-11-02 11:47:14.524546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.162 qpair failed and we were unable to recover it. 00:35:14.162 [2024-11-02 11:47:14.524702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.162 [2024-11-02 11:47:14.524731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.162 qpair failed and we were unable to recover it. 00:35:14.162 [2024-11-02 11:47:14.524889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.162 [2024-11-02 11:47:14.524918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.162 qpair failed and we were unable to recover it. 00:35:14.162 [2024-11-02 11:47:14.525088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.162 [2024-11-02 11:47:14.525114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.162 qpair failed and we were unable to recover it. 00:35:14.162 [2024-11-02 11:47:14.525268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.162 [2024-11-02 11:47:14.525295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.162 qpair failed and we were unable to recover it. 00:35:14.162 [2024-11-02 11:47:14.525443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.162 [2024-11-02 11:47:14.525470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.162 qpair failed and we were unable to recover it. 00:35:14.162 [2024-11-02 11:47:14.525640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.162 [2024-11-02 11:47:14.525683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.162 qpair failed and we were unable to recover it. 
00:35:14.162 [2024-11-02 11:47:14.525853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.162 [2024-11-02 11:47:14.525879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.162 qpair failed and we were unable to recover it. 00:35:14.162 [2024-11-02 11:47:14.526056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.162 [2024-11-02 11:47:14.526085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.162 qpair failed and we were unable to recover it. 00:35:14.162 [2024-11-02 11:47:14.526236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.162 [2024-11-02 11:47:14.526292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.162 qpair failed and we were unable to recover it. 00:35:14.162 [2024-11-02 11:47:14.526443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.162 [2024-11-02 11:47:14.526471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.162 qpair failed and we were unable to recover it. 00:35:14.162 [2024-11-02 11:47:14.526587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.162 [2024-11-02 11:47:14.526614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.162 qpair failed and we were unable to recover it. 00:35:14.162 [2024-11-02 11:47:14.526832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.162 [2024-11-02 11:47:14.526859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.162 qpair failed and we were unable to recover it. 00:35:14.162 [2024-11-02 11:47:14.526979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.162 [2024-11-02 11:47:14.527007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.162 qpair failed and we were unable to recover it. 00:35:14.162 [2024-11-02 11:47:14.527167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.162 [2024-11-02 11:47:14.527196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.162 qpair failed and we were unable to recover it. 00:35:14.162 [2024-11-02 11:47:14.527351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.162 [2024-11-02 11:47:14.527379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.162 qpair failed and we were unable to recover it. 00:35:14.162 [2024-11-02 11:47:14.527554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.162 [2024-11-02 11:47:14.527583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.162 qpair failed and we were unable to recover it. 
00:35:14.162 [2024-11-02 11:47:14.527877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.162 [2024-11-02 11:47:14.527940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.162 qpair failed and we were unable to recover it. 00:35:14.162 [2024-11-02 11:47:14.528137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.162 [2024-11-02 11:47:14.528166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.162 qpair failed and we were unable to recover it. 00:35:14.162 [2024-11-02 11:47:14.528340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.162 [2024-11-02 11:47:14.528368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.162 qpair failed and we were unable to recover it. 00:35:14.162 [2024-11-02 11:47:14.528546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.162 [2024-11-02 11:47:14.528573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.162 qpair failed and we were unable to recover it. 00:35:14.162 [2024-11-02 11:47:14.528748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.162 [2024-11-02 11:47:14.528774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.162 qpair failed and we were unable to recover it. 00:35:14.162 [2024-11-02 11:47:14.528899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.162 [2024-11-02 11:47:14.528926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.162 qpair failed and we were unable to recover it. 00:35:14.162 [2024-11-02 11:47:14.529110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.162 [2024-11-02 11:47:14.529136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.162 qpair failed and we were unable to recover it. 00:35:14.162 [2024-11-02 11:47:14.529268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.162 [2024-11-02 11:47:14.529299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.162 qpair failed and we were unable to recover it. 00:35:14.162 [2024-11-02 11:47:14.529469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.162 [2024-11-02 11:47:14.529496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.162 qpair failed and we were unable to recover it. 00:35:14.162 [2024-11-02 11:47:14.529676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.162 [2024-11-02 11:47:14.529733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.162 qpair failed and we were unable to recover it. 
00:35:14.162 [2024-11-02 11:47:14.529869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.162 [2024-11-02 11:47:14.529955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.162 qpair failed and we were unable to recover it. 00:35:14.162 [2024-11-02 11:47:14.530127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.162 [2024-11-02 11:47:14.530156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.162 qpair failed and we were unable to recover it. 00:35:14.162 [2024-11-02 11:47:14.530367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.162 [2024-11-02 11:47:14.530394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.162 qpair failed and we were unable to recover it. 00:35:14.162 [2024-11-02 11:47:14.530527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.162 [2024-11-02 11:47:14.530554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.162 qpair failed and we were unable to recover it. 00:35:14.162 [2024-11-02 11:47:14.530676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.163 [2024-11-02 11:47:14.530702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.163 qpair failed and we were unable to recover it. 00:35:14.163 [2024-11-02 11:47:14.530873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.163 [2024-11-02 11:47:14.530898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.163 qpair failed and we were unable to recover it. 00:35:14.163 [2024-11-02 11:47:14.531090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.163 [2024-11-02 11:47:14.531119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.163 qpair failed and we were unable to recover it. 00:35:14.163 [2024-11-02 11:47:14.531278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.163 [2024-11-02 11:47:14.531322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.163 qpair failed and we were unable to recover it. 00:35:14.163 [2024-11-02 11:47:14.531472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.163 [2024-11-02 11:47:14.531498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.163 qpair failed and we were unable to recover it. 00:35:14.163 [2024-11-02 11:47:14.531701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.163 [2024-11-02 11:47:14.531760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.163 qpair failed and we were unable to recover it. 
00:35:14.163 [2024-11-02 11:47:14.531922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.163 [2024-11-02 11:47:14.531951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.163 qpair failed and we were unable to recover it. 00:35:14.163 [2024-11-02 11:47:14.532117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.163 [2024-11-02 11:47:14.532146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.163 qpair failed and we were unable to recover it. 00:35:14.163 [2024-11-02 11:47:14.532312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.163 [2024-11-02 11:47:14.532339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.163 qpair failed and we were unable to recover it. 00:35:14.163 [2024-11-02 11:47:14.532493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.163 [2024-11-02 11:47:14.532520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.163 qpair failed and we were unable to recover it. 00:35:14.163 [2024-11-02 11:47:14.532666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.163 [2024-11-02 11:47:14.532692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.163 qpair failed and we were unable to recover it. 00:35:14.163 [2024-11-02 11:47:14.532943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.163 [2024-11-02 11:47:14.532969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.163 qpair failed and we were unable to recover it. 00:35:14.163 [2024-11-02 11:47:14.533122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.163 [2024-11-02 11:47:14.533148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.163 qpair failed and we were unable to recover it. 00:35:14.163 [2024-11-02 11:47:14.533300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.163 [2024-11-02 11:47:14.533327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.163 qpair failed and we were unable to recover it. 00:35:14.163 [2024-11-02 11:47:14.533501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.163 [2024-11-02 11:47:14.533528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.163 qpair failed and we were unable to recover it. 00:35:14.163 [2024-11-02 11:47:14.533657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.163 [2024-11-02 11:47:14.533683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.163 qpair failed and we were unable to recover it. 
00:35:14.163 [2024-11-02 11:47:14.533886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.163 [2024-11-02 11:47:14.533914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.163 qpair failed and we were unable to recover it. 00:35:14.163 [2024-11-02 11:47:14.534099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.163 [2024-11-02 11:47:14.534128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.163 qpair failed and we were unable to recover it. 00:35:14.163 [2024-11-02 11:47:14.534297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.163 [2024-11-02 11:47:14.534324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.163 qpair failed and we were unable to recover it. 00:35:14.163 [2024-11-02 11:47:14.534473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.163 [2024-11-02 11:47:14.534500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.163 qpair failed and we were unable to recover it. 00:35:14.163 [2024-11-02 11:47:14.534713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.163 [2024-11-02 11:47:14.534744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.163 qpair failed and we were unable to recover it. 00:35:14.163 [2024-11-02 11:47:14.534899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.163 [2024-11-02 11:47:14.534942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.163 qpair failed and we were unable to recover it. 00:35:14.163 [2024-11-02 11:47:14.535109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.163 [2024-11-02 11:47:14.535154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.163 qpair failed and we were unable to recover it. 00:35:14.163 [2024-11-02 11:47:14.535323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.163 [2024-11-02 11:47:14.535351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.163 qpair failed and we were unable to recover it. 00:35:14.163 [2024-11-02 11:47:14.535513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.163 [2024-11-02 11:47:14.535540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.163 qpair failed and we were unable to recover it. 00:35:14.163 [2024-11-02 11:47:14.535691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.163 [2024-11-02 11:47:14.535718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.163 qpair failed and we were unable to recover it. 
00:35:14.163 [2024-11-02 11:47:14.535865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.163 [2024-11-02 11:47:14.535896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.163 qpair failed and we were unable to recover it. 00:35:14.163 [2024-11-02 11:47:14.536075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.163 [2024-11-02 11:47:14.536105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.163 qpair failed and we were unable to recover it. 00:35:14.163 [2024-11-02 11:47:14.536269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.163 [2024-11-02 11:47:14.536323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.163 qpair failed and we were unable to recover it. 00:35:14.163 [2024-11-02 11:47:14.536477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.163 [2024-11-02 11:47:14.536504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.163 qpair failed and we were unable to recover it. 00:35:14.163 [2024-11-02 11:47:14.536634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.163 [2024-11-02 11:47:14.536664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.163 qpair failed and we were unable to recover it. 00:35:14.163 [2024-11-02 11:47:14.536812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.163 [2024-11-02 11:47:14.536843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.163 qpair failed and we were unable to recover it. 00:35:14.163 [2024-11-02 11:47:14.537077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.163 [2024-11-02 11:47:14.537103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.163 qpair failed and we were unable to recover it. 00:35:14.163 [2024-11-02 11:47:14.537253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.163 [2024-11-02 11:47:14.537286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.163 qpair failed and we were unable to recover it. 00:35:14.163 [2024-11-02 11:47:14.537419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.163 [2024-11-02 11:47:14.537459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.163 qpair failed and we were unable to recover it. 00:35:14.163 [2024-11-02 11:47:14.537611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.163 [2024-11-02 11:47:14.537640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.163 qpair failed and we were unable to recover it. 
00:35:14.163 [2024-11-02 11:47:14.537776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.163 [2024-11-02 11:47:14.537805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.163 qpair failed and we were unable to recover it. 00:35:14.163 [2024-11-02 11:47:14.537980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.163 [2024-11-02 11:47:14.538010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.163 qpair failed and we were unable to recover it. 00:35:14.163 [2024-11-02 11:47:14.538187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.163 [2024-11-02 11:47:14.538225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.163 qpair failed and we were unable to recover it. 00:35:14.163 [2024-11-02 11:47:14.538405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.164 [2024-11-02 11:47:14.538434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.164 qpair failed and we were unable to recover it. 00:35:14.164 [2024-11-02 11:47:14.538570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.164 [2024-11-02 11:47:14.538606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.164 qpair failed and we were unable to recover it. 00:35:14.164 [2024-11-02 11:47:14.538761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.164 [2024-11-02 11:47:14.538787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.164 qpair failed and we were unable to recover it. 00:35:14.164 [2024-11-02 11:47:14.538962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.164 [2024-11-02 11:47:14.538991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.164 qpair failed and we were unable to recover it. 00:35:14.164 [2024-11-02 11:47:14.539176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.164 [2024-11-02 11:47:14.539205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.164 qpair failed and we were unable to recover it. 00:35:14.164 [2024-11-02 11:47:14.539417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.164 [2024-11-02 11:47:14.539457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.164 qpair failed and we were unable to recover it. 00:35:14.164 [2024-11-02 11:47:14.539612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.164 [2024-11-02 11:47:14.539645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.164 qpair failed and we were unable to recover it. 
00:35:14.164 [2024-11-02 11:47:14.539765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.164 [2024-11-02 11:47:14.539795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.164 qpair failed and we were unable to recover it. 00:35:14.164 [2024-11-02 11:47:14.539942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.164 [2024-11-02 11:47:14.539985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.164 qpair failed and we were unable to recover it. 00:35:14.444 [2024-11-02 11:47:14.540128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.444 [2024-11-02 11:47:14.540163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.444 qpair failed and we were unable to recover it. 00:35:14.444 [2024-11-02 11:47:14.540344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.444 [2024-11-02 11:47:14.540372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.444 qpair failed and we were unable to recover it. 00:35:14.444 [2024-11-02 11:47:14.540523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.445 [2024-11-02 11:47:14.540552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.445 qpair failed and we were unable to recover it. 00:35:14.445 [2024-11-02 11:47:14.540707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.445 [2024-11-02 11:47:14.540736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.445 qpair failed and we were unable to recover it. 00:35:14.445 [2024-11-02 11:47:14.540891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.445 [2024-11-02 11:47:14.540921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.445 qpair failed and we were unable to recover it. 00:35:14.445 [2024-11-02 11:47:14.541047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.445 [2024-11-02 11:47:14.541074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.445 qpair failed and we were unable to recover it. 00:35:14.445 [2024-11-02 11:47:14.541234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.445 [2024-11-02 11:47:14.541268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.445 qpair failed and we were unable to recover it. 00:35:14.445 [2024-11-02 11:47:14.541429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.445 [2024-11-02 11:47:14.541455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.445 qpair failed and we were unable to recover it. 
00:35:14.445 [2024-11-02 11:47:14.541645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.445 [2024-11-02 11:47:14.541678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.445 qpair failed and we were unable to recover it. 00:35:14.445 [2024-11-02 11:47:14.541832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.445 [2024-11-02 11:47:14.541861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.445 qpair failed and we were unable to recover it. 00:35:14.445 [2024-11-02 11:47:14.542058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.445 [2024-11-02 11:47:14.542084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.445 qpair failed and we were unable to recover it. 00:35:14.445 [2024-11-02 11:47:14.542280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.445 [2024-11-02 11:47:14.542314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.445 qpair failed and we were unable to recover it. 00:35:14.445 [2024-11-02 11:47:14.542471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.445 [2024-11-02 11:47:14.542498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.445 qpair failed and we were unable to recover it. 00:35:14.445 [2024-11-02 11:47:14.542662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.445 [2024-11-02 11:47:14.542752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.445 qpair failed and we were unable to recover it. 00:35:14.445 [2024-11-02 11:47:14.542889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.445 [2024-11-02 11:47:14.542915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.445 qpair failed and we were unable to recover it. 00:35:14.445 [2024-11-02 11:47:14.543094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.445 [2024-11-02 11:47:14.543120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.445 qpair failed and we were unable to recover it. 00:35:14.445 [2024-11-02 11:47:14.543266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.445 [2024-11-02 11:47:14.543293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.445 qpair failed and we were unable to recover it. 00:35:14.445 [2024-11-02 11:47:14.543456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.445 [2024-11-02 11:47:14.543482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.445 qpair failed and we were unable to recover it. 
00:35:14.445 [2024-11-02 11:47:14.543677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.445 [2024-11-02 11:47:14.543706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.445 qpair failed and we were unable to recover it. 00:35:14.445 [2024-11-02 11:47:14.543962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.445 [2024-11-02 11:47:14.544015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.445 qpair failed and we were unable to recover it. 00:35:14.445 [2024-11-02 11:47:14.544216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.445 [2024-11-02 11:47:14.544254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.445 qpair failed and we were unable to recover it. 00:35:14.445 [2024-11-02 11:47:14.544385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.445 [2024-11-02 11:47:14.544412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.445 qpair failed and we were unable to recover it. 00:35:14.445 [2024-11-02 11:47:14.544577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.445 [2024-11-02 11:47:14.544617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.445 qpair failed and we were unable to recover it. 00:35:14.445 [2024-11-02 11:47:14.544783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.445 [2024-11-02 11:47:14.544812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.445 qpair failed and we were unable to recover it. 00:35:14.445 [2024-11-02 11:47:14.544989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.445 [2024-11-02 11:47:14.545020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.445 qpair failed and we were unable to recover it. 00:35:14.445 [2024-11-02 11:47:14.545215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.445 [2024-11-02 11:47:14.545242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.445 qpair failed and we were unable to recover it. 00:35:14.445 [2024-11-02 11:47:14.545390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.445 [2024-11-02 11:47:14.545422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.445 qpair failed and we were unable to recover it. 00:35:14.445 [2024-11-02 11:47:14.545578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.445 [2024-11-02 11:47:14.545604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.445 qpair failed and we were unable to recover it. 
00:35:14.445 [2024-11-02 11:47:14.545713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.445 [2024-11-02 11:47:14.545740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.445 qpair failed and we were unable to recover it. 00:35:14.445 [2024-11-02 11:47:14.545888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.445 [2024-11-02 11:47:14.545915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.445 qpair failed and we were unable to recover it. 00:35:14.445 [2024-11-02 11:47:14.546090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.445 [2024-11-02 11:47:14.546120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.445 qpair failed and we were unable to recover it. 00:35:14.445 [2024-11-02 11:47:14.546268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.445 [2024-11-02 11:47:14.546313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.445 qpair failed and we were unable to recover it. 00:35:14.445 [2024-11-02 11:47:14.546495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.445 [2024-11-02 11:47:14.546522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.445 qpair failed and we were unable to recover it. 00:35:14.445 [2024-11-02 11:47:14.546679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.445 [2024-11-02 11:47:14.546706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.445 qpair failed and we were unable to recover it. 00:35:14.445 [2024-11-02 11:47:14.546850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.445 [2024-11-02 11:47:14.546877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.445 qpair failed and we were unable to recover it. 00:35:14.445 [2024-11-02 11:47:14.547048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.445 [2024-11-02 11:47:14.547075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.445 qpair failed and we were unable to recover it. 00:35:14.445 [2024-11-02 11:47:14.547269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.445 [2024-11-02 11:47:14.547316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.445 qpair failed and we were unable to recover it. 00:35:14.445 [2024-11-02 11:47:14.547464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.445 [2024-11-02 11:47:14.547492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.445 qpair failed and we were unable to recover it. 
00:35:14.445 [2024-11-02 11:47:14.547643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.445 [2024-11-02 11:47:14.547670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.445 qpair failed and we were unable to recover it. 00:35:14.445 [2024-11-02 11:47:14.547855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.445 [2024-11-02 11:47:14.547882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.445 qpair failed and we were unable to recover it. 00:35:14.445 [2024-11-02 11:47:14.548042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.446 [2024-11-02 11:47:14.548070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.446 qpair failed and we were unable to recover it. 00:35:14.446 [2024-11-02 11:47:14.548194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.446 [2024-11-02 11:47:14.548221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.446 qpair failed and we were unable to recover it. 00:35:14.446 [2024-11-02 11:47:14.548382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.446 [2024-11-02 11:47:14.548409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.446 qpair failed and we were unable to recover it. 00:35:14.446 [2024-11-02 11:47:14.548557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.446 [2024-11-02 11:47:14.548584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.446 qpair failed and we were unable to recover it. 00:35:14.446 [2024-11-02 11:47:14.548746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.446 [2024-11-02 11:47:14.548772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.446 qpair failed and we were unable to recover it. 00:35:14.446 [2024-11-02 11:47:14.548945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.446 [2024-11-02 11:47:14.548989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.446 qpair failed and we were unable to recover it. 00:35:14.446 [2024-11-02 11:47:14.549141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.446 [2024-11-02 11:47:14.549169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.446 qpair failed and we were unable to recover it. 00:35:14.446 [2024-11-02 11:47:14.549324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.446 [2024-11-02 11:47:14.549351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.446 qpair failed and we were unable to recover it. 
00:35:14.446 [2024-11-02 11:47:14.549501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.446 [2024-11-02 11:47:14.549529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.446 qpair failed and we were unable to recover it. 00:35:14.446 [2024-11-02 11:47:14.549705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.446 [2024-11-02 11:47:14.549731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.446 qpair failed and we were unable to recover it. 00:35:14.446 [2024-11-02 11:47:14.549860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.446 [2024-11-02 11:47:14.549887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.446 qpair failed and we were unable to recover it. 00:35:14.446 [2024-11-02 11:47:14.550059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.446 [2024-11-02 11:47:14.550086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.446 qpair failed and we were unable to recover it. 00:35:14.446 [2024-11-02 11:47:14.550211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.446 [2024-11-02 11:47:14.550238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.446 qpair failed and we were unable to recover it. 00:35:14.446 [2024-11-02 11:47:14.550448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.446 [2024-11-02 11:47:14.550480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.446 qpair failed and we were unable to recover it. 00:35:14.446 [2024-11-02 11:47:14.550633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.446 [2024-11-02 11:47:14.550660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.446 qpair failed and we were unable to recover it. 00:35:14.446 [2024-11-02 11:47:14.550777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.446 [2024-11-02 11:47:14.550803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.446 qpair failed and we were unable to recover it. 00:35:14.446 [2024-11-02 11:47:14.550954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.446 [2024-11-02 11:47:14.550980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.446 qpair failed and we were unable to recover it. 00:35:14.446 [2024-11-02 11:47:14.551123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.446 [2024-11-02 11:47:14.551150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.446 qpair failed and we were unable to recover it. 
00:35:14.451 [2024-11-02 11:47:14.589010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.451 [2024-11-02 11:47:14.589036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.451 qpair failed and we were unable to recover it. 00:35:14.451 [2024-11-02 11:47:14.589185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.451 [2024-11-02 11:47:14.589212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.451 qpair failed and we were unable to recover it. 00:35:14.451 [2024-11-02 11:47:14.589369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.451 [2024-11-02 11:47:14.589395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.451 qpair failed and we were unable to recover it. 00:35:14.451 [2024-11-02 11:47:14.589523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.451 [2024-11-02 11:47:14.589550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.451 qpair failed and we were unable to recover it. 00:35:14.451 [2024-11-02 11:47:14.589708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.451 [2024-11-02 11:47:14.589735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.451 qpair failed and we were unable to recover it. 00:35:14.451 [2024-11-02 11:47:14.589908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.451 [2024-11-02 11:47:14.589935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.451 qpair failed and we were unable to recover it. 00:35:14.451 [2024-11-02 11:47:14.590108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.451 [2024-11-02 11:47:14.590151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.451 qpair failed and we were unable to recover it. 00:35:14.452 [2024-11-02 11:47:14.590288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.452 [2024-11-02 11:47:14.590333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.452 qpair failed and we were unable to recover it. 00:35:14.452 [2024-11-02 11:47:14.590482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.452 [2024-11-02 11:47:14.590508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.452 qpair failed and we were unable to recover it. 00:35:14.452 [2024-11-02 11:47:14.590672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.452 [2024-11-02 11:47:14.590698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.452 qpair failed and we were unable to recover it. 
00:35:14.452 [2024-11-02 11:47:14.590908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.452 [2024-11-02 11:47:14.590935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.452 qpair failed and we were unable to recover it. 00:35:14.452 [2024-11-02 11:47:14.591110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.452 [2024-11-02 11:47:14.591137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.452 qpair failed and we were unable to recover it. 00:35:14.452 [2024-11-02 11:47:14.591266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.452 [2024-11-02 11:47:14.591293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.452 qpair failed and we were unable to recover it. 00:35:14.452 [2024-11-02 11:47:14.591467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.452 [2024-11-02 11:47:14.591493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.452 qpair failed and we were unable to recover it. 00:35:14.452 [2024-11-02 11:47:14.591640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.452 [2024-11-02 11:47:14.591666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.452 qpair failed and we were unable to recover it. 00:35:14.452 [2024-11-02 11:47:14.591812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.452 [2024-11-02 11:47:14.591838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.452 qpair failed and we were unable to recover it. 00:35:14.452 [2024-11-02 11:47:14.591986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.452 [2024-11-02 11:47:14.592013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.452 qpair failed and we were unable to recover it. 00:35:14.452 [2024-11-02 11:47:14.592220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.452 [2024-11-02 11:47:14.592246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.452 qpair failed and we were unable to recover it. 00:35:14.452 [2024-11-02 11:47:14.592401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.452 [2024-11-02 11:47:14.592427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.452 qpair failed and we were unable to recover it. 00:35:14.452 [2024-11-02 11:47:14.592575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.452 [2024-11-02 11:47:14.592601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.452 qpair failed and we were unable to recover it. 
00:35:14.452 [2024-11-02 11:47:14.592757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.452 [2024-11-02 11:47:14.592784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.452 qpair failed and we were unable to recover it. 00:35:14.452 [2024-11-02 11:47:14.592938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.452 [2024-11-02 11:47:14.592964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.452 qpair failed and we were unable to recover it. 00:35:14.452 [2024-11-02 11:47:14.593108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.452 [2024-11-02 11:47:14.593134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.452 qpair failed and we were unable to recover it. 00:35:14.452 [2024-11-02 11:47:14.593313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.452 [2024-11-02 11:47:14.593340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.452 qpair failed and we were unable to recover it. 00:35:14.452 [2024-11-02 11:47:14.593499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.452 [2024-11-02 11:47:14.593530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.452 qpair failed and we were unable to recover it. 00:35:14.452 [2024-11-02 11:47:14.593735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.452 [2024-11-02 11:47:14.593761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.452 qpair failed and we were unable to recover it. 00:35:14.452 [2024-11-02 11:47:14.593869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.452 [2024-11-02 11:47:14.593895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.452 qpair failed and we were unable to recover it. 00:35:14.452 [2024-11-02 11:47:14.594041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.452 [2024-11-02 11:47:14.594067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.452 qpair failed and we were unable to recover it. 00:35:14.452 [2024-11-02 11:47:14.594206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.452 [2024-11-02 11:47:14.594246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.452 qpair failed and we were unable to recover it. 00:35:14.452 [2024-11-02 11:47:14.594415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.452 [2024-11-02 11:47:14.594445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.452 qpair failed and we were unable to recover it. 
00:35:14.452 [2024-11-02 11:47:14.594641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.452 [2024-11-02 11:47:14.594672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.452 qpair failed and we were unable to recover it. 00:35:14.452 [2024-11-02 11:47:14.594905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.452 [2024-11-02 11:47:14.594932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.452 qpair failed and we were unable to recover it. 00:35:14.452 [2024-11-02 11:47:14.595076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.452 [2024-11-02 11:47:14.595102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.452 qpair failed and we were unable to recover it. 00:35:14.452 [2024-11-02 11:47:14.595226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.452 [2024-11-02 11:47:14.595287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.452 qpair failed and we were unable to recover it. 00:35:14.452 [2024-11-02 11:47:14.595477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.452 [2024-11-02 11:47:14.595504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.452 qpair failed and we were unable to recover it. 00:35:14.452 [2024-11-02 11:47:14.595645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.452 [2024-11-02 11:47:14.595672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.452 qpair failed and we were unable to recover it. 00:35:14.452 [2024-11-02 11:47:14.595824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.452 [2024-11-02 11:47:14.595850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.452 qpair failed and we were unable to recover it. 00:35:14.452 [2024-11-02 11:47:14.595989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.452 [2024-11-02 11:47:14.596015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.452 qpair failed and we were unable to recover it. 00:35:14.452 [2024-11-02 11:47:14.596169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.452 [2024-11-02 11:47:14.596196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.452 qpair failed and we were unable to recover it. 00:35:14.452 [2024-11-02 11:47:14.596374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.452 [2024-11-02 11:47:14.596401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.452 qpair failed and we were unable to recover it. 
00:35:14.452 [2024-11-02 11:47:14.596572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.452 [2024-11-02 11:47:14.596599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.452 qpair failed and we were unable to recover it. 00:35:14.452 [2024-11-02 11:47:14.596724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.452 [2024-11-02 11:47:14.596751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.452 qpair failed and we were unable to recover it. 00:35:14.452 [2024-11-02 11:47:14.596876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.452 [2024-11-02 11:47:14.596903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.452 qpair failed and we were unable to recover it. 00:35:14.452 [2024-11-02 11:47:14.597054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.452 [2024-11-02 11:47:14.597083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.452 qpair failed and we were unable to recover it. 00:35:14.452 [2024-11-02 11:47:14.597235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.452 [2024-11-02 11:47:14.597269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.452 qpair failed and we were unable to recover it. 00:35:14.452 [2024-11-02 11:47:14.597388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.453 [2024-11-02 11:47:14.597415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.453 qpair failed and we were unable to recover it. 00:35:14.453 [2024-11-02 11:47:14.597592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.453 [2024-11-02 11:47:14.597619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.453 qpair failed and we were unable to recover it. 00:35:14.453 [2024-11-02 11:47:14.597794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.453 [2024-11-02 11:47:14.597820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.453 qpair failed and we were unable to recover it. 00:35:14.453 [2024-11-02 11:47:14.598013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.453 [2024-11-02 11:47:14.598042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.453 qpair failed and we were unable to recover it. 00:35:14.453 [2024-11-02 11:47:14.598211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.453 [2024-11-02 11:47:14.598242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.453 qpair failed and we were unable to recover it. 
00:35:14.453 [2024-11-02 11:47:14.598397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.453 [2024-11-02 11:47:14.598424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.453 qpair failed and we were unable to recover it. 00:35:14.453 [2024-11-02 11:47:14.598568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.453 [2024-11-02 11:47:14.598600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.453 qpair failed and we were unable to recover it. 00:35:14.453 [2024-11-02 11:47:14.598814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.453 [2024-11-02 11:47:14.598865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.453 qpair failed and we were unable to recover it. 00:35:14.453 [2024-11-02 11:47:14.599038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.453 [2024-11-02 11:47:14.599065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.453 qpair failed and we were unable to recover it. 00:35:14.453 [2024-11-02 11:47:14.599187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.453 [2024-11-02 11:47:14.599213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.453 qpair failed and we were unable to recover it. 00:35:14.453 [2024-11-02 11:47:14.599410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.453 [2024-11-02 11:47:14.599439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.453 qpair failed and we were unable to recover it. 00:35:14.453 [2024-11-02 11:47:14.599589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.453 [2024-11-02 11:47:14.599616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.453 qpair failed and we were unable to recover it. 00:35:14.453 [2024-11-02 11:47:14.599785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.453 [2024-11-02 11:47:14.599815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.453 qpair failed and we were unable to recover it. 00:35:14.453 [2024-11-02 11:47:14.599988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.453 [2024-11-02 11:47:14.600015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.453 qpair failed and we were unable to recover it. 00:35:14.453 [2024-11-02 11:47:14.600132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.453 [2024-11-02 11:47:14.600158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.453 qpair failed and we were unable to recover it. 
00:35:14.453 [2024-11-02 11:47:14.600308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.453 [2024-11-02 11:47:14.600336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.453 qpair failed and we were unable to recover it. 00:35:14.453 [2024-11-02 11:47:14.600502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.453 [2024-11-02 11:47:14.600548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.453 qpair failed and we were unable to recover it. 00:35:14.453 [2024-11-02 11:47:14.600712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.453 [2024-11-02 11:47:14.600738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.453 qpair failed and we were unable to recover it. 00:35:14.453 [2024-11-02 11:47:14.600887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.453 [2024-11-02 11:47:14.600913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.453 qpair failed and we were unable to recover it. 00:35:14.453 [2024-11-02 11:47:14.601062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.453 [2024-11-02 11:47:14.601088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.453 qpair failed and we were unable to recover it. 00:35:14.453 [2024-11-02 11:47:14.601211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.453 [2024-11-02 11:47:14.601238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.453 qpair failed and we were unable to recover it. 00:35:14.453 [2024-11-02 11:47:14.601403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.453 [2024-11-02 11:47:14.601430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.453 qpair failed and we were unable to recover it. 00:35:14.453 [2024-11-02 11:47:14.601594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.453 [2024-11-02 11:47:14.601623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.453 qpair failed and we were unable to recover it. 00:35:14.453 [2024-11-02 11:47:14.601796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.453 [2024-11-02 11:47:14.601822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.453 qpair failed and we were unable to recover it. 00:35:14.453 [2024-11-02 11:47:14.601968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.453 [2024-11-02 11:47:14.601994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.453 qpair failed and we were unable to recover it. 
00:35:14.453 [2024-11-02 11:47:14.602213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.453 [2024-11-02 11:47:14.602242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.453 qpair failed and we were unable to recover it. 00:35:14.453 [2024-11-02 11:47:14.602396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.453 [2024-11-02 11:47:14.602423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.453 qpair failed and we were unable to recover it. 00:35:14.453 [2024-11-02 11:47:14.602573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.453 [2024-11-02 11:47:14.602600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.453 qpair failed and we were unable to recover it. 00:35:14.453 [2024-11-02 11:47:14.602750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.453 [2024-11-02 11:47:14.602778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.453 qpair failed and we were unable to recover it. 00:35:14.453 [2024-11-02 11:47:14.602930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.453 [2024-11-02 11:47:14.602956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.453 qpair failed and we were unable to recover it. 00:35:14.453 [2024-11-02 11:47:14.603099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.453 [2024-11-02 11:47:14.603124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.453 qpair failed and we were unable to recover it. 00:35:14.453 [2024-11-02 11:47:14.603317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.453 [2024-11-02 11:47:14.603345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.453 qpair failed and we were unable to recover it. 00:35:14.453 [2024-11-02 11:47:14.603493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.453 [2024-11-02 11:47:14.603519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.453 qpair failed and we were unable to recover it. 00:35:14.453 [2024-11-02 11:47:14.603680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.453 [2024-11-02 11:47:14.603709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.453 qpair failed and we were unable to recover it. 00:35:14.453 [2024-11-02 11:47:14.603896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.453 [2024-11-02 11:47:14.603923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.453 qpair failed and we were unable to recover it. 
00:35:14.453 [2024-11-02 11:47:14.604149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.453 [2024-11-02 11:47:14.604176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.453 qpair failed and we were unable to recover it. 00:35:14.453 [2024-11-02 11:47:14.604350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.453 [2024-11-02 11:47:14.604377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.453 qpair failed and we were unable to recover it. 00:35:14.453 [2024-11-02 11:47:14.604508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.454 [2024-11-02 11:47:14.604545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.454 qpair failed and we were unable to recover it. 00:35:14.454 [2024-11-02 11:47:14.604695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.454 [2024-11-02 11:47:14.604722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.454 qpair failed and we were unable to recover it. 00:35:14.454 [2024-11-02 11:47:14.604870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.454 [2024-11-02 11:47:14.604896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.454 qpair failed and we were unable to recover it. 00:35:14.454 [2024-11-02 11:47:14.605075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.454 [2024-11-02 11:47:14.605103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.454 qpair failed and we were unable to recover it. 00:35:14.454 [2024-11-02 11:47:14.605252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.454 [2024-11-02 11:47:14.605286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.454 qpair failed and we were unable to recover it. 00:35:14.454 [2024-11-02 11:47:14.605439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.454 [2024-11-02 11:47:14.605465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.454 qpair failed and we were unable to recover it. 00:35:14.454 [2024-11-02 11:47:14.605588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.454 [2024-11-02 11:47:14.605616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.454 qpair failed and we were unable to recover it. 00:35:14.454 [2024-11-02 11:47:14.605788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.454 [2024-11-02 11:47:14.605815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.454 qpair failed and we were unable to recover it. 
00:35:14.454 [2024-11-02 11:47:14.605956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.454 [2024-11-02 11:47:14.605982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.454 qpair failed and we were unable to recover it. 00:35:14.454 [2024-11-02 11:47:14.606133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.454 [2024-11-02 11:47:14.606160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.454 qpair failed and we were unable to recover it. 00:35:14.454 [2024-11-02 11:47:14.606392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.454 [2024-11-02 11:47:14.606419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.454 qpair failed and we were unable to recover it. 00:35:14.454 [2024-11-02 11:47:14.606590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.454 [2024-11-02 11:47:14.606619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.454 qpair failed and we were unable to recover it. 00:35:14.454 [2024-11-02 11:47:14.606792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.454 [2024-11-02 11:47:14.606818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.454 qpair failed and we were unable to recover it. 00:35:14.454 [2024-11-02 11:47:14.606967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.454 [2024-11-02 11:47:14.606994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.454 qpair failed and we were unable to recover it. 00:35:14.454 [2024-11-02 11:47:14.607141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.454 [2024-11-02 11:47:14.607167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.454 qpair failed and we were unable to recover it. 00:35:14.454 [2024-11-02 11:47:14.607407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.454 [2024-11-02 11:47:14.607434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.454 qpair failed and we were unable to recover it. 00:35:14.454 [2024-11-02 11:47:14.607555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.454 [2024-11-02 11:47:14.607582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.454 qpair failed and we were unable to recover it. 00:35:14.454 [2024-11-02 11:47:14.607730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.454 [2024-11-02 11:47:14.607757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.454 qpair failed and we were unable to recover it. 
00:35:14.454 [2024-11-02 11:47:14.607904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.454 [2024-11-02 11:47:14.607930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.454 qpair failed and we were unable to recover it. 00:35:14.454 [2024-11-02 11:47:14.608130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.454 [2024-11-02 11:47:14.608159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.454 qpair failed and we were unable to recover it. 00:35:14.454 [2024-11-02 11:47:14.608309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.454 [2024-11-02 11:47:14.608336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.454 qpair failed and we were unable to recover it. 00:35:14.454 [2024-11-02 11:47:14.608505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.454 [2024-11-02 11:47:14.608531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.454 qpair failed and we were unable to recover it. 00:35:14.454 [2024-11-02 11:47:14.608679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.454 [2024-11-02 11:47:14.608705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.454 qpair failed and we were unable to recover it. 00:35:14.454 [2024-11-02 11:47:14.608822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.454 [2024-11-02 11:47:14.608853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.454 qpair failed and we were unable to recover it. 00:35:14.454 [2024-11-02 11:47:14.609003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.454 [2024-11-02 11:47:14.609030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.454 qpair failed and we were unable to recover it. 00:35:14.454 [2024-11-02 11:47:14.609178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.454 [2024-11-02 11:47:14.609205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.454 qpair failed and we were unable to recover it. 00:35:14.454 [2024-11-02 11:47:14.609349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.454 [2024-11-02 11:47:14.609375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.454 qpair failed and we were unable to recover it. 00:35:14.454 [2024-11-02 11:47:14.609520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.454 [2024-11-02 11:47:14.609547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.454 qpair failed and we were unable to recover it. 
00:35:14.454 [2024-11-02 11:47:14.609690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.454 [2024-11-02 11:47:14.609716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.454 qpair failed and we were unable to recover it. 00:35:14.454 [2024-11-02 11:47:14.609861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.454 [2024-11-02 11:47:14.609902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.454 qpair failed and we were unable to recover it. 00:35:14.454 [2024-11-02 11:47:14.610067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.454 [2024-11-02 11:47:14.610096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.454 qpair failed and we were unable to recover it. 00:35:14.454 [2024-11-02 11:47:14.610243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.454 [2024-11-02 11:47:14.610281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.454 qpair failed and we were unable to recover it. 00:35:14.454 [2024-11-02 11:47:14.610433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.454 [2024-11-02 11:47:14.610459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.454 qpair failed and we were unable to recover it. 00:35:14.454 [2024-11-02 11:47:14.610615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.454 [2024-11-02 11:47:14.610641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.454 qpair failed and we were unable to recover it. 00:35:14.454 [2024-11-02 11:47:14.610764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.454 [2024-11-02 11:47:14.610791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.454 qpair failed and we were unable to recover it. 00:35:14.454 [2024-11-02 11:47:14.610945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.455 [2024-11-02 11:47:14.610971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.455 qpair failed and we were unable to recover it. 00:35:14.455 [2024-11-02 11:47:14.611123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.455 [2024-11-02 11:47:14.611152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.455 qpair failed and we were unable to recover it. 00:35:14.455 [2024-11-02 11:47:14.611325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.455 [2024-11-02 11:47:14.611352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.455 qpair failed and we were unable to recover it. 
00:35:14.455 [2024-11-02 11:47:14.611497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.455 [2024-11-02 11:47:14.611523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.455 qpair failed and we were unable to recover it. 00:35:14.455 [2024-11-02 11:47:14.611704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.455 [2024-11-02 11:47:14.611730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.455 qpair failed and we were unable to recover it. 00:35:14.455 [2024-11-02 11:47:14.611884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.455 [2024-11-02 11:47:14.611910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.455 qpair failed and we were unable to recover it. 00:35:14.455 [2024-11-02 11:47:14.612026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.455 [2024-11-02 11:47:14.612052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.455 qpair failed and we were unable to recover it. 00:35:14.455 [2024-11-02 11:47:14.612174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.455 [2024-11-02 11:47:14.612200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.455 qpair failed and we were unable to recover it. 00:35:14.455 [2024-11-02 11:47:14.612354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.455 [2024-11-02 11:47:14.612380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.455 qpair failed and we were unable to recover it. 00:35:14.455 [2024-11-02 11:47:14.612552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.455 [2024-11-02 11:47:14.612578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.455 qpair failed and we were unable to recover it. 00:35:14.455 [2024-11-02 11:47:14.612724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.455 [2024-11-02 11:47:14.612750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.455 qpair failed and we were unable to recover it. 00:35:14.455 [2024-11-02 11:47:14.612902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.455 [2024-11-02 11:47:14.612928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.455 qpair failed and we were unable to recover it. 00:35:14.455 [2024-11-02 11:47:14.613073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.455 [2024-11-02 11:47:14.613100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.455 qpair failed and we were unable to recover it. 
00:35:14.455 [2024-11-02 11:47:14.613227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:14.455 [2024-11-02 11:47:14.613253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420
00:35:14.455 qpair failed and we were unable to recover it.
[... the same pair of errors repeats continuously from 11:47:14.613 through 11:47:14.652: posix_sock_create reports connect() failed with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x1ddc690 (addr=10.0.0.2, port=4420), and every attempt ends with "qpair failed and we were unable to recover it." ...]
00:35:14.460 [2024-11-02 11:47:14.652713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:14.460 [2024-11-02 11:47:14.652740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420
00:35:14.460 qpair failed and we were unable to recover it.
00:35:14.460 [2024-11-02 11:47:14.652865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.460 [2024-11-02 11:47:14.652914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.460 qpair failed and we were unable to recover it. 00:35:14.460 [2024-11-02 11:47:14.653103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.460 [2024-11-02 11:47:14.653133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.460 qpair failed and we were unable to recover it. 00:35:14.460 [2024-11-02 11:47:14.653329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.460 [2024-11-02 11:47:14.653359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.460 qpair failed and we were unable to recover it. 00:35:14.460 [2024-11-02 11:47:14.653543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.461 [2024-11-02 11:47:14.653578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.461 qpair failed and we were unable to recover it. 00:35:14.461 [2024-11-02 11:47:14.653700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.461 [2024-11-02 11:47:14.653727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.461 qpair failed and we were unable to recover it. 00:35:14.461 [2024-11-02 11:47:14.653868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.461 [2024-11-02 11:47:14.653900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.461 qpair failed and we were unable to recover it. 00:35:14.461 [2024-11-02 11:47:14.654106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.461 [2024-11-02 11:47:14.654136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.461 qpair failed and we were unable to recover it. 00:35:14.461 [2024-11-02 11:47:14.654347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.461 [2024-11-02 11:47:14.654374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.461 qpair failed and we were unable to recover it. 00:35:14.461 [2024-11-02 11:47:14.654520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.461 [2024-11-02 11:47:14.654550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.461 qpair failed and we were unable to recover it. 00:35:14.461 [2024-11-02 11:47:14.654710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.461 [2024-11-02 11:47:14.654758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.461 qpair failed and we were unable to recover it. 
00:35:14.461 [2024-11-02 11:47:14.654908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.461 [2024-11-02 11:47:14.654950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.461 qpair failed and we were unable to recover it. 00:35:14.461 [2024-11-02 11:47:14.655152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.461 [2024-11-02 11:47:14.655179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.461 qpair failed and we were unable to recover it. 00:35:14.461 [2024-11-02 11:47:14.655308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.461 [2024-11-02 11:47:14.655336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.461 qpair failed and we were unable to recover it. 00:35:14.461 [2024-11-02 11:47:14.655466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.461 [2024-11-02 11:47:14.655493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.461 qpair failed and we were unable to recover it. 00:35:14.461 [2024-11-02 11:47:14.655639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.461 [2024-11-02 11:47:14.655667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.461 qpair failed and we were unable to recover it. 00:35:14.461 [2024-11-02 11:47:14.655812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.461 [2024-11-02 11:47:14.655839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.461 qpair failed and we were unable to recover it. 00:35:14.461 [2024-11-02 11:47:14.656032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.461 [2024-11-02 11:47:14.656062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.461 qpair failed and we were unable to recover it. 00:35:14.461 [2024-11-02 11:47:14.656239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.461 [2024-11-02 11:47:14.656273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.461 qpair failed and we were unable to recover it. 00:35:14.461 [2024-11-02 11:47:14.656433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.461 [2024-11-02 11:47:14.656460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.461 qpair failed and we were unable to recover it. 00:35:14.461 [2024-11-02 11:47:14.656582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.461 [2024-11-02 11:47:14.656610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.461 qpair failed and we were unable to recover it. 
00:35:14.461 [2024-11-02 11:47:14.656785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.461 [2024-11-02 11:47:14.656812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.461 qpair failed and we were unable to recover it. 00:35:14.461 [2024-11-02 11:47:14.656962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.461 [2024-11-02 11:47:14.657007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.461 qpair failed and we were unable to recover it. 00:35:14.461 [2024-11-02 11:47:14.657141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.461 [2024-11-02 11:47:14.657171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.461 qpair failed and we were unable to recover it. 00:35:14.461 [2024-11-02 11:47:14.657344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.461 [2024-11-02 11:47:14.657371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.461 qpair failed and we were unable to recover it. 00:35:14.461 [2024-11-02 11:47:14.657540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.461 [2024-11-02 11:47:14.657579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.461 qpair failed and we were unable to recover it. 00:35:14.461 [2024-11-02 11:47:14.657761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.461 [2024-11-02 11:47:14.657791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.461 qpair failed and we were unable to recover it. 00:35:14.461 [2024-11-02 11:47:14.657950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.461 [2024-11-02 11:47:14.657979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.461 qpair failed and we were unable to recover it. 00:35:14.461 [2024-11-02 11:47:14.658150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.461 [2024-11-02 11:47:14.658177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.461 qpair failed and we were unable to recover it. 00:35:14.461 [2024-11-02 11:47:14.658320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.461 [2024-11-02 11:47:14.658347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.461 qpair failed and we were unable to recover it. 00:35:14.461 [2024-11-02 11:47:14.658492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.461 [2024-11-02 11:47:14.658521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.461 qpair failed and we were unable to recover it. 
00:35:14.461 [2024-11-02 11:47:14.658664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.461 [2024-11-02 11:47:14.658694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.461 qpair failed and we were unable to recover it. 00:35:14.461 [2024-11-02 11:47:14.658861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.461 [2024-11-02 11:47:14.658888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.461 qpair failed and we were unable to recover it. 00:35:14.461 [2024-11-02 11:47:14.659051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.461 [2024-11-02 11:47:14.659081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.461 qpair failed and we were unable to recover it. 00:35:14.461 [2024-11-02 11:47:14.659272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.461 [2024-11-02 11:47:14.659307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.461 qpair failed and we were unable to recover it. 00:35:14.461 [2024-11-02 11:47:14.659480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.461 [2024-11-02 11:47:14.659507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.461 qpair failed and we were unable to recover it. 00:35:14.461 [2024-11-02 11:47:14.659658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.461 [2024-11-02 11:47:14.659685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.461 qpair failed and we were unable to recover it. 00:35:14.461 [2024-11-02 11:47:14.659809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.461 [2024-11-02 11:47:14.659841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.461 qpair failed and we were unable to recover it. 00:35:14.461 [2024-11-02 11:47:14.660039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.461 [2024-11-02 11:47:14.660068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.461 qpair failed and we were unable to recover it. 00:35:14.461 [2024-11-02 11:47:14.660199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.461 [2024-11-02 11:47:14.660230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.461 qpair failed and we were unable to recover it. 00:35:14.461 [2024-11-02 11:47:14.660411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.461 [2024-11-02 11:47:14.660439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.461 qpair failed and we were unable to recover it. 
00:35:14.461 [2024-11-02 11:47:14.660599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.461 [2024-11-02 11:47:14.660629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.461 qpair failed and we were unable to recover it. 00:35:14.461 [2024-11-02 11:47:14.660831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.461 [2024-11-02 11:47:14.660858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.461 qpair failed and we were unable to recover it. 00:35:14.461 [2024-11-02 11:47:14.661050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.462 [2024-11-02 11:47:14.661080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.462 qpair failed and we were unable to recover it. 00:35:14.462 [2024-11-02 11:47:14.661250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.462 [2024-11-02 11:47:14.661283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.462 qpair failed and we were unable to recover it. 00:35:14.462 [2024-11-02 11:47:14.661452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.462 [2024-11-02 11:47:14.661479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.462 qpair failed and we were unable to recover it. 00:35:14.462 [2024-11-02 11:47:14.661633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.462 [2024-11-02 11:47:14.661660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.462 qpair failed and we were unable to recover it. 00:35:14.462 [2024-11-02 11:47:14.661780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.462 [2024-11-02 11:47:14.661808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.462 qpair failed and we were unable to recover it. 00:35:14.462 [2024-11-02 11:47:14.661983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.462 [2024-11-02 11:47:14.662010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.462 qpair failed and we were unable to recover it. 00:35:14.462 [2024-11-02 11:47:14.662183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.462 [2024-11-02 11:47:14.662210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.462 qpair failed and we were unable to recover it. 00:35:14.462 [2024-11-02 11:47:14.662369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.462 [2024-11-02 11:47:14.662396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.462 qpair failed and we were unable to recover it. 
00:35:14.462 [2024-11-02 11:47:14.662590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.462 [2024-11-02 11:47:14.662623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.462 qpair failed and we were unable to recover it. 00:35:14.462 [2024-11-02 11:47:14.662792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.462 [2024-11-02 11:47:14.662819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.462 qpair failed and we were unable to recover it. 00:35:14.462 [2024-11-02 11:47:14.662959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.462 [2024-11-02 11:47:14.663002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.462 qpair failed and we were unable to recover it. 00:35:14.462 [2024-11-02 11:47:14.663177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.462 [2024-11-02 11:47:14.663204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.462 qpair failed and we were unable to recover it. 00:35:14.462 [2024-11-02 11:47:14.663359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.462 [2024-11-02 11:47:14.663386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.462 qpair failed and we were unable to recover it. 00:35:14.462 [2024-11-02 11:47:14.663536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.462 [2024-11-02 11:47:14.663571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.462 qpair failed and we were unable to recover it. 00:35:14.462 [2024-11-02 11:47:14.663710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.462 [2024-11-02 11:47:14.663736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.462 qpair failed and we were unable to recover it. 00:35:14.462 [2024-11-02 11:47:14.663909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.462 [2024-11-02 11:47:14.663936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.462 qpair failed and we were unable to recover it. 00:35:14.462 [2024-11-02 11:47:14.664090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.462 [2024-11-02 11:47:14.664134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.462 qpair failed and we were unable to recover it. 00:35:14.462 [2024-11-02 11:47:14.664328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.462 [2024-11-02 11:47:14.664355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.462 qpair failed and we were unable to recover it. 
00:35:14.462 [2024-11-02 11:47:14.664480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.462 [2024-11-02 11:47:14.664506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.462 qpair failed and we were unable to recover it. 00:35:14.462 [2024-11-02 11:47:14.664645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.462 [2024-11-02 11:47:14.664677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.462 qpair failed and we were unable to recover it. 00:35:14.462 [2024-11-02 11:47:14.664811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.462 [2024-11-02 11:47:14.664841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.462 qpair failed and we were unable to recover it. 00:35:14.462 [2024-11-02 11:47:14.665039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.462 [2024-11-02 11:47:14.665066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.462 qpair failed and we were unable to recover it. 00:35:14.462 [2024-11-02 11:47:14.665192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.462 [2024-11-02 11:47:14.665219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.462 qpair failed and we were unable to recover it. 00:35:14.462 [2024-11-02 11:47:14.665353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.462 [2024-11-02 11:47:14.665379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.462 qpair failed and we were unable to recover it. 00:35:14.462 [2024-11-02 11:47:14.665539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.462 [2024-11-02 11:47:14.665571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.462 qpair failed and we were unable to recover it. 00:35:14.462 [2024-11-02 11:47:14.665745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.462 [2024-11-02 11:47:14.665772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.462 qpair failed and we were unable to recover it. 00:35:14.462 [2024-11-02 11:47:14.665895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.462 [2024-11-02 11:47:14.665922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.462 qpair failed and we were unable to recover it. 00:35:14.462 [2024-11-02 11:47:14.666096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.462 [2024-11-02 11:47:14.666139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.462 qpair failed and we were unable to recover it. 
00:35:14.462 [2024-11-02 11:47:14.666328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.462 [2024-11-02 11:47:14.666357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.462 qpair failed and we were unable to recover it. 00:35:14.462 [2024-11-02 11:47:14.666520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.462 [2024-11-02 11:47:14.666546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.462 qpair failed and we were unable to recover it. 00:35:14.462 [2024-11-02 11:47:14.666710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.462 [2024-11-02 11:47:14.666741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.462 qpair failed and we were unable to recover it. 00:35:14.462 [2024-11-02 11:47:14.666928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.462 [2024-11-02 11:47:14.666958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.462 qpair failed and we were unable to recover it. 00:35:14.462 [2024-11-02 11:47:14.667138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.462 [2024-11-02 11:47:14.667166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.462 qpair failed and we were unable to recover it. 00:35:14.462 [2024-11-02 11:47:14.667340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.462 [2024-11-02 11:47:14.667367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.462 qpair failed and we were unable to recover it. 00:35:14.462 [2024-11-02 11:47:14.667492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.462 [2024-11-02 11:47:14.667519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.462 qpair failed and we were unable to recover it. 00:35:14.462 [2024-11-02 11:47:14.667692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.462 [2024-11-02 11:47:14.667722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.462 qpair failed and we were unable to recover it. 00:35:14.462 [2024-11-02 11:47:14.667842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.462 [2024-11-02 11:47:14.667872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.462 qpair failed and we were unable to recover it. 00:35:14.462 [2024-11-02 11:47:14.668043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.462 [2024-11-02 11:47:14.668070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.462 qpair failed and we were unable to recover it. 
00:35:14.462 [2024-11-02 11:47:14.668223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.462 [2024-11-02 11:47:14.668250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.462 qpair failed and we were unable to recover it. 00:35:14.462 [2024-11-02 11:47:14.668410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.463 [2024-11-02 11:47:14.668437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.463 qpair failed and we were unable to recover it. 00:35:14.463 [2024-11-02 11:47:14.668583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.463 [2024-11-02 11:47:14.668614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.463 qpair failed and we were unable to recover it. 00:35:14.463 [2024-11-02 11:47:14.668788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.463 [2024-11-02 11:47:14.668815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.463 qpair failed and we were unable to recover it. 00:35:14.463 [2024-11-02 11:47:14.668980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.463 [2024-11-02 11:47:14.669010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.463 qpair failed and we were unable to recover it. 00:35:14.463 [2024-11-02 11:47:14.669197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.463 [2024-11-02 11:47:14.669227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.463 qpair failed and we were unable to recover it. 00:35:14.463 [2024-11-02 11:47:14.669395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.463 [2024-11-02 11:47:14.669422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.463 qpair failed and we were unable to recover it. 00:35:14.463 [2024-11-02 11:47:14.669568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.463 [2024-11-02 11:47:14.669595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.463 qpair failed and we were unable to recover it. 00:35:14.463 [2024-11-02 11:47:14.669733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.463 [2024-11-02 11:47:14.669761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.463 qpair failed and we were unable to recover it. 00:35:14.463 [2024-11-02 11:47:14.669946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.463 [2024-11-02 11:47:14.669974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.463 qpair failed and we were unable to recover it. 
00:35:14.463 [2024-11-02 11:47:14.670125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.463 [2024-11-02 11:47:14.670152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.463 qpair failed and we were unable to recover it. 00:35:14.463 [2024-11-02 11:47:14.670318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.463 [2024-11-02 11:47:14.670346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.463 qpair failed and we were unable to recover it. 00:35:14.463 [2024-11-02 11:47:14.670458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.463 [2024-11-02 11:47:14.670502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.463 qpair failed and we were unable to recover it. 00:35:14.463 [2024-11-02 11:47:14.670675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.463 [2024-11-02 11:47:14.670705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.463 qpair failed and we were unable to recover it. 00:35:14.463 [2024-11-02 11:47:14.670894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.463 [2024-11-02 11:47:14.670923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.463 qpair failed and we were unable to recover it. 00:35:14.463 [2024-11-02 11:47:14.671067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.463 [2024-11-02 11:47:14.671095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.463 qpair failed and we were unable to recover it. 00:35:14.463 [2024-11-02 11:47:14.671270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.463 [2024-11-02 11:47:14.671298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.463 qpair failed and we were unable to recover it. 00:35:14.463 [2024-11-02 11:47:14.671438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.463 [2024-11-02 11:47:14.671465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.463 qpair failed and we were unable to recover it. 00:35:14.463 [2024-11-02 11:47:14.671639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.463 [2024-11-02 11:47:14.671666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.463 qpair failed and we were unable to recover it. 00:35:14.463 [2024-11-02 11:47:14.671814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.463 [2024-11-02 11:47:14.671841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.463 qpair failed and we were unable to recover it. 
00:35:14.463 [2024-11-02 11:47:14.671989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.463 [2024-11-02 11:47:14.672034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.463 qpair failed and we were unable to recover it. 00:35:14.463 [2024-11-02 11:47:14.672232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.463 [2024-11-02 11:47:14.672265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.463 qpair failed and we were unable to recover it. 00:35:14.463 [2024-11-02 11:47:14.672395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.463 [2024-11-02 11:47:14.672422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.463 qpair failed and we were unable to recover it. 00:35:14.463 [2024-11-02 11:47:14.672549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.463 [2024-11-02 11:47:14.672576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.463 qpair failed and we were unable to recover it. 00:35:14.463 [2024-11-02 11:47:14.672723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.463 [2024-11-02 11:47:14.672754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.463 qpair failed and we were unable to recover it. 00:35:14.463 [2024-11-02 11:47:14.672907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.463 [2024-11-02 11:47:14.672952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.463 qpair failed and we were unable to recover it. 00:35:14.463 [2024-11-02 11:47:14.673114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.463 [2024-11-02 11:47:14.673142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.463 qpair failed and we were unable to recover it. 00:35:14.463 [2024-11-02 11:47:14.673274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.463 [2024-11-02 11:47:14.673302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.463 qpair failed and we were unable to recover it. 00:35:14.463 [2024-11-02 11:47:14.673454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.463 [2024-11-02 11:47:14.673481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.463 qpair failed and we were unable to recover it. 00:35:14.463 [2024-11-02 11:47:14.673630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.463 [2024-11-02 11:47:14.673656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.463 qpair failed and we were unable to recover it. 
00:35:14.463 [2024-11-02 11:47:14.673807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.463 [2024-11-02 11:47:14.673851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.463 qpair failed and we were unable to recover it. 00:35:14.463 [2024-11-02 11:47:14.674020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.463 [2024-11-02 11:47:14.674048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.463 qpair failed and we were unable to recover it. 00:35:14.463 [2024-11-02 11:47:14.674169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.463 [2024-11-02 11:47:14.674212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.463 qpair failed and we were unable to recover it. 00:35:14.463 [2024-11-02 11:47:14.674385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.463 [2024-11-02 11:47:14.674415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.463 qpair failed and we were unable to recover it. 00:35:14.463 [2024-11-02 11:47:14.674600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.463 [2024-11-02 11:47:14.674630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.463 qpair failed and we were unable to recover it. 00:35:14.463 [2024-11-02 11:47:14.674802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.464 [2024-11-02 11:47:14.674830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.464 qpair failed and we were unable to recover it. 00:35:14.464 [2024-11-02 11:47:14.675003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.464 [2024-11-02 11:47:14.675032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.464 qpair failed and we were unable to recover it. 00:35:14.464 [2024-11-02 11:47:14.675187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.464 [2024-11-02 11:47:14.675214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.464 qpair failed and we were unable to recover it. 00:35:14.464 [2024-11-02 11:47:14.675389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.464 [2024-11-02 11:47:14.675417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.464 qpair failed and we were unable to recover it. 00:35:14.464 [2024-11-02 11:47:14.675544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.464 [2024-11-02 11:47:14.675571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.464 qpair failed and we were unable to recover it. 
00:35:14.464 [2024-11-02 11:47:14.675700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.464 [2024-11-02 11:47:14.675728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.464 qpair failed and we were unable to recover it. 00:35:14.464 [2024-11-02 11:47:14.675893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.464 [2024-11-02 11:47:14.675922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.464 qpair failed and we were unable to recover it. 00:35:14.464 [2024-11-02 11:47:14.676082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.464 [2024-11-02 11:47:14.676112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.464 qpair failed and we were unable to recover it. 00:35:14.464 [2024-11-02 11:47:14.676286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.464 [2024-11-02 11:47:14.676314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.464 qpair failed and we were unable to recover it. 00:35:14.464 [2024-11-02 11:47:14.676444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.464 [2024-11-02 11:47:14.676471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.464 qpair failed and we were unable to recover it. 00:35:14.464 [2024-11-02 11:47:14.676596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.464 [2024-11-02 11:47:14.676623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.464 qpair failed and we were unable to recover it. 00:35:14.464 [2024-11-02 11:47:14.676779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.464 [2024-11-02 11:47:14.676809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.464 qpair failed and we were unable to recover it. 00:35:14.464 [2024-11-02 11:47:14.676954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.464 [2024-11-02 11:47:14.676982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.464 qpair failed and we were unable to recover it. 00:35:14.464 [2024-11-02 11:47:14.677158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.464 [2024-11-02 11:47:14.677201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.464 qpair failed and we were unable to recover it. 00:35:14.464 [2024-11-02 11:47:14.677378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.464 [2024-11-02 11:47:14.677405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.464 qpair failed and we were unable to recover it. 
00:35:14.464 [2024-11-02 11:47:14.677531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:14.464 [2024-11-02 11:47:14.677558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420
00:35:14.464 qpair failed and we were unable to recover it.
00:35:14.464 [2024-11-02 11:47:14.677718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:14.464 [2024-11-02 11:47:14.677749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420
00:35:14.464 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every subsequent reconnect attempt with timestamps from 11:47:14.677 through 11:47:14.716 ...]
00:35:14.469 [2024-11-02 11:47:14.716495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:14.469 [2024-11-02 11:47:14.716522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420
00:35:14.469 qpair failed and we were unable to recover it.
00:35:14.469 [2024-11-02 11:47:14.716681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.469 [2024-11-02 11:47:14.716708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.469 qpair failed and we were unable to recover it. 00:35:14.469 [2024-11-02 11:47:14.716857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.469 [2024-11-02 11:47:14.716887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.469 qpair failed and we were unable to recover it. 00:35:14.469 [2024-11-02 11:47:14.717053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.469 [2024-11-02 11:47:14.717083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.469 qpair failed and we were unable to recover it. 00:35:14.469 [2024-11-02 11:47:14.717226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.469 [2024-11-02 11:47:14.717253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.469 qpair failed and we were unable to recover it. 00:35:14.470 [2024-11-02 11:47:14.717384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.470 [2024-11-02 11:47:14.717412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.470 qpair failed and we were unable to recover it. 00:35:14.470 [2024-11-02 11:47:14.717564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.470 [2024-11-02 11:47:14.717591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.470 qpair failed and we were unable to recover it. 00:35:14.470 [2024-11-02 11:47:14.717742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.470 [2024-11-02 11:47:14.717785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.470 qpair failed and we were unable to recover it. 00:35:14.470 [2024-11-02 11:47:14.717954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.470 [2024-11-02 11:47:14.717981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.470 qpair failed and we were unable to recover it. 00:35:14.470 [2024-11-02 11:47:14.718128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.470 [2024-11-02 11:47:14.718160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.470 qpair failed and we were unable to recover it. 00:35:14.470 [2024-11-02 11:47:14.718311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.470 [2024-11-02 11:47:14.718339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.470 qpair failed and we were unable to recover it. 
00:35:14.470 [2024-11-02 11:47:14.718483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.470 [2024-11-02 11:47:14.718526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.470 qpair failed and we were unable to recover it. 00:35:14.470 [2024-11-02 11:47:14.718726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.470 [2024-11-02 11:47:14.718753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.470 qpair failed and we were unable to recover it. 00:35:14.470 [2024-11-02 11:47:14.718883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.470 [2024-11-02 11:47:14.718914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.470 qpair failed and we were unable to recover it. 00:35:14.470 [2024-11-02 11:47:14.719052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.470 [2024-11-02 11:47:14.719082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.470 qpair failed and we were unable to recover it. 00:35:14.470 [2024-11-02 11:47:14.719230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.470 [2024-11-02 11:47:14.719263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.470 qpair failed and we were unable to recover it. 00:35:14.470 [2024-11-02 11:47:14.719420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.470 [2024-11-02 11:47:14.719447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.470 qpair failed and we were unable to recover it. 00:35:14.470 [2024-11-02 11:47:14.719566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.470 [2024-11-02 11:47:14.719592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.470 qpair failed and we were unable to recover it. 00:35:14.470 [2024-11-02 11:47:14.719754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.470 [2024-11-02 11:47:14.719797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.470 qpair failed and we were unable to recover it. 00:35:14.470 [2024-11-02 11:47:14.719945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.470 [2024-11-02 11:47:14.719975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.470 qpair failed and we were unable to recover it. 00:35:14.470 [2024-11-02 11:47:14.720141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.470 [2024-11-02 11:47:14.720171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.470 qpair failed and we were unable to recover it. 
00:35:14.470 [2024-11-02 11:47:14.720350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.470 [2024-11-02 11:47:14.720378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.470 qpair failed and we were unable to recover it. 00:35:14.470 [2024-11-02 11:47:14.720496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.470 [2024-11-02 11:47:14.720523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.470 qpair failed and we were unable to recover it. 00:35:14.470 [2024-11-02 11:47:14.720680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.470 [2024-11-02 11:47:14.720709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.470 qpair failed and we were unable to recover it. 00:35:14.470 [2024-11-02 11:47:14.720858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.470 [2024-11-02 11:47:14.720885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.470 qpair failed and we were unable to recover it. 00:35:14.470 [2024-11-02 11:47:14.721006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.470 [2024-11-02 11:47:14.721035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.470 qpair failed and we were unable to recover it. 00:35:14.470 [2024-11-02 11:47:14.721152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.470 [2024-11-02 11:47:14.721180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.470 qpair failed and we were unable to recover it. 00:35:14.470 [2024-11-02 11:47:14.721334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.470 [2024-11-02 11:47:14.721362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.470 qpair failed and we were unable to recover it. 00:35:14.470 [2024-11-02 11:47:14.721526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.470 [2024-11-02 11:47:14.721553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.470 qpair failed and we were unable to recover it. 00:35:14.470 [2024-11-02 11:47:14.721703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.470 [2024-11-02 11:47:14.721729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.470 qpair failed and we were unable to recover it. 00:35:14.470 [2024-11-02 11:47:14.721900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.470 [2024-11-02 11:47:14.721930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.470 qpair failed and we were unable to recover it. 
00:35:14.470 [2024-11-02 11:47:14.722094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.470 [2024-11-02 11:47:14.722128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.470 qpair failed and we were unable to recover it. 00:35:14.470 [2024-11-02 11:47:14.722305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.470 [2024-11-02 11:47:14.722333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.470 qpair failed and we were unable to recover it. 00:35:14.470 [2024-11-02 11:47:14.722522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.470 [2024-11-02 11:47:14.722552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.470 qpair failed and we were unable to recover it. 00:35:14.470 [2024-11-02 11:47:14.722707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.470 [2024-11-02 11:47:14.722734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.470 qpair failed and we were unable to recover it. 00:35:14.470 [2024-11-02 11:47:14.722880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.470 [2024-11-02 11:47:14.722907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.470 qpair failed and we were unable to recover it. 00:35:14.470 [2024-11-02 11:47:14.723072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.470 [2024-11-02 11:47:14.723099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.470 qpair failed and we were unable to recover it. 00:35:14.470 [2024-11-02 11:47:14.723279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.470 [2024-11-02 11:47:14.723307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.470 qpair failed and we were unable to recover it. 00:35:14.470 [2024-11-02 11:47:14.723432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.470 [2024-11-02 11:47:14.723459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.470 qpair failed and we were unable to recover it. 00:35:14.470 [2024-11-02 11:47:14.723649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.470 [2024-11-02 11:47:14.723679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.470 qpair failed and we were unable to recover it. 00:35:14.470 [2024-11-02 11:47:14.723854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.470 [2024-11-02 11:47:14.723881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.470 qpair failed and we were unable to recover it. 
00:35:14.470 [2024-11-02 11:47:14.724031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.470 [2024-11-02 11:47:14.724057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.470 qpair failed and we were unable to recover it. 00:35:14.470 [2024-11-02 11:47:14.724186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.470 [2024-11-02 11:47:14.724213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.470 qpair failed and we were unable to recover it. 00:35:14.470 [2024-11-02 11:47:14.724387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.471 [2024-11-02 11:47:14.724415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.471 qpair failed and we were unable to recover it. 00:35:14.471 [2024-11-02 11:47:14.724564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.471 [2024-11-02 11:47:14.724591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.471 qpair failed and we were unable to recover it. 00:35:14.471 [2024-11-02 11:47:14.724732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.471 [2024-11-02 11:47:14.724763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.471 qpair failed and we were unable to recover it. 00:35:14.471 [2024-11-02 11:47:14.724950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.471 [2024-11-02 11:47:14.724980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.471 qpair failed and we were unable to recover it. 00:35:14.471 [2024-11-02 11:47:14.725169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.471 [2024-11-02 11:47:14.725198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.471 qpair failed and we were unable to recover it. 00:35:14.471 [2024-11-02 11:47:14.725399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.471 [2024-11-02 11:47:14.725427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.471 qpair failed and we were unable to recover it. 00:35:14.471 [2024-11-02 11:47:14.725571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.471 [2024-11-02 11:47:14.725631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.471 qpair failed and we were unable to recover it. 00:35:14.471 [2024-11-02 11:47:14.725772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.471 [2024-11-02 11:47:14.725802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.471 qpair failed and we were unable to recover it. 
00:35:14.471 [2024-11-02 11:47:14.725963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.471 [2024-11-02 11:47:14.725993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.471 qpair failed and we were unable to recover it. 00:35:14.471 [2024-11-02 11:47:14.726155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.471 [2024-11-02 11:47:14.726182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.471 qpair failed and we were unable to recover it. 00:35:14.471 [2024-11-02 11:47:14.726307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.471 [2024-11-02 11:47:14.726349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.471 qpair failed and we were unable to recover it. 00:35:14.471 [2024-11-02 11:47:14.726482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.471 [2024-11-02 11:47:14.726511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.471 qpair failed and we were unable to recover it. 00:35:14.471 [2024-11-02 11:47:14.726704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.471 [2024-11-02 11:47:14.726733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.471 qpair failed and we were unable to recover it. 00:35:14.471 [2024-11-02 11:47:14.726882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.471 [2024-11-02 11:47:14.726908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.471 qpair failed and we were unable to recover it. 00:35:14.471 [2024-11-02 11:47:14.727033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.471 [2024-11-02 11:47:14.727060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.471 qpair failed and we were unable to recover it. 00:35:14.471 [2024-11-02 11:47:14.727262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.471 [2024-11-02 11:47:14.727292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.471 qpair failed and we were unable to recover it. 00:35:14.471 [2024-11-02 11:47:14.727426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.471 [2024-11-02 11:47:14.727456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.471 qpair failed and we were unable to recover it. 00:35:14.471 [2024-11-02 11:47:14.727606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.471 [2024-11-02 11:47:14.727633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.471 qpair failed and we were unable to recover it. 
00:35:14.471 [2024-11-02 11:47:14.727784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.471 [2024-11-02 11:47:14.727811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.471 qpair failed and we were unable to recover it. 00:35:14.471 [2024-11-02 11:47:14.727956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.471 [2024-11-02 11:47:14.727983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.471 qpair failed and we were unable to recover it. 00:35:14.471 [2024-11-02 11:47:14.728146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.471 [2024-11-02 11:47:14.728173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.471 qpair failed and we were unable to recover it. 00:35:14.471 [2024-11-02 11:47:14.728329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.471 [2024-11-02 11:47:14.728356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.471 qpair failed and we were unable to recover it. 00:35:14.471 [2024-11-02 11:47:14.728488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.471 [2024-11-02 11:47:14.728515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.471 qpair failed and we were unable to recover it. 00:35:14.471 [2024-11-02 11:47:14.728661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.471 [2024-11-02 11:47:14.728688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.471 qpair failed and we were unable to recover it. 00:35:14.471 [2024-11-02 11:47:14.728853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.471 [2024-11-02 11:47:14.728882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.471 qpair failed and we were unable to recover it. 00:35:14.471 [2024-11-02 11:47:14.729048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.471 [2024-11-02 11:47:14.729075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.471 qpair failed and we were unable to recover it. 00:35:14.471 [2024-11-02 11:47:14.729193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.471 [2024-11-02 11:47:14.729219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.471 qpair failed and we were unable to recover it. 00:35:14.471 [2024-11-02 11:47:14.729365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.471 [2024-11-02 11:47:14.729395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.471 qpair failed and we were unable to recover it. 
00:35:14.471 [2024-11-02 11:47:14.729560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.471 [2024-11-02 11:47:14.729590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.471 qpair failed and we were unable to recover it. 00:35:14.471 [2024-11-02 11:47:14.729786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.471 [2024-11-02 11:47:14.729813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.471 qpair failed and we were unable to recover it. 00:35:14.471 [2024-11-02 11:47:14.729966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.471 [2024-11-02 11:47:14.730010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.471 qpair failed and we were unable to recover it. 00:35:14.471 [2024-11-02 11:47:14.730154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.471 [2024-11-02 11:47:14.730183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.471 qpair failed and we were unable to recover it. 00:35:14.471 [2024-11-02 11:47:14.730335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.471 [2024-11-02 11:47:14.730363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.471 qpair failed and we were unable to recover it. 00:35:14.471 [2024-11-02 11:47:14.730482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.471 [2024-11-02 11:47:14.730509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.471 qpair failed and we were unable to recover it. 00:35:14.471 [2024-11-02 11:47:14.730661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.471 [2024-11-02 11:47:14.730695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.471 qpair failed and we were unable to recover it. 00:35:14.471 [2024-11-02 11:47:14.730825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.471 [2024-11-02 11:47:14.730868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.471 qpair failed and we were unable to recover it. 00:35:14.471 [2024-11-02 11:47:14.731010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.471 [2024-11-02 11:47:14.731039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.471 qpair failed and we were unable to recover it. 00:35:14.471 [2024-11-02 11:47:14.731175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.471 [2024-11-02 11:47:14.731201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.471 qpair failed and we were unable to recover it. 
00:35:14.472 [2024-11-02 11:47:14.731349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.472 [2024-11-02 11:47:14.731377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.472 qpair failed and we were unable to recover it. 00:35:14.472 [2024-11-02 11:47:14.731496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.472 [2024-11-02 11:47:14.731523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.472 qpair failed and we were unable to recover it. 00:35:14.472 [2024-11-02 11:47:14.731696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.472 [2024-11-02 11:47:14.731726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.472 qpair failed and we were unable to recover it. 00:35:14.472 [2024-11-02 11:47:14.731865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.472 [2024-11-02 11:47:14.731892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.472 qpair failed and we were unable to recover it. 00:35:14.472 [2024-11-02 11:47:14.732056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.472 [2024-11-02 11:47:14.732085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.472 qpair failed and we were unable to recover it. 00:35:14.472 [2024-11-02 11:47:14.732248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.472 [2024-11-02 11:47:14.732285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.472 qpair failed and we were unable to recover it. 00:35:14.472 [2024-11-02 11:47:14.732426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.472 [2024-11-02 11:47:14.732453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.472 qpair failed and we were unable to recover it. 00:35:14.472 [2024-11-02 11:47:14.732578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.472 [2024-11-02 11:47:14.732605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.472 qpair failed and we were unable to recover it. 00:35:14.472 [2024-11-02 11:47:14.732726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.472 [2024-11-02 11:47:14.732768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.472 qpair failed and we were unable to recover it. 00:35:14.472 [2024-11-02 11:47:14.732968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.472 [2024-11-02 11:47:14.732997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.472 qpair failed and we were unable to recover it. 
00:35:14.472 [2024-11-02 11:47:14.733167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.472 [2024-11-02 11:47:14.733196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.472 qpair failed and we were unable to recover it. 00:35:14.472 [2024-11-02 11:47:14.733364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.472 [2024-11-02 11:47:14.733392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.472 qpair failed and we were unable to recover it. 00:35:14.472 [2024-11-02 11:47:14.733536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.472 [2024-11-02 11:47:14.733579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.472 qpair failed and we were unable to recover it. 00:35:14.472 [2024-11-02 11:47:14.733773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.472 [2024-11-02 11:47:14.733800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.472 qpair failed and we were unable to recover it. 00:35:14.472 [2024-11-02 11:47:14.733914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.472 [2024-11-02 11:47:14.733939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.472 qpair failed and we were unable to recover it. 00:35:14.472 [2024-11-02 11:47:14.734124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.472 [2024-11-02 11:47:14.734151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.472 qpair failed and we were unable to recover it. 00:35:14.472 [2024-11-02 11:47:14.734280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.472 [2024-11-02 11:47:14.734326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.472 qpair failed and we were unable to recover it. 00:35:14.472 [2024-11-02 11:47:14.734465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.472 [2024-11-02 11:47:14.734495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.472 qpair failed and we were unable to recover it. 00:35:14.472 [2024-11-02 11:47:14.734680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.472 [2024-11-02 11:47:14.734710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.472 qpair failed and we were unable to recover it. 00:35:14.472 [2024-11-02 11:47:14.734870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.472 [2024-11-02 11:47:14.734897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.472 qpair failed and we were unable to recover it. 
00:35:14.472 [2024-11-02 11:47:14.735044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.472 [2024-11-02 11:47:14.735071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.472 qpair failed and we were unable to recover it. 00:35:14.472 [2024-11-02 11:47:14.735220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.472 [2024-11-02 11:47:14.735271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.472 qpair failed and we were unable to recover it. 00:35:14.472 [2024-11-02 11:47:14.735467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.472 [2024-11-02 11:47:14.735494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.472 qpair failed and we were unable to recover it. 00:35:14.472 [2024-11-02 11:47:14.735639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.472 [2024-11-02 11:47:14.735671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.472 qpair failed and we were unable to recover it. 00:35:14.472 [2024-11-02 11:47:14.735832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.472 [2024-11-02 11:47:14.735876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.472 qpair failed and we were unable to recover it. 00:35:14.472 [2024-11-02 11:47:14.736013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.472 [2024-11-02 11:47:14.736044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.472 qpair failed and we were unable to recover it. 00:35:14.472 [2024-11-02 11:47:14.736217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.472 [2024-11-02 11:47:14.736244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.472 qpair failed and we were unable to recover it. 00:35:14.472 [2024-11-02 11:47:14.736415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.472 [2024-11-02 11:47:14.736442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.472 qpair failed and we were unable to recover it. 00:35:14.472 [2024-11-02 11:47:14.736566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.472 [2024-11-02 11:47:14.736612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.472 qpair failed and we were unable to recover it. 00:35:14.473 [2024-11-02 11:47:14.736746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.473 [2024-11-02 11:47:14.736775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.473 qpair failed and we were unable to recover it. 
00:35:14.473 [2024-11-02 11:47:14.736925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.473 [2024-11-02 11:47:14.736953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.473 qpair failed and we were unable to recover it. 00:35:14.473 [2024-11-02 11:47:14.737103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.473 [2024-11-02 11:47:14.737130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.473 qpair failed and we were unable to recover it. 00:35:14.473 [2024-11-02 11:47:14.737324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.473 [2024-11-02 11:47:14.737355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.473 qpair failed and we were unable to recover it. 00:35:14.473 [2024-11-02 11:47:14.737493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.473 [2024-11-02 11:47:14.737523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.473 qpair failed and we were unable to recover it. 00:35:14.473 [2024-11-02 11:47:14.737669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.473 [2024-11-02 11:47:14.737695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.473 qpair failed and we were unable to recover it. 00:35:14.473 [2024-11-02 11:47:14.737818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.473 [2024-11-02 11:47:14.737845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.473 qpair failed and we were unable to recover it. 00:35:14.473 [2024-11-02 11:47:14.737968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.473 [2024-11-02 11:47:14.737995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.473 qpair failed and we were unable to recover it. 00:35:14.473 [2024-11-02 11:47:14.738116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.473 [2024-11-02 11:47:14.738158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.473 qpair failed and we were unable to recover it. 00:35:14.473 [2024-11-02 11:47:14.738359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.473 [2024-11-02 11:47:14.738387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.473 qpair failed and we were unable to recover it. 00:35:14.473 [2024-11-02 11:47:14.738499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.473 [2024-11-02 11:47:14.738526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.473 qpair failed and we were unable to recover it. 
00:35:14.473 [2024-11-02 11:47:14.738674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.473 [2024-11-02 11:47:14.738718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.473 qpair failed and we were unable to recover it. 00:35:14.473 [2024-11-02 11:47:14.738871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.473 [2024-11-02 11:47:14.738901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.473 qpair failed and we were unable to recover it. 00:35:14.473 [2024-11-02 11:47:14.739048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.473 [2024-11-02 11:47:14.739075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.473 qpair failed and we were unable to recover it. 00:35:14.473 [2024-11-02 11:47:14.739202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.473 [2024-11-02 11:47:14.739226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.473 qpair failed and we were unable to recover it. 00:35:14.473 [2024-11-02 11:47:14.739350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.473 [2024-11-02 11:47:14.739375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.473 qpair failed and we were unable to recover it. 00:35:14.473 [2024-11-02 11:47:14.739518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.473 [2024-11-02 11:47:14.739543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.473 qpair failed and we were unable to recover it. 00:35:14.473 [2024-11-02 11:47:14.739691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.473 [2024-11-02 11:47:14.739732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.473 qpair failed and we were unable to recover it. 00:35:14.473 [2024-11-02 11:47:14.739924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.473 [2024-11-02 11:47:14.739948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.473 qpair failed and we were unable to recover it. 00:35:14.473 [2024-11-02 11:47:14.740067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.473 [2024-11-02 11:47:14.740092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.473 qpair failed and we were unable to recover it. 00:35:14.473 [2024-11-02 11:47:14.740221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.473 [2024-11-02 11:47:14.740246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.473 qpair failed and we were unable to recover it. 
00:35:14.473 [2024-11-02 11:47:14.740398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.473 [2024-11-02 11:47:14.740427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.473 qpair failed and we were unable to recover it. 00:35:14.473 [2024-11-02 11:47:14.740577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.473 [2024-11-02 11:47:14.740602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.473 qpair failed and we were unable to recover it. 00:35:14.473 [2024-11-02 11:47:14.740800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.473 [2024-11-02 11:47:14.740827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.473 qpair failed and we were unable to recover it. 00:35:14.473 [2024-11-02 11:47:14.741002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.473 [2024-11-02 11:47:14.741029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.473 qpair failed and we were unable to recover it. 00:35:14.473 [2024-11-02 11:47:14.741156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.473 [2024-11-02 11:47:14.741183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.473 qpair failed and we were unable to recover it. 00:35:14.473 [2024-11-02 11:47:14.741368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.473 [2024-11-02 11:47:14.741393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.473 qpair failed and we were unable to recover it. 00:35:14.473 [2024-11-02 11:47:14.741516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.473 [2024-11-02 11:47:14.741542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.473 qpair failed and we were unable to recover it. 00:35:14.473 [2024-11-02 11:47:14.741721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.473 [2024-11-02 11:47:14.741761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.473 qpair failed and we were unable to recover it. 00:35:14.473 [2024-11-02 11:47:14.741933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.473 [2024-11-02 11:47:14.741968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.473 qpair failed and we were unable to recover it. 00:35:14.473 [2024-11-02 11:47:14.742164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.473 [2024-11-02 11:47:14.742190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.473 qpair failed and we were unable to recover it. 
00:35:14.473 [2024-11-02 11:47:14.742370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.473 [2024-11-02 11:47:14.742398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.473 qpair failed and we were unable to recover it. 00:35:14.473 [2024-11-02 11:47:14.742568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.473 [2024-11-02 11:47:14.742596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.473 qpair failed and we were unable to recover it. 00:35:14.473 [2024-11-02 11:47:14.742759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.473 [2024-11-02 11:47:14.742789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.473 qpair failed and we were unable to recover it. 00:35:14.473 [2024-11-02 11:47:14.742959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.473 [2024-11-02 11:47:14.742984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.473 qpair failed and we were unable to recover it. 00:35:14.473 [2024-11-02 11:47:14.743109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.473 [2024-11-02 11:47:14.743135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.473 qpair failed and we were unable to recover it. 00:35:14.473 [2024-11-02 11:47:14.743285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.474 [2024-11-02 11:47:14.743311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.474 qpair failed and we were unable to recover it. 00:35:14.474 [2024-11-02 11:47:14.743424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.474 [2024-11-02 11:47:14.743450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.474 qpair failed and we were unable to recover it. 00:35:14.474 [2024-11-02 11:47:14.743621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.474 [2024-11-02 11:47:14.743646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.474 qpair failed and we were unable to recover it. 00:35:14.474 [2024-11-02 11:47:14.743780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.474 [2024-11-02 11:47:14.743807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.474 qpair failed and we were unable to recover it. 00:35:14.474 [2024-11-02 11:47:14.743960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.474 [2024-11-02 11:47:14.743987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.474 qpair failed and we were unable to recover it. 
00:35:14.474 [2024-11-02 11:47:14.744157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.474 [2024-11-02 11:47:14.744185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.474 qpair failed and we were unable to recover it. 00:35:14.474 [2024-11-02 11:47:14.744383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.474 [2024-11-02 11:47:14.744408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.474 qpair failed and we were unable to recover it. 00:35:14.474 [2024-11-02 11:47:14.744556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.474 [2024-11-02 11:47:14.744583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.474 qpair failed and we were unable to recover it. 00:35:14.474 [2024-11-02 11:47:14.744769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.474 [2024-11-02 11:47:14.744797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.474 qpair failed and we were unable to recover it. 00:35:14.474 [2024-11-02 11:47:14.744974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.474 [2024-11-02 11:47:14.744999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.474 qpair failed and we were unable to recover it. 00:35:14.474 [2024-11-02 11:47:14.745146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.474 [2024-11-02 11:47:14.745174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.474 qpair failed and we were unable to recover it. 00:35:14.474 [2024-11-02 11:47:14.745373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.474 [2024-11-02 11:47:14.745400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.474 qpair failed and we were unable to recover it. 00:35:14.474 [2024-11-02 11:47:14.745519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.474 [2024-11-02 11:47:14.745562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.474 qpair failed and we were unable to recover it. 00:35:14.474 [2024-11-02 11:47:14.745699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.474 [2024-11-02 11:47:14.745728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.474 qpair failed and we were unable to recover it. 00:35:14.474 [2024-11-02 11:47:14.745897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.474 [2024-11-02 11:47:14.745924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.474 qpair failed and we were unable to recover it. 
00:35:14.474 [2024-11-02 11:47:14.746087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.474 [2024-11-02 11:47:14.746116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.474 qpair failed and we were unable to recover it. 00:35:14.474 [2024-11-02 11:47:14.746252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.474 [2024-11-02 11:47:14.746306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.474 qpair failed and we were unable to recover it. 00:35:14.474 [2024-11-02 11:47:14.746492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.474 [2024-11-02 11:47:14.746521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.474 qpair failed and we were unable to recover it. 00:35:14.474 [2024-11-02 11:47:14.746667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.474 [2024-11-02 11:47:14.746693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.474 qpair failed and we were unable to recover it. 00:35:14.474 [2024-11-02 11:47:14.746817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.474 [2024-11-02 11:47:14.746844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.474 qpair failed and we were unable to recover it. 00:35:14.474 [2024-11-02 11:47:14.746992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.474 [2024-11-02 11:47:14.747023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.474 qpair failed and we were unable to recover it. 00:35:14.474 [2024-11-02 11:47:14.747181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.474 [2024-11-02 11:47:14.747210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.474 qpair failed and we were unable to recover it. 00:35:14.474 [2024-11-02 11:47:14.747362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.474 [2024-11-02 11:47:14.747390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.474 qpair failed and we were unable to recover it. 00:35:14.474 [2024-11-02 11:47:14.747539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.474 [2024-11-02 11:47:14.747583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.474 qpair failed and we were unable to recover it. 00:35:14.474 [2024-11-02 11:47:14.747761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.474 [2024-11-02 11:47:14.747791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.474 qpair failed and we were unable to recover it. 
00:35:14.474 [2024-11-02 11:47:14.747948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.474 [2024-11-02 11:47:14.747977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.474 qpair failed and we were unable to recover it. 00:35:14.474 [2024-11-02 11:47:14.748145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.474 [2024-11-02 11:47:14.748172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.474 qpair failed and we were unable to recover it. 00:35:14.474 [2024-11-02 11:47:14.748367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.474 [2024-11-02 11:47:14.748398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.474 qpair failed and we were unable to recover it. 00:35:14.474 [2024-11-02 11:47:14.748540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.474 [2024-11-02 11:47:14.748570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.474 qpair failed and we were unable to recover it. 00:35:14.474 [2024-11-02 11:47:14.748751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.474 [2024-11-02 11:47:14.748780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.474 qpair failed and we were unable to recover it. 00:35:14.474 [2024-11-02 11:47:14.748955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.474 [2024-11-02 11:47:14.748982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.474 qpair failed and we were unable to recover it. 00:35:14.474 [2024-11-02 11:47:14.749146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.474 [2024-11-02 11:47:14.749176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.474 qpair failed and we were unable to recover it. 00:35:14.474 [2024-11-02 11:47:14.749315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.475 [2024-11-02 11:47:14.749345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.475 qpair failed and we were unable to recover it. 00:35:14.475 [2024-11-02 11:47:14.749496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.475 [2024-11-02 11:47:14.749523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.475 qpair failed and we were unable to recover it. 00:35:14.475 [2024-11-02 11:47:14.749673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.475 [2024-11-02 11:47:14.749700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.475 qpair failed and we were unable to recover it. 
00:35:14.475 [2024-11-02 11:47:14.749865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.475 [2024-11-02 11:47:14.749894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.475 qpair failed and we were unable to recover it. 00:35:14.475 [2024-11-02 11:47:14.750033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.475 [2024-11-02 11:47:14.750074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.475 qpair failed and we were unable to recover it. 00:35:14.475 [2024-11-02 11:47:14.750233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.475 [2024-11-02 11:47:14.750270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.475 qpair failed and we were unable to recover it. 00:35:14.475 [2024-11-02 11:47:14.750443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.475 [2024-11-02 11:47:14.750469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.475 qpair failed and we were unable to recover it. 00:35:14.475 [2024-11-02 11:47:14.750665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.475 [2024-11-02 11:47:14.750694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.475 qpair failed and we were unable to recover it. 00:35:14.475 [2024-11-02 11:47:14.750858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.475 [2024-11-02 11:47:14.750887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.475 qpair failed and we were unable to recover it. 00:35:14.475 [2024-11-02 11:47:14.751058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.475 [2024-11-02 11:47:14.751087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.475 qpair failed and we were unable to recover it. 00:35:14.475 [2024-11-02 11:47:14.751262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.475 [2024-11-02 11:47:14.751301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.475 qpair failed and we were unable to recover it. 00:35:14.475 [2024-11-02 11:47:14.751464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.475 [2024-11-02 11:47:14.751493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.475 qpair failed and we were unable to recover it. 00:35:14.475 [2024-11-02 11:47:14.751657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.475 [2024-11-02 11:47:14.751686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.475 qpair failed and we were unable to recover it. 
00:35:14.475 [2024-11-02 11:47:14.752212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.475 [2024-11-02 11:47:14.752248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.475 qpair failed and we were unable to recover it. 00:35:14.475 [2024-11-02 11:47:14.752436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.475 [2024-11-02 11:47:14.752462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.475 qpair failed and we were unable to recover it. 00:35:14.475 [2024-11-02 11:47:14.752617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.475 [2024-11-02 11:47:14.752644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.475 qpair failed and we were unable to recover it. 00:35:14.475 [2024-11-02 11:47:14.752822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.475 [2024-11-02 11:47:14.752852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.475 qpair failed and we were unable to recover it. 00:35:14.475 [2024-11-02 11:47:14.753028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.475 [2024-11-02 11:47:14.753055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.475 qpair failed and we were unable to recover it. 00:35:14.475 [2024-11-02 11:47:14.753199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.475 [2024-11-02 11:47:14.753225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.475 qpair failed and we were unable to recover it. 00:35:14.475 [2024-11-02 11:47:14.753406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.475 [2024-11-02 11:47:14.753436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.475 qpair failed and we were unable to recover it. 00:35:14.475 [2024-11-02 11:47:14.753570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.475 [2024-11-02 11:47:14.753600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.475 qpair failed and we were unable to recover it. 00:35:14.475 [2024-11-02 11:47:14.753752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.475 [2024-11-02 11:47:14.753782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.475 qpair failed and we were unable to recover it. 00:35:14.475 [2024-11-02 11:47:14.753933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.475 [2024-11-02 11:47:14.753960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.475 qpair failed and we were unable to recover it. 
00:35:14.475 [2024-11-02 11:47:14.754129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.475 [2024-11-02 11:47:14.754159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.475 qpair failed and we were unable to recover it. 00:35:14.475 [2024-11-02 11:47:14.754319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.475 [2024-11-02 11:47:14.754346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.475 qpair failed and we were unable to recover it. 00:35:14.475 [2024-11-02 11:47:14.754485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.475 [2024-11-02 11:47:14.754512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.475 qpair failed and we were unable to recover it. 00:35:14.475 [2024-11-02 11:47:14.754664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.475 [2024-11-02 11:47:14.754690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.475 qpair failed and we were unable to recover it. 00:35:14.475 [2024-11-02 11:47:14.754893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.475 [2024-11-02 11:47:14.754923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.475 qpair failed and we were unable to recover it. 00:35:14.475 [2024-11-02 11:47:14.755115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.475 [2024-11-02 11:47:14.755143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.475 qpair failed and we were unable to recover it. 00:35:14.475 [2024-11-02 11:47:14.755309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.475 [2024-11-02 11:47:14.755339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.475 qpair failed and we were unable to recover it. 00:35:14.475 [2024-11-02 11:47:14.755536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.475 [2024-11-02 11:47:14.755563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.475 qpair failed and we were unable to recover it. 00:35:14.475 [2024-11-02 11:47:14.755724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.475 [2024-11-02 11:47:14.755753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.475 qpair failed and we were unable to recover it. 00:35:14.475 [2024-11-02 11:47:14.755895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.475 [2024-11-02 11:47:14.755924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.475 qpair failed and we were unable to recover it. 
00:35:14.475 [2024-11-02 11:47:14.756065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.475 [2024-11-02 11:47:14.756095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.475 qpair failed and we were unable to recover it. 00:35:14.475 [2024-11-02 11:47:14.756244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.475 [2024-11-02 11:47:14.756279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.475 qpair failed and we were unable to recover it. 00:35:14.475 [2024-11-02 11:47:14.756481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.475 [2024-11-02 11:47:14.756510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.475 qpair failed and we were unable to recover it. 00:35:14.475 [2024-11-02 11:47:14.756699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.476 [2024-11-02 11:47:14.756726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.476 qpair failed and we were unable to recover it. 00:35:14.476 [2024-11-02 11:47:14.756930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.476 [2024-11-02 11:47:14.756960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.476 qpair failed and we were unable to recover it. 00:35:14.476 [2024-11-02 11:47:14.757141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.476 [2024-11-02 11:47:14.757168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.476 qpair failed and we were unable to recover it. 00:35:14.476 [2024-11-02 11:47:14.757298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.476 [2024-11-02 11:47:14.757348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.476 qpair failed and we were unable to recover it. 00:35:14.476 [2024-11-02 11:47:14.757496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.476 [2024-11-02 11:47:14.757526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.476 qpair failed and we were unable to recover it. 00:35:14.476 [2024-11-02 11:47:14.757690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.476 [2024-11-02 11:47:14.757717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.476 qpair failed and we were unable to recover it. 00:35:14.476 [2024-11-02 11:47:14.757854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.476 [2024-11-02 11:47:14.757881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.476 qpair failed and we were unable to recover it. 
00:35:14.476 [2024-11-02 11:47:14.758043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.476 [2024-11-02 11:47:14.758073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.476 qpair failed and we were unable to recover it. 00:35:14.476 [2024-11-02 11:47:14.758284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.476 [2024-11-02 11:47:14.758312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.476 qpair failed and we were unable to recover it. 00:35:14.476 [2024-11-02 11:47:14.758504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.476 [2024-11-02 11:47:14.758533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.476 qpair failed and we were unable to recover it. 00:35:14.476 [2024-11-02 11:47:14.758705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.476 [2024-11-02 11:47:14.758732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.476 qpair failed and we were unable to recover it. 00:35:14.476 [2024-11-02 11:47:14.758873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.476 [2024-11-02 11:47:14.758900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.476 qpair failed and we were unable to recover it. 00:35:14.476 [2024-11-02 11:47:14.759030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.476 [2024-11-02 11:47:14.759060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.476 qpair failed and we were unable to recover it. 00:35:14.476 [2024-11-02 11:47:14.759251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.476 [2024-11-02 11:47:14.759289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.476 qpair failed and we were unable to recover it. 00:35:14.476 [2024-11-02 11:47:14.759456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.476 [2024-11-02 11:47:14.759482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.476 qpair failed and we were unable to recover it. 00:35:14.476 [2024-11-02 11:47:14.759595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.476 [2024-11-02 11:47:14.759641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.476 qpair failed and we were unable to recover it. 00:35:14.476 [2024-11-02 11:47:14.759780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.476 [2024-11-02 11:47:14.759810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.476 qpair failed and we were unable to recover it. 
00:35:14.476 [2024-11-02 11:47:14.759962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.476 [2024-11-02 11:47:14.759991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.476 qpair failed and we were unable to recover it. 00:35:14.476 [2024-11-02 11:47:14.760143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.476 [2024-11-02 11:47:14.760169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.476 qpair failed and we were unable to recover it. 00:35:14.476 [2024-11-02 11:47:14.760297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.476 [2024-11-02 11:47:14.760342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.476 qpair failed and we were unable to recover it. 00:35:14.476 [2024-11-02 11:47:14.760520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.476 [2024-11-02 11:47:14.760553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.476 qpair failed and we were unable to recover it. 00:35:14.476 [2024-11-02 11:47:14.760722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.476 [2024-11-02 11:47:14.760753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.476 qpair failed and we were unable to recover it. 00:35:14.476 [2024-11-02 11:47:14.760900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.476 [2024-11-02 11:47:14.760928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.476 qpair failed and we were unable to recover it. 00:35:14.476 [2024-11-02 11:47:14.761065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.476 [2024-11-02 11:47:14.761092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.476 qpair failed and we were unable to recover it. 00:35:14.476 [2024-11-02 11:47:14.761269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.476 [2024-11-02 11:47:14.761312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.476 qpair failed and we were unable to recover it. 00:35:14.476 [2024-11-02 11:47:14.761455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.476 [2024-11-02 11:47:14.761485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.476 qpair failed and we were unable to recover it. 00:35:14.476 [2024-11-02 11:47:14.761660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.476 [2024-11-02 11:47:14.761687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.476 qpair failed and we were unable to recover it. 
00:35:14.476 [2024-11-02 11:47:14.761837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.476 [2024-11-02 11:47:14.761864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.476 qpair failed and we were unable to recover it. 00:35:14.476 [2024-11-02 11:47:14.761994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.476 [2024-11-02 11:47:14.762037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.476 qpair failed and we were unable to recover it. 00:35:14.476 [2024-11-02 11:47:14.762199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.476 [2024-11-02 11:47:14.762230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.476 qpair failed and we were unable to recover it. 00:35:14.476 [2024-11-02 11:47:14.762386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.476 [2024-11-02 11:47:14.762414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.476 qpair failed and we were unable to recover it. 00:35:14.476 [2024-11-02 11:47:14.762568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.476 [2024-11-02 11:47:14.762594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.476 qpair failed and we were unable to recover it. 00:35:14.476 [2024-11-02 11:47:14.762770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.476 [2024-11-02 11:47:14.762801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.476 qpair failed and we were unable to recover it. 00:35:14.477 [2024-11-02 11:47:14.762941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.477 [2024-11-02 11:47:14.762970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.477 qpair failed and we were unable to recover it. 00:35:14.477 [2024-11-02 11:47:14.763136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.477 [2024-11-02 11:47:14.763165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.477 qpair failed and we were unable to recover it. 00:35:14.477 [2024-11-02 11:47:14.763350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.477 [2024-11-02 11:47:14.763377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.477 qpair failed and we were unable to recover it. 00:35:14.477 [2024-11-02 11:47:14.763495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.477 [2024-11-02 11:47:14.763532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.477 qpair failed and we were unable to recover it. 
00:35:14.477 [2024-11-02 11:47:14.763760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.477 [2024-11-02 11:47:14.763787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.477 qpair failed and we were unable to recover it. 00:35:14.477 [2024-11-02 11:47:14.763907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.477 [2024-11-02 11:47:14.763934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.477 qpair failed and we were unable to recover it. 00:35:14.477 [2024-11-02 11:47:14.764058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.477 [2024-11-02 11:47:14.764084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.477 qpair failed and we were unable to recover it. 00:35:14.477 [2024-11-02 11:47:14.764244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.477 [2024-11-02 11:47:14.764304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.477 qpair failed and we were unable to recover it. 00:35:14.477 [2024-11-02 11:47:14.764457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.477 [2024-11-02 11:47:14.764486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.477 qpair failed and we were unable to recover it. 00:35:14.477 [2024-11-02 11:47:14.764620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.477 [2024-11-02 11:47:14.764646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.477 qpair failed and we were unable to recover it. 00:35:14.477 [2024-11-02 11:47:14.764791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.477 [2024-11-02 11:47:14.764817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.477 qpair failed and we were unable to recover it. 00:35:14.477 [2024-11-02 11:47:14.764955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.477 [2024-11-02 11:47:14.764982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.477 qpair failed and we were unable to recover it. 00:35:14.477 [2024-11-02 11:47:14.765134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.477 [2024-11-02 11:47:14.765177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.477 qpair failed and we were unable to recover it. 00:35:14.477 [2024-11-02 11:47:14.765342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.477 [2024-11-02 11:47:14.765369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.477 qpair failed and we were unable to recover it. 
00:35:14.477 [2024-11-02 11:47:14.765496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.477 [2024-11-02 11:47:14.765552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.477 qpair failed and we were unable to recover it. 00:35:14.477 [2024-11-02 11:47:14.765750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.477 [2024-11-02 11:47:14.765780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.477 qpair failed and we were unable to recover it. 00:35:14.477 [2024-11-02 11:47:14.765943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.477 [2024-11-02 11:47:14.765972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.477 qpair failed and we were unable to recover it. 00:35:14.477 [2024-11-02 11:47:14.766119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.477 [2024-11-02 11:47:14.766146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.477 qpair failed and we were unable to recover it. 00:35:14.477 [2024-11-02 11:47:14.766271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.477 [2024-11-02 11:47:14.766308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.477 qpair failed and we were unable to recover it. 00:35:14.477 [2024-11-02 11:47:14.766467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.477 [2024-11-02 11:47:14.766494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.477 qpair failed and we were unable to recover it. 00:35:14.477 [2024-11-02 11:47:14.766692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.477 [2024-11-02 11:47:14.766738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.477 qpair failed and we were unable to recover it. 00:35:14.477 [2024-11-02 11:47:14.766911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.477 [2024-11-02 11:47:14.766940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.477 qpair failed and we were unable to recover it. 00:35:14.477 [2024-11-02 11:47:14.767050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.477 [2024-11-02 11:47:14.767092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.477 qpair failed and we were unable to recover it. 00:35:14.477 [2024-11-02 11:47:14.767269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.477 [2024-11-02 11:47:14.767325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.477 qpair failed and we were unable to recover it. 
00:35:14.477 [2024-11-02 11:47:14.767451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.477 [2024-11-02 11:47:14.767478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.477 qpair failed and we were unable to recover it. 00:35:14.477 [2024-11-02 11:47:14.767631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.477 [2024-11-02 11:47:14.767657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.477 qpair failed and we were unable to recover it. 00:35:14.477 [2024-11-02 11:47:14.767805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.477 [2024-11-02 11:47:14.767848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.477 qpair failed and we were unable to recover it. 00:35:14.477 [2024-11-02 11:47:14.768046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.477 [2024-11-02 11:47:14.768073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.477 qpair failed and we were unable to recover it. 00:35:14.477 [2024-11-02 11:47:14.768238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.477 [2024-11-02 11:47:14.768277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.477 qpair failed and we were unable to recover it. 00:35:14.477 [2024-11-02 11:47:14.768439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.477 [2024-11-02 11:47:14.768466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.477 qpair failed and we were unable to recover it. 00:35:14.477 [2024-11-02 11:47:14.768590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.478 [2024-11-02 11:47:14.768639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.478 qpair failed and we were unable to recover it. 00:35:14.478 [2024-11-02 11:47:14.768780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.478 [2024-11-02 11:47:14.768810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.478 qpair failed and we were unable to recover it. 00:35:14.478 [2024-11-02 11:47:14.768951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.478 [2024-11-02 11:47:14.768980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.478 qpair failed and we were unable to recover it. 00:35:14.478 [2024-11-02 11:47:14.769198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.478 [2024-11-02 11:47:14.769228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.478 qpair failed and we were unable to recover it. 
00:35:14.478 [2024-11-02 11:47:14.769386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.478 [2024-11-02 11:47:14.769412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.478 qpair failed and we were unable to recover it. 00:35:14.478 [2024-11-02 11:47:14.769526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.478 [2024-11-02 11:47:14.769574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.478 qpair failed and we were unable to recover it. 00:35:14.478 [2024-11-02 11:47:14.769730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.478 [2024-11-02 11:47:14.769760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.478 qpair failed and we were unable to recover it. 00:35:14.478 [2024-11-02 11:47:14.769934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.478 [2024-11-02 11:47:14.769960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.478 qpair failed and we were unable to recover it. 00:35:14.478 [2024-11-02 11:47:14.770113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.478 [2024-11-02 11:47:14.770140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.478 qpair failed and we were unable to recover it. 00:35:14.478 [2024-11-02 11:47:14.770286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.478 [2024-11-02 11:47:14.770313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.478 qpair failed and we were unable to recover it. 00:35:14.478 [2024-11-02 11:47:14.770463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.478 [2024-11-02 11:47:14.770489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.478 qpair failed and we were unable to recover it. 00:35:14.478 [2024-11-02 11:47:14.770637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.478 [2024-11-02 11:47:14.770663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.478 qpair failed and we were unable to recover it. 00:35:14.478 [2024-11-02 11:47:14.770872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.478 [2024-11-02 11:47:14.770929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.478 qpair failed and we were unable to recover it. 00:35:14.478 [2024-11-02 11:47:14.771084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.478 [2024-11-02 11:47:14.771114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.478 qpair failed and we were unable to recover it. 
00:35:14.478 [2024-11-02 11:47:14.771279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.478 [2024-11-02 11:47:14.771309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.478 qpair failed and we were unable to recover it. 00:35:14.478 [2024-11-02 11:47:14.771474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.478 [2024-11-02 11:47:14.771500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.478 qpair failed and we were unable to recover it. 00:35:14.478 [2024-11-02 11:47:14.771625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.478 [2024-11-02 11:47:14.771670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.478 qpair failed and we were unable to recover it. 00:35:14.478 [2024-11-02 11:47:14.771876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.478 [2024-11-02 11:47:14.771903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.478 qpair failed and we were unable to recover it. 00:35:14.478 [2024-11-02 11:47:14.772076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.478 [2024-11-02 11:47:14.772105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.478 qpair failed and we were unable to recover it. 00:35:14.478 [2024-11-02 11:47:14.772284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.478 [2024-11-02 11:47:14.772321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.478 qpair failed and we were unable to recover it. 00:35:14.478 [2024-11-02 11:47:14.772497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.478 [2024-11-02 11:47:14.772550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.478 qpair failed and we were unable to recover it. 00:35:14.478 [2024-11-02 11:47:14.772731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.478 [2024-11-02 11:47:14.772758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.478 qpair failed and we were unable to recover it. 00:35:14.478 [2024-11-02 11:47:14.772909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.478 [2024-11-02 11:47:14.772935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.478 qpair failed and we were unable to recover it. 00:35:14.478 [2024-11-02 11:47:14.773109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.478 [2024-11-02 11:47:14.773136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.478 qpair failed and we were unable to recover it. 
00:35:14.478 [2024-11-02 11:47:14.773267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:14.478 [2024-11-02 11:47:14.773294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420
00:35:14.478 qpair failed and we were unable to recover it.
00:35:14.478 [... the same three-line failure (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats continuously through 2024-11-02 11:47:14.812, wall clock 00:35:14.478-00:35:14.484 ...]
00:35:14.484 [2024-11-02 11:47:14.812310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.484 [2024-11-02 11:47:14.812341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.484 qpair failed and we were unable to recover it. 00:35:14.484 [2024-11-02 11:47:14.812485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.484 [2024-11-02 11:47:14.812512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.484 qpair failed and we were unable to recover it. 00:35:14.484 [2024-11-02 11:47:14.812702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.484 [2024-11-02 11:47:14.812731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.484 qpair failed and we were unable to recover it. 00:35:14.484 [2024-11-02 11:47:14.812891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.484 [2024-11-02 11:47:14.812920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.484 qpair failed and we were unable to recover it. 00:35:14.484 [2024-11-02 11:47:14.813070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.484 [2024-11-02 11:47:14.813100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.484 qpair failed and we were unable to recover it. 00:35:14.484 [2024-11-02 11:47:14.813291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.484 [2024-11-02 11:47:14.813354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.484 qpair failed and we were unable to recover it. 00:35:14.484 [2024-11-02 11:47:14.813523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.484 [2024-11-02 11:47:14.813550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.484 qpair failed and we were unable to recover it. 00:35:14.484 [2024-11-02 11:47:14.813701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.484 [2024-11-02 11:47:14.813743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.484 qpair failed and we were unable to recover it. 00:35:14.484 [2024-11-02 11:47:14.813933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.484 [2024-11-02 11:47:14.813959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.484 qpair failed and we were unable to recover it. 00:35:14.484 [2024-11-02 11:47:14.814113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.484 [2024-11-02 11:47:14.814140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.484 qpair failed and we were unable to recover it. 
00:35:14.484 [2024-11-02 11:47:14.814284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.484 [2024-11-02 11:47:14.814315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.484 qpair failed and we were unable to recover it. 00:35:14.484 [2024-11-02 11:47:14.814451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.484 [2024-11-02 11:47:14.814482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.484 qpair failed and we were unable to recover it. 00:35:14.484 [2024-11-02 11:47:14.814641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.484 [2024-11-02 11:47:14.814668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.484 qpair failed and we were unable to recover it. 00:35:14.484 [2024-11-02 11:47:14.814838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.484 [2024-11-02 11:47:14.814864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.485 qpair failed and we were unable to recover it. 00:35:14.485 [2024-11-02 11:47:14.815011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.485 [2024-11-02 11:47:14.815041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.485 qpair failed and we were unable to recover it. 00:35:14.485 [2024-11-02 11:47:14.815215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.485 [2024-11-02 11:47:14.815242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.485 qpair failed and we were unable to recover it. 00:35:14.485 [2024-11-02 11:47:14.815375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.485 [2024-11-02 11:47:14.815402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.485 qpair failed and we were unable to recover it. 00:35:14.485 [2024-11-02 11:47:14.815572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.485 [2024-11-02 11:47:14.815599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.485 qpair failed and we were unable to recover it. 00:35:14.485 [2024-11-02 11:47:14.815719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.485 [2024-11-02 11:47:14.815746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.485 qpair failed and we were unable to recover it. 00:35:14.485 [2024-11-02 11:47:14.815853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.485 [2024-11-02 11:47:14.815880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.485 qpair failed and we were unable to recover it. 
00:35:14.485 [2024-11-02 11:47:14.816008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.485 [2024-11-02 11:47:14.816045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.485 qpair failed and we were unable to recover it. 00:35:14.485 [2024-11-02 11:47:14.816209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.485 [2024-11-02 11:47:14.816235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.485 qpair failed and we were unable to recover it. 00:35:14.485 [2024-11-02 11:47:14.816362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.485 [2024-11-02 11:47:14.816405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.485 qpair failed and we were unable to recover it. 00:35:14.485 [2024-11-02 11:47:14.816538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.485 [2024-11-02 11:47:14.816568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.485 qpair failed and we were unable to recover it. 00:35:14.485 [2024-11-02 11:47:14.816745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.485 [2024-11-02 11:47:14.816771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.485 qpair failed and we were unable to recover it. 00:35:14.485 [2024-11-02 11:47:14.816884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.485 [2024-11-02 11:47:14.816911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.485 qpair failed and we were unable to recover it. 00:35:14.485 [2024-11-02 11:47:14.817065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.485 [2024-11-02 11:47:14.817092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.485 qpair failed and we were unable to recover it. 00:35:14.485 [2024-11-02 11:47:14.817219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.485 [2024-11-02 11:47:14.817253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.485 qpair failed and we were unable to recover it. 00:35:14.485 [2024-11-02 11:47:14.817445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.485 [2024-11-02 11:47:14.817475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.485 qpair failed and we were unable to recover it. 00:35:14.485 [2024-11-02 11:47:14.817657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.485 [2024-11-02 11:47:14.817683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.485 qpair failed and we were unable to recover it. 
00:35:14.485 [2024-11-02 11:47:14.817883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.485 [2024-11-02 11:47:14.817931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.485 qpair failed and we were unable to recover it. 00:35:14.485 [2024-11-02 11:47:14.818116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.485 [2024-11-02 11:47:14.818145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.485 qpair failed and we were unable to recover it. 00:35:14.485 [2024-11-02 11:47:14.818319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.485 [2024-11-02 11:47:14.818347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.485 qpair failed and we were unable to recover it. 00:35:14.485 [2024-11-02 11:47:14.818492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.485 [2024-11-02 11:47:14.818519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.485 qpair failed and we were unable to recover it. 00:35:14.485 [2024-11-02 11:47:14.818678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.485 [2024-11-02 11:47:14.818705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.485 qpair failed and we were unable to recover it. 00:35:14.485 [2024-11-02 11:47:14.818906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.485 [2024-11-02 11:47:14.818936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.485 qpair failed and we were unable to recover it. 00:35:14.485 [2024-11-02 11:47:14.819065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.485 [2024-11-02 11:47:14.819094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.485 qpair failed and we were unable to recover it. 00:35:14.485 [2024-11-02 11:47:14.819266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.485 [2024-11-02 11:47:14.819294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.485 qpair failed and we were unable to recover it. 00:35:14.485 [2024-11-02 11:47:14.819412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.485 [2024-11-02 11:47:14.819456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.485 qpair failed and we were unable to recover it. 00:35:14.485 [2024-11-02 11:47:14.819623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.485 [2024-11-02 11:47:14.819652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.485 qpair failed and we were unable to recover it. 
00:35:14.485 [2024-11-02 11:47:14.819812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.485 [2024-11-02 11:47:14.819843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.485 qpair failed and we were unable to recover it. 00:35:14.485 [2024-11-02 11:47:14.820010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.485 [2024-11-02 11:47:14.820038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.485 qpair failed and we were unable to recover it. 00:35:14.485 [2024-11-02 11:47:14.820165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.485 [2024-11-02 11:47:14.820191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.485 qpair failed and we were unable to recover it. 00:35:14.485 [2024-11-02 11:47:14.820353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.485 [2024-11-02 11:47:14.820397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.485 qpair failed and we were unable to recover it. 00:35:14.485 [2024-11-02 11:47:14.820575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.485 [2024-11-02 11:47:14.820602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.485 qpair failed and we were unable to recover it. 00:35:14.485 [2024-11-02 11:47:14.820749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.485 [2024-11-02 11:47:14.820776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.485 qpair failed and we were unable to recover it. 00:35:14.485 [2024-11-02 11:47:14.820915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.485 [2024-11-02 11:47:14.820942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.485 qpair failed and we were unable to recover it. 00:35:14.485 [2024-11-02 11:47:14.821141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.485 [2024-11-02 11:47:14.821174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.485 qpair failed and we were unable to recover it. 00:35:14.485 [2024-11-02 11:47:14.821351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.485 [2024-11-02 11:47:14.821381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.485 qpair failed and we were unable to recover it. 00:35:14.485 [2024-11-02 11:47:14.821547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.485 [2024-11-02 11:47:14.821573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.485 qpair failed and we were unable to recover it. 
00:35:14.485 [2024-11-02 11:47:14.821716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.485 [2024-11-02 11:47:14.821743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.485 qpair failed and we were unable to recover it. 00:35:14.485 [2024-11-02 11:47:14.821932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.485 [2024-11-02 11:47:14.821962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.485 qpair failed and we were unable to recover it. 00:35:14.485 [2024-11-02 11:47:14.822129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.486 [2024-11-02 11:47:14.822159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.486 qpair failed and we were unable to recover it. 00:35:14.486 [2024-11-02 11:47:14.822296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.486 [2024-11-02 11:47:14.822326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.486 qpair failed and we were unable to recover it. 00:35:14.486 [2024-11-02 11:47:14.822513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.486 [2024-11-02 11:47:14.822548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.486 qpair failed and we were unable to recover it. 00:35:14.486 [2024-11-02 11:47:14.822713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.486 [2024-11-02 11:47:14.822739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.486 qpair failed and we were unable to recover it. 00:35:14.486 [2024-11-02 11:47:14.822887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.486 [2024-11-02 11:47:14.822913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.486 qpair failed and we were unable to recover it. 00:35:14.486 [2024-11-02 11:47:14.823072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.486 [2024-11-02 11:47:14.823098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.486 qpair failed and we were unable to recover it. 00:35:14.486 [2024-11-02 11:47:14.823220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.486 [2024-11-02 11:47:14.823247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.486 qpair failed and we were unable to recover it. 00:35:14.486 [2024-11-02 11:47:14.823409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.486 [2024-11-02 11:47:14.823440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.486 qpair failed and we were unable to recover it. 
00:35:14.486 [2024-11-02 11:47:14.823636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.486 [2024-11-02 11:47:14.823663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.486 qpair failed and we were unable to recover it. 00:35:14.486 [2024-11-02 11:47:14.823810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.486 [2024-11-02 11:47:14.823837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.486 qpair failed and we were unable to recover it. 00:35:14.486 [2024-11-02 11:47:14.823962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.486 [2024-11-02 11:47:14.823988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.486 qpair failed and we were unable to recover it. 00:35:14.486 [2024-11-02 11:47:14.824150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.486 [2024-11-02 11:47:14.824177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.486 qpair failed and we were unable to recover it. 00:35:14.486 [2024-11-02 11:47:14.824347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.486 [2024-11-02 11:47:14.824388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.486 qpair failed and we were unable to recover it. 00:35:14.486 [2024-11-02 11:47:14.824546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.486 [2024-11-02 11:47:14.824574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.486 qpair failed and we were unable to recover it. 00:35:14.486 [2024-11-02 11:47:14.824697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.486 [2024-11-02 11:47:14.824724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.486 qpair failed and we were unable to recover it. 00:35:14.486 [2024-11-02 11:47:14.824897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.486 [2024-11-02 11:47:14.824929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.486 qpair failed and we were unable to recover it. 00:35:14.829 [2024-11-02 11:47:14.825134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.829 [2024-11-02 11:47:14.825164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.829 qpair failed and we were unable to recover it. 00:35:14.829 [2024-11-02 11:47:14.825325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.829 [2024-11-02 11:47:14.825353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.829 qpair failed and we were unable to recover it. 
00:35:14.829 [2024-11-02 11:47:14.825516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.829 [2024-11-02 11:47:14.825547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.829 qpair failed and we were unable to recover it. 00:35:14.829 [2024-11-02 11:47:14.825751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.829 [2024-11-02 11:47:14.825778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.829 qpair failed and we were unable to recover it. 00:35:14.829 [2024-11-02 11:47:14.825950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.829 [2024-11-02 11:47:14.825980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.829 qpair failed and we were unable to recover it. 00:35:14.829 [2024-11-02 11:47:14.826175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.829 [2024-11-02 11:47:14.826202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.829 qpair failed and we were unable to recover it. 00:35:14.829 [2024-11-02 11:47:14.826361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.829 [2024-11-02 11:47:14.826391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.829 qpair failed and we were unable to recover it. 00:35:14.829 [2024-11-02 11:47:14.826556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.829 [2024-11-02 11:47:14.826586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.829 qpair failed and we were unable to recover it. 00:35:14.829 [2024-11-02 11:47:14.826752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.829 [2024-11-02 11:47:14.826784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.829 qpair failed and we were unable to recover it. 00:35:14.829 [2024-11-02 11:47:14.826958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.829 [2024-11-02 11:47:14.826985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.829 qpair failed and we were unable to recover it. 00:35:14.829 [2024-11-02 11:47:14.827124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.829 [2024-11-02 11:47:14.827169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.829 qpair failed and we were unable to recover it. 00:35:14.829 [2024-11-02 11:47:14.827332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.829 [2024-11-02 11:47:14.827362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.829 qpair failed and we were unable to recover it. 
00:35:14.829 [2024-11-02 11:47:14.827534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.829 [2024-11-02 11:47:14.827565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.829 qpair failed and we were unable to recover it. 00:35:14.829 [2024-11-02 11:47:14.827736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.829 [2024-11-02 11:47:14.827776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.829 qpair failed and we were unable to recover it. 00:35:14.829 [2024-11-02 11:47:14.827897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.829 [2024-11-02 11:47:14.827923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.829 qpair failed and we were unable to recover it. 00:35:14.829 [2024-11-02 11:47:14.828043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.829 [2024-11-02 11:47:14.828071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.829 qpair failed and we were unable to recover it. 00:35:14.829 [2024-11-02 11:47:14.828230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.829 [2024-11-02 11:47:14.828265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.829 qpair failed and we were unable to recover it. 00:35:14.829 [2024-11-02 11:47:14.828415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.829 [2024-11-02 11:47:14.828442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.829 qpair failed and we were unable to recover it. 00:35:14.829 [2024-11-02 11:47:14.828561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.829 [2024-11-02 11:47:14.828605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.829 qpair failed and we were unable to recover it. 00:35:14.829 [2024-11-02 11:47:14.828743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.829 [2024-11-02 11:47:14.828773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.829 qpair failed and we were unable to recover it. 00:35:14.829 [2024-11-02 11:47:14.828933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.829 [2024-11-02 11:47:14.828965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.829 qpair failed and we were unable to recover it. 00:35:14.829 [2024-11-02 11:47:14.829120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.829 [2024-11-02 11:47:14.829148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.829 qpair failed and we were unable to recover it. 
00:35:14.829 [2024-11-02 11:47:14.829303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.829 [2024-11-02 11:47:14.829353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.829 qpair failed and we were unable to recover it. 00:35:14.829 [2024-11-02 11:47:14.829514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.829 [2024-11-02 11:47:14.829544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.829 qpair failed and we were unable to recover it. 00:35:14.829 [2024-11-02 11:47:14.829706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.829 [2024-11-02 11:47:14.829736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.829 qpair failed and we were unable to recover it. 00:35:14.829 [2024-11-02 11:47:14.829900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.829 [2024-11-02 11:47:14.829927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.829 qpair failed and we were unable to recover it. 00:35:14.830 [2024-11-02 11:47:14.830070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.830 [2024-11-02 11:47:14.830115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.830 qpair failed and we were unable to recover it. 00:35:14.830 [2024-11-02 11:47:14.830289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.830 [2024-11-02 11:47:14.830320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.830 qpair failed and we were unable to recover it. 00:35:14.830 [2024-11-02 11:47:14.830481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.830 [2024-11-02 11:47:14.830513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.830 qpair failed and we were unable to recover it. 00:35:14.830 [2024-11-02 11:47:14.830694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.830 [2024-11-02 11:47:14.830722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.830 qpair failed and we were unable to recover it. 00:35:14.830 [2024-11-02 11:47:14.830873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.830 [2024-11-02 11:47:14.830901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.830 qpair failed and we were unable to recover it. 00:35:14.830 [2024-11-02 11:47:14.831024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.830 [2024-11-02 11:47:14.831051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.830 qpair failed and we were unable to recover it. 
00:35:14.830 [2024-11-02 11:47:14.831195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.830 [2024-11-02 11:47:14.831222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.830 qpair failed and we were unable to recover it. 00:35:14.830 [2024-11-02 11:47:14.831345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.830 [2024-11-02 11:47:14.831372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.830 qpair failed and we were unable to recover it. 00:35:14.830 [2024-11-02 11:47:14.831494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.830 [2024-11-02 11:47:14.831521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.830 qpair failed and we were unable to recover it. 00:35:14.830 [2024-11-02 11:47:14.831727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.830 [2024-11-02 11:47:14.831754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.830 qpair failed and we were unable to recover it. 00:35:14.830 [2024-11-02 11:47:14.831903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.830 [2024-11-02 11:47:14.831946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.830 qpair failed and we were unable to recover it. 00:35:14.830 [2024-11-02 11:47:14.832103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.830 [2024-11-02 11:47:14.832132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.830 qpair failed and we were unable to recover it. 00:35:14.830 [2024-11-02 11:47:14.832275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.830 [2024-11-02 11:47:14.832320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.830 qpair failed and we were unable to recover it. 00:35:14.830 [2024-11-02 11:47:14.832437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.830 [2024-11-02 11:47:14.832463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.830 qpair failed and we were unable to recover it. 00:35:14.830 [2024-11-02 11:47:14.832658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.830 [2024-11-02 11:47:14.832687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.830 qpair failed and we were unable to recover it. 00:35:14.830 [2024-11-02 11:47:14.832907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.830 [2024-11-02 11:47:14.832934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.830 qpair failed and we were unable to recover it. 
00:35:14.830 [2024-11-02 11:47:14.833072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.830 [2024-11-02 11:47:14.833101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.830 qpair failed and we were unable to recover it. 00:35:14.830 [2024-11-02 11:47:14.833239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.830 [2024-11-02 11:47:14.833276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.830 qpair failed and we were unable to recover it. 00:35:14.830 [2024-11-02 11:47:14.833480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.830 [2024-11-02 11:47:14.833507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.830 qpair failed and we were unable to recover it. 00:35:14.830 [2024-11-02 11:47:14.833628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.830 [2024-11-02 11:47:14.833655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.830 qpair failed and we were unable to recover it. 00:35:14.830 [2024-11-02 11:47:14.833803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.830 [2024-11-02 11:47:14.833830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.830 qpair failed and we were unable to recover it. 00:35:14.830 [2024-11-02 11:47:14.833988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.830 [2024-11-02 11:47:14.834032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.830 qpair failed and we were unable to recover it. 00:35:14.830 [2024-11-02 11:47:14.834199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.830 [2024-11-02 11:47:14.834226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.830 qpair failed and we were unable to recover it. 00:35:14.830 [2024-11-02 11:47:14.834390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.830 [2024-11-02 11:47:14.834417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.830 qpair failed and we were unable to recover it. 00:35:14.830 [2024-11-02 11:47:14.834592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.830 [2024-11-02 11:47:14.834640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.830 qpair failed and we were unable to recover it. 00:35:14.830 [2024-11-02 11:47:14.834836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.830 [2024-11-02 11:47:14.834862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.830 qpair failed and we were unable to recover it. 
00:35:14.830 [2024-11-02 11:47:14.835018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.830 [2024-11-02 11:47:14.835060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.830 qpair failed and we were unable to recover it. 00:35:14.830 [2024-11-02 11:47:14.835253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.830 [2024-11-02 11:47:14.835290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.830 qpair failed and we were unable to recover it. 00:35:14.830 [2024-11-02 11:47:14.835411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.830 [2024-11-02 11:47:14.835438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.830 qpair failed and we were unable to recover it. 00:35:14.830 [2024-11-02 11:47:14.835575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.830 [2024-11-02 11:47:14.835602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.830 qpair failed and we were unable to recover it. 00:35:14.830 [2024-11-02 11:47:14.835750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.830 [2024-11-02 11:47:14.835778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.830 qpair failed and we were unable to recover it. 00:35:14.830 [2024-11-02 11:47:14.835902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.830 [2024-11-02 11:47:14.835929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.830 qpair failed and we were unable to recover it. 00:35:14.830 [2024-11-02 11:47:14.836073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.830 [2024-11-02 11:47:14.836103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.830 qpair failed and we were unable to recover it. 00:35:14.830 [2024-11-02 11:47:14.836237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.830 [2024-11-02 11:47:14.836277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.830 qpair failed and we were unable to recover it. 00:35:14.830 [2024-11-02 11:47:14.836421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.830 [2024-11-02 11:47:14.836448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.830 qpair failed and we were unable to recover it. 00:35:14.830 [2024-11-02 11:47:14.836569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.830 [2024-11-02 11:47:14.836595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.830 qpair failed and we were unable to recover it. 
00:35:14.830 [2024-11-02 11:47:14.836767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.830 [2024-11-02 11:47:14.836797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.830 qpair failed and we were unable to recover it. 00:35:14.830 [2024-11-02 11:47:14.836933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.830 [2024-11-02 11:47:14.836962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.831 qpair failed and we were unable to recover it. 00:35:14.831 [2024-11-02 11:47:14.837147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.831 [2024-11-02 11:47:14.837174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.831 qpair failed and we were unable to recover it. 00:35:14.831 [2024-11-02 11:47:14.837323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.831 [2024-11-02 11:47:14.837351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.831 qpair failed and we were unable to recover it. 00:35:14.831 [2024-11-02 11:47:14.837474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.831 [2024-11-02 11:47:14.837502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.831 qpair failed and we were unable to recover it. 00:35:14.831 [2024-11-02 11:47:14.837647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.831 [2024-11-02 11:47:14.837674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.831 qpair failed and we were unable to recover it. 00:35:14.831 [2024-11-02 11:47:14.837834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.831 [2024-11-02 11:47:14.837864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.831 qpair failed and we were unable to recover it. 00:35:14.831 [2024-11-02 11:47:14.837995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.831 [2024-11-02 11:47:14.838022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.831 qpair failed and we were unable to recover it. 00:35:14.831 [2024-11-02 11:47:14.838171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.831 [2024-11-02 11:47:14.838198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.831 qpair failed and we were unable to recover it. 00:35:14.831 [2024-11-02 11:47:14.838324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.831 [2024-11-02 11:47:14.838350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.831 qpair failed and we were unable to recover it. 
00:35:14.831 [2024-11-02 11:47:14.838499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.831 [2024-11-02 11:47:14.838529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.831 qpair failed and we were unable to recover it. 00:35:14.831 [2024-11-02 11:47:14.838693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.831 [2024-11-02 11:47:14.838720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.831 qpair failed and we were unable to recover it. 00:35:14.831 [2024-11-02 11:47:14.838833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.831 [2024-11-02 11:47:14.838876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.831 qpair failed and we were unable to recover it. 00:35:14.831 [2024-11-02 11:47:14.839015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.831 [2024-11-02 11:47:14.839044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.831 qpair failed and we were unable to recover it. 00:35:14.831 [2024-11-02 11:47:14.839217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.831 [2024-11-02 11:47:14.839244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.831 qpair failed and we were unable to recover it. 00:35:14.831 [2024-11-02 11:47:14.839377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.831 [2024-11-02 11:47:14.839404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.831 qpair failed and we were unable to recover it. 00:35:14.831 [2024-11-02 11:47:14.839531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.831 [2024-11-02 11:47:14.839559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.831 qpair failed and we were unable to recover it. 00:35:14.831 [2024-11-02 11:47:14.839677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.831 [2024-11-02 11:47:14.839704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.831 qpair failed and we were unable to recover it. 00:35:14.831 [2024-11-02 11:47:14.839854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.831 [2024-11-02 11:47:14.839880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.831 qpair failed and we were unable to recover it. 00:35:14.831 [2024-11-02 11:47:14.840023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.831 [2024-11-02 11:47:14.840054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.831 qpair failed and we were unable to recover it. 
00:35:14.831 [2024-11-02 11:47:14.840194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.831 [2024-11-02 11:47:14.840223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.831 qpair failed and we were unable to recover it. 00:35:14.831 [2024-11-02 11:47:14.840368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.831 [2024-11-02 11:47:14.840398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.831 qpair failed and we were unable to recover it. 00:35:14.831 [2024-11-02 11:47:14.840555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.831 [2024-11-02 11:47:14.840585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.831 qpair failed and we were unable to recover it. 00:35:14.831 [2024-11-02 11:47:14.840731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.831 [2024-11-02 11:47:14.840758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.831 qpair failed and we were unable to recover it. 00:35:14.831 [2024-11-02 11:47:14.840883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.831 [2024-11-02 11:47:14.840910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.831 qpair failed and we were unable to recover it. 00:35:14.831 [2024-11-02 11:47:14.841119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.831 [2024-11-02 11:47:14.841146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.831 qpair failed and we were unable to recover it. 00:35:14.831 [2024-11-02 11:47:14.841270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.831 [2024-11-02 11:47:14.841298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.831 qpair failed and we were unable to recover it. 00:35:14.831 [2024-11-02 11:47:14.841527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.831 [2024-11-02 11:47:14.841553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.831 qpair failed and we were unable to recover it. 00:35:14.831 [2024-11-02 11:47:14.841708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.831 [2024-11-02 11:47:14.841757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.831 qpair failed and we were unable to recover it. 00:35:14.831 [2024-11-02 11:47:14.841919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.831 [2024-11-02 11:47:14.841949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.831 qpair failed and we were unable to recover it. 
00:35:14.831 [2024-11-02 11:47:14.842091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.831 [2024-11-02 11:47:14.842120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.831 qpair failed and we were unable to recover it. 00:35:14.831 [2024-11-02 11:47:14.842321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.831 [2024-11-02 11:47:14.842349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.831 qpair failed and we were unable to recover it. 00:35:14.831 [2024-11-02 11:47:14.842467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.831 [2024-11-02 11:47:14.842494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.831 qpair failed and we were unable to recover it. 00:35:14.831 [2024-11-02 11:47:14.842676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.831 [2024-11-02 11:47:14.842703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.831 qpair failed and we were unable to recover it. 00:35:14.831 [2024-11-02 11:47:14.842839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.831 [2024-11-02 11:47:14.842866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.831 qpair failed and we were unable to recover it. 00:35:14.831 [2024-11-02 11:47:14.843015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.831 [2024-11-02 11:47:14.843042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.831 qpair failed and we were unable to recover it. 00:35:14.831 [2024-11-02 11:47:14.843168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.831 [2024-11-02 11:47:14.843195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.831 qpair failed and we were unable to recover it. 00:35:14.831 [2024-11-02 11:47:14.843349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.831 [2024-11-02 11:47:14.843393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.831 qpair failed and we were unable to recover it. 00:35:14.831 [2024-11-02 11:47:14.843541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.831 [2024-11-02 11:47:14.843570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.831 qpair failed and we were unable to recover it. 00:35:14.831 [2024-11-02 11:47:14.843766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.831 [2024-11-02 11:47:14.843793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.831 qpair failed and we were unable to recover it. 
00:35:14.831 [2024-11-02 11:47:14.843937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.832 [2024-11-02 11:47:14.843966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.832 qpair failed and we were unable to recover it. 00:35:14.832 [2024-11-02 11:47:14.844115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.832 [2024-11-02 11:47:14.844142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.832 qpair failed and we were unable to recover it. 00:35:14.832 [2024-11-02 11:47:14.844300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.832 [2024-11-02 11:47:14.844327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.832 qpair failed and we were unable to recover it. 00:35:14.832 [2024-11-02 11:47:14.844487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.832 [2024-11-02 11:47:14.844514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.832 qpair failed and we were unable to recover it. 00:35:14.832 [2024-11-02 11:47:14.844630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.832 [2024-11-02 11:47:14.844656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.832 qpair failed and we were unable to recover it. 00:35:14.832 [2024-11-02 11:47:14.844838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.832 [2024-11-02 11:47:14.844868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.832 qpair failed and we were unable to recover it. 00:35:14.832 [2024-11-02 11:47:14.845042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.832 [2024-11-02 11:47:14.845074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.832 qpair failed and we were unable to recover it. 00:35:14.832 [2024-11-02 11:47:14.845226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.832 [2024-11-02 11:47:14.845253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.832 qpair failed and we were unable to recover it. 00:35:14.832 [2024-11-02 11:47:14.845406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.832 [2024-11-02 11:47:14.845436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.832 qpair failed and we were unable to recover it. 00:35:14.832 [2024-11-02 11:47:14.845586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.832 [2024-11-02 11:47:14.845613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.832 qpair failed and we were unable to recover it. 
00:35:14.832 [2024-11-02 11:47:14.845739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.832 [2024-11-02 11:47:14.845766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.832 qpair failed and we were unable to recover it. 00:35:14.832 [2024-11-02 11:47:14.845892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.832 [2024-11-02 11:47:14.845919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.832 qpair failed and we were unable to recover it. 00:35:14.832 [2024-11-02 11:47:14.846105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.832 [2024-11-02 11:47:14.846135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.832 qpair failed and we were unable to recover it. 00:35:14.832 [2024-11-02 11:47:14.846289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.832 [2024-11-02 11:47:14.846320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.832 qpair failed and we were unable to recover it. 00:35:14.832 [2024-11-02 11:47:14.846460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.832 [2024-11-02 11:47:14.846490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.832 qpair failed and we were unable to recover it. 00:35:14.832 [2024-11-02 11:47:14.846624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.832 [2024-11-02 11:47:14.846651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.832 qpair failed and we were unable to recover it. 00:35:14.832 [2024-11-02 11:47:14.846776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.832 [2024-11-02 11:47:14.846802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.832 qpair failed and we were unable to recover it. 00:35:14.832 [2024-11-02 11:47:14.846949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.832 [2024-11-02 11:47:14.846976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.832 qpair failed and we were unable to recover it. 00:35:14.832 [2024-11-02 11:47:14.847147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.832 [2024-11-02 11:47:14.847176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.832 qpair failed and we were unable to recover it. 00:35:14.832 [2024-11-02 11:47:14.847348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.832 [2024-11-02 11:47:14.847376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.832 qpair failed and we were unable to recover it. 
00:35:14.832 [2024-11-02 11:47:14.847524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.832 [2024-11-02 11:47:14.847569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.832 qpair failed and we were unable to recover it. 00:35:14.832 [2024-11-02 11:47:14.847738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.832 [2024-11-02 11:47:14.847767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.832 qpair failed and we were unable to recover it. 00:35:14.832 [2024-11-02 11:47:14.847929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.832 [2024-11-02 11:47:14.847958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.832 qpair failed and we were unable to recover it. 00:35:14.832 [2024-11-02 11:47:14.848096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.832 [2024-11-02 11:47:14.848123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.832 qpair failed and we were unable to recover it. 00:35:14.832 [2024-11-02 11:47:14.848276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.832 [2024-11-02 11:47:14.848303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.832 qpair failed and we were unable to recover it. 00:35:14.832 [2024-11-02 11:47:14.848458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.832 [2024-11-02 11:47:14.848485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.832 qpair failed and we were unable to recover it. 00:35:14.832 [2024-11-02 11:47:14.848607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.832 [2024-11-02 11:47:14.848634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.832 qpair failed and we were unable to recover it. 00:35:14.832 [2024-11-02 11:47:14.848774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.832 [2024-11-02 11:47:14.848801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.832 qpair failed and we were unable to recover it. 00:35:14.832 [2024-11-02 11:47:14.848949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.832 [2024-11-02 11:47:14.848975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.832 qpair failed and we were unable to recover it. 00:35:14.832 [2024-11-02 11:47:14.849109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.832 [2024-11-02 11:47:14.849140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.832 qpair failed and we were unable to recover it. 
00:35:14.832 [2024-11-02 11:47:14.849308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.832 [2024-11-02 11:47:14.849336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.832 qpair failed and we were unable to recover it. 00:35:14.832 [2024-11-02 11:47:14.849488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.832 [2024-11-02 11:47:14.849515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.832 qpair failed and we were unable to recover it. 00:35:14.832 [2024-11-02 11:47:14.849700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.832 [2024-11-02 11:47:14.849730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.832 qpair failed and we were unable to recover it. 00:35:14.832 [2024-11-02 11:47:14.849899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.832 [2024-11-02 11:47:14.849926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.832 qpair failed and we were unable to recover it. 00:35:14.832 [2024-11-02 11:47:14.850053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.832 [2024-11-02 11:47:14.850080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.832 qpair failed and we were unable to recover it. 00:35:14.832 [2024-11-02 11:47:14.850284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.832 [2024-11-02 11:47:14.850312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.832 qpair failed and we were unable to recover it. 00:35:14.832 [2024-11-02 11:47:14.850477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.832 [2024-11-02 11:47:14.850507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.832 qpair failed and we were unable to recover it. 00:35:14.832 [2024-11-02 11:47:14.850683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.832 [2024-11-02 11:47:14.850709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.832 qpair failed and we were unable to recover it. 00:35:14.833 [2024-11-02 11:47:14.850863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.833 [2024-11-02 11:47:14.850890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.833 qpair failed and we were unable to recover it. 00:35:14.833 [2024-11-02 11:47:14.851039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.833 [2024-11-02 11:47:14.851066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.833 qpair failed and we were unable to recover it. 
00:35:14.833 [2024-11-02 11:47:14.851244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.833 [2024-11-02 11:47:14.851284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.833 qpair failed and we were unable to recover it. 00:35:14.833 [2024-11-02 11:47:14.851410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.833 [2024-11-02 11:47:14.851437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.833 qpair failed and we were unable to recover it. 00:35:14.833 [2024-11-02 11:47:14.851602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.833 [2024-11-02 11:47:14.851629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.833 qpair failed and we were unable to recover it. 00:35:14.833 [2024-11-02 11:47:14.851780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.833 [2024-11-02 11:47:14.851807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.833 qpair failed and we were unable to recover it. 00:35:14.833 [2024-11-02 11:47:14.851960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.833 [2024-11-02 11:47:14.851987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.833 qpair failed and we were unable to recover it. 00:35:14.833 [2024-11-02 11:47:14.852149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.833 [2024-11-02 11:47:14.852178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.833 qpair failed and we were unable to recover it. 00:35:14.833 [2024-11-02 11:47:14.852331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.833 [2024-11-02 11:47:14.852360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.833 qpair failed and we were unable to recover it. 00:35:14.833 [2024-11-02 11:47:14.852550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.833 [2024-11-02 11:47:14.852577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.833 qpair failed and we were unable to recover it. 00:35:14.833 [2024-11-02 11:47:14.852733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.833 [2024-11-02 11:47:14.852780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.833 qpair failed and we were unable to recover it. 00:35:14.833 [2024-11-02 11:47:14.852909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.833 [2024-11-02 11:47:14.852940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.833 qpair failed and we were unable to recover it. 
00:35:14.833 [2024-11-02 11:47:14.853133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.833 [2024-11-02 11:47:14.853163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.833 qpair failed and we were unable to recover it. 00:35:14.833 [2024-11-02 11:47:14.853334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.833 [2024-11-02 11:47:14.853361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.833 qpair failed and we were unable to recover it. 00:35:14.833 [2024-11-02 11:47:14.853521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.833 [2024-11-02 11:47:14.853550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.833 qpair failed and we were unable to recover it. 00:35:14.833 [2024-11-02 11:47:14.853722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.833 [2024-11-02 11:47:14.853749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.833 qpair failed and we were unable to recover it. 00:35:14.833 [2024-11-02 11:47:14.853900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.833 [2024-11-02 11:47:14.853927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.833 qpair failed and we were unable to recover it. 00:35:14.833 [2024-11-02 11:47:14.854098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.833 [2024-11-02 11:47:14.854127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.833 qpair failed and we were unable to recover it. 00:35:14.833 [2024-11-02 11:47:14.854300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.833 [2024-11-02 11:47:14.854327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.833 qpair failed and we were unable to recover it. 00:35:14.833 [2024-11-02 11:47:14.854475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.833 [2024-11-02 11:47:14.854502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.833 qpair failed and we were unable to recover it. 00:35:14.833 [2024-11-02 11:47:14.854654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.833 [2024-11-02 11:47:14.854684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.833 qpair failed and we were unable to recover it. 00:35:14.833 [2024-11-02 11:47:14.854831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.833 [2024-11-02 11:47:14.854859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.833 qpair failed and we were unable to recover it. 
00:35:14.833 [2024-11-02 11:47:14.855005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.833 [2024-11-02 11:47:14.855050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.833 qpair failed and we were unable to recover it. 00:35:14.833 [2024-11-02 11:47:14.855198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.833 [2024-11-02 11:47:14.855228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.833 qpair failed and we were unable to recover it. 00:35:14.833 [2024-11-02 11:47:14.855407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.833 [2024-11-02 11:47:14.855434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.833 qpair failed and we were unable to recover it. 00:35:14.833 [2024-11-02 11:47:14.855554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.833 [2024-11-02 11:47:14.855580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.833 qpair failed and we were unable to recover it. 00:35:14.833 [2024-11-02 11:47:14.855776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.833 [2024-11-02 11:47:14.855806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.833 qpair failed and we were unable to recover it. 00:35:14.833 [2024-11-02 11:47:14.855938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.833 [2024-11-02 11:47:14.855967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.833 qpair failed and we were unable to recover it. 00:35:14.833 [2024-11-02 11:47:14.856090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.833 [2024-11-02 11:47:14.856120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.833 qpair failed and we were unable to recover it. 00:35:14.833 [2024-11-02 11:47:14.856291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.833 [2024-11-02 11:47:14.856318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.833 qpair failed and we were unable to recover it. 00:35:14.833 [2024-11-02 11:47:14.856441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.833 [2024-11-02 11:47:14.856468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.833 qpair failed and we were unable to recover it. 00:35:14.833 [2024-11-02 11:47:14.856646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.834 [2024-11-02 11:47:14.856690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.834 qpair failed and we were unable to recover it. 
00:35:14.834 [2024-11-02 11:47:14.856858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.834 [2024-11-02 11:47:14.856886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.834 qpair failed and we were unable to recover it. 00:35:14.834 [2024-11-02 11:47:14.857000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.834 [2024-11-02 11:47:14.857027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.834 qpair failed and we were unable to recover it. 00:35:14.834 [2024-11-02 11:47:14.857153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.834 [2024-11-02 11:47:14.857180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.834 qpair failed and we were unable to recover it. 00:35:14.834 [2024-11-02 11:47:14.857382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.834 [2024-11-02 11:47:14.857411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.834 qpair failed and we were unable to recover it. 00:35:14.834 [2024-11-02 11:47:14.857599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.834 [2024-11-02 11:47:14.857631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.834 qpair failed and we were unable to recover it. 00:35:14.834 [2024-11-02 11:47:14.857781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.834 [2024-11-02 11:47:14.857808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.834 qpair failed and we were unable to recover it. 00:35:14.834 [2024-11-02 11:47:14.857936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.834 [2024-11-02 11:47:14.857981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.834 qpair failed and we were unable to recover it. 00:35:14.834 [2024-11-02 11:47:14.858149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.834 [2024-11-02 11:47:14.858175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.834 qpair failed and we were unable to recover it. 00:35:14.834 [2024-11-02 11:47:14.858320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.834 [2024-11-02 11:47:14.858347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.834 qpair failed and we were unable to recover it. 00:35:14.834 [2024-11-02 11:47:14.858501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.834 [2024-11-02 11:47:14.858528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.834 qpair failed and we were unable to recover it. 
00:35:14.834 [2024-11-02 11:47:14.858672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.834 [2024-11-02 11:47:14.858714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.834 qpair failed and we were unable to recover it. 00:35:14.834 [2024-11-02 11:47:14.858916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.834 [2024-11-02 11:47:14.858943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.834 qpair failed and we were unable to recover it. 00:35:14.834 [2024-11-02 11:47:14.859065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.834 [2024-11-02 11:47:14.859091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.834 qpair failed and we were unable to recover it. 00:35:14.834 [2024-11-02 11:47:14.859272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.834 [2024-11-02 11:47:14.859302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.834 qpair failed and we were unable to recover it. 00:35:14.834 [2024-11-02 11:47:14.859472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.834 [2024-11-02 11:47:14.859498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.834 qpair failed and we were unable to recover it. 00:35:14.834 [2024-11-02 11:47:14.859670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.834 [2024-11-02 11:47:14.859700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.834 qpair failed and we were unable to recover it. 00:35:14.834 [2024-11-02 11:47:14.859866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.834 [2024-11-02 11:47:14.859896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.834 qpair failed and we were unable to recover it. 00:35:14.834 [2024-11-02 11:47:14.860068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.834 [2024-11-02 11:47:14.860095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.834 qpair failed and we were unable to recover it. 00:35:14.834 [2024-11-02 11:47:14.860252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.834 [2024-11-02 11:47:14.860302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.834 qpair failed and we were unable to recover it. 00:35:14.834 [2024-11-02 11:47:14.860447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.834 [2024-11-02 11:47:14.860476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.834 qpair failed and we were unable to recover it. 
00:35:14.834 [2024-11-02 11:47:14.860635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.834 [2024-11-02 11:47:14.860664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.834 qpair failed and we were unable to recover it. 00:35:14.834 [2024-11-02 11:47:14.860834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.834 [2024-11-02 11:47:14.860861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.834 qpair failed and we were unable to recover it. 00:35:14.834 [2024-11-02 11:47:14.861010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.834 [2024-11-02 11:47:14.861037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.834 qpair failed and we were unable to recover it. 00:35:14.834 [2024-11-02 11:47:14.861208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.834 [2024-11-02 11:47:14.861234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.834 qpair failed and we were unable to recover it. 00:35:14.834 [2024-11-02 11:47:14.861440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.834 [2024-11-02 11:47:14.861466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.834 qpair failed and we were unable to recover it. 00:35:14.834 [2024-11-02 11:47:14.861595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.834 [2024-11-02 11:47:14.861623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.834 qpair failed and we were unable to recover it. 00:35:14.834 [2024-11-02 11:47:14.861798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.834 [2024-11-02 11:47:14.861828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.834 qpair failed and we were unable to recover it. 00:35:14.834 [2024-11-02 11:47:14.861992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.834 [2024-11-02 11:47:14.862022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.834 qpair failed and we were unable to recover it. 00:35:14.834 [2024-11-02 11:47:14.862187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.834 [2024-11-02 11:47:14.862217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.834 qpair failed and we were unable to recover it. 00:35:14.834 [2024-11-02 11:47:14.862403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.834 [2024-11-02 11:47:14.862430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.834 qpair failed and we were unable to recover it. 
00:35:14.834 [2024-11-02 11:47:14.862552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.834 [2024-11-02 11:47:14.862595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.834 qpair failed and we were unable to recover it. 00:35:14.834 [2024-11-02 11:47:14.862768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.834 [2024-11-02 11:47:14.862800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.834 qpair failed and we were unable to recover it. 00:35:14.834 [2024-11-02 11:47:14.862924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.834 [2024-11-02 11:47:14.862951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.834 qpair failed and we were unable to recover it. 00:35:14.834 [2024-11-02 11:47:14.863088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.834 [2024-11-02 11:47:14.863115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.834 qpair failed and we were unable to recover it. 00:35:14.834 [2024-11-02 11:47:14.863227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.834 [2024-11-02 11:47:14.863266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.834 qpair failed and we were unable to recover it. 00:35:14.834 [2024-11-02 11:47:14.863420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.834 [2024-11-02 11:47:14.863448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.834 qpair failed and we were unable to recover it. 00:35:14.834 [2024-11-02 11:47:14.863652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.834 [2024-11-02 11:47:14.863681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.835 qpair failed and we were unable to recover it. 00:35:14.835 [2024-11-02 11:47:14.863828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.835 [2024-11-02 11:47:14.863856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.835 qpair failed and we were unable to recover it. 00:35:14.835 [2024-11-02 11:47:14.864008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.835 [2024-11-02 11:47:14.864035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.835 qpair failed and we were unable to recover it. 00:35:14.835 [2024-11-02 11:47:14.864186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.835 [2024-11-02 11:47:14.864213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.835 qpair failed and we were unable to recover it. 
00:35:14.835 [2024-11-02 11:47:14.864403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.835 [2024-11-02 11:47:14.864433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.835 qpair failed and we were unable to recover it. 00:35:14.835 [2024-11-02 11:47:14.864576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.835 [2024-11-02 11:47:14.864603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.835 qpair failed and we were unable to recover it. 00:35:14.835 [2024-11-02 11:47:14.864738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.835 [2024-11-02 11:47:14.864764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.835 qpair failed and we were unable to recover it. 00:35:14.835 [2024-11-02 11:47:14.864923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.835 [2024-11-02 11:47:14.864968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.835 qpair failed and we were unable to recover it. 00:35:14.835 [2024-11-02 11:47:14.865141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.835 [2024-11-02 11:47:14.865168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.835 qpair failed and we were unable to recover it. 00:35:14.835 [2024-11-02 11:47:14.865301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.835 [2024-11-02 11:47:14.865330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.835 qpair failed and we were unable to recover it. 00:35:14.835 [2024-11-02 11:47:14.865483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.835 [2024-11-02 11:47:14.865510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.835 qpair failed and we were unable to recover it. 00:35:14.835 [2024-11-02 11:47:14.865700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.835 [2024-11-02 11:47:14.865726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.835 qpair failed and we were unable to recover it. 00:35:14.835 [2024-11-02 11:47:14.865877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.835 [2024-11-02 11:47:14.865904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.835 qpair failed and we were unable to recover it. 00:35:14.835 [2024-11-02 11:47:14.866032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.835 [2024-11-02 11:47:14.866059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.835 qpair failed and we were unable to recover it. 
00:35:14.835 [2024-11-02 11:47:14.866224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.835 [2024-11-02 11:47:14.866262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.835 qpair failed and we were unable to recover it. 00:35:14.835 [2024-11-02 11:47:14.866398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.835 [2024-11-02 11:47:14.866428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.835 qpair failed and we were unable to recover it. 00:35:14.835 [2024-11-02 11:47:14.866578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.835 [2024-11-02 11:47:14.866607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.835 qpair failed and we were unable to recover it. 00:35:14.835 [2024-11-02 11:47:14.866768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.835 [2024-11-02 11:47:14.866795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.835 qpair failed and we were unable to recover it. 00:35:14.835 [2024-11-02 11:47:14.866945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.835 [2024-11-02 11:47:14.866972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.835 qpair failed and we were unable to recover it. 00:35:14.835 [2024-11-02 11:47:14.867098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.835 [2024-11-02 11:47:14.867125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.835 qpair failed and we were unable to recover it. 00:35:14.835 [2024-11-02 11:47:14.867315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.835 [2024-11-02 11:47:14.867343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.835 qpair failed and we were unable to recover it. 00:35:14.835 [2024-11-02 11:47:14.867467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.835 [2024-11-02 11:47:14.867494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.835 qpair failed and we were unable to recover it. 00:35:14.835 [2024-11-02 11:47:14.867611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.835 [2024-11-02 11:47:14.867642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.835 qpair failed and we were unable to recover it. 00:35:14.835 [2024-11-02 11:47:14.867804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.835 [2024-11-02 11:47:14.867831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.835 qpair failed and we were unable to recover it. 
00:35:14.835 [2024-11-02 11:47:14.868003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.835 [2024-11-02 11:47:14.868033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.835 qpair failed and we were unable to recover it. 00:35:14.835 [2024-11-02 11:47:14.868172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.835 [2024-11-02 11:47:14.868199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.835 qpair failed and we were unable to recover it. 00:35:14.835 [2024-11-02 11:47:14.868339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.835 [2024-11-02 11:47:14.868366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.835 qpair failed and we were unable to recover it. 00:35:14.835 [2024-11-02 11:47:14.868538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.835 [2024-11-02 11:47:14.868569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.835 qpair failed and we were unable to recover it. 00:35:14.835 [2024-11-02 11:47:14.868761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.835 [2024-11-02 11:47:14.868791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.835 qpair failed and we were unable to recover it. 00:35:14.835 [2024-11-02 11:47:14.868959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.835 [2024-11-02 11:47:14.868986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.835 qpair failed and we were unable to recover it. 00:35:14.835 [2024-11-02 11:47:14.869133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.835 [2024-11-02 11:47:14.869160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.835 qpair failed and we were unable to recover it. 00:35:14.835 [2024-11-02 11:47:14.869314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.835 [2024-11-02 11:47:14.869350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.835 qpair failed and we were unable to recover it. 00:35:14.835 [2024-11-02 11:47:14.869516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.835 [2024-11-02 11:47:14.869542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.835 qpair failed and we were unable to recover it. 00:35:14.835 [2024-11-02 11:47:14.869686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.835 [2024-11-02 11:47:14.869713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.835 qpair failed and we were unable to recover it. 
00:35:14.835 [2024-11-02 11:47:14.869863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.835 [2024-11-02 11:47:14.869890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.835 qpair failed and we were unable to recover it. 00:35:14.835 [2024-11-02 11:47:14.870033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.835 [2024-11-02 11:47:14.870060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.835 qpair failed and we were unable to recover it. 00:35:14.835 [2024-11-02 11:47:14.870229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.835 [2024-11-02 11:47:14.870272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.835 qpair failed and we were unable to recover it. 00:35:14.835 [2024-11-02 11:47:14.870447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.835 [2024-11-02 11:47:14.870473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.835 qpair failed and we were unable to recover it. 00:35:14.835 [2024-11-02 11:47:14.870608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.835 [2024-11-02 11:47:14.870634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.835 qpair failed and we were unable to recover it. 00:35:14.835 [2024-11-02 11:47:14.870803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.836 [2024-11-02 11:47:14.870833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.836 qpair failed and we were unable to recover it. 00:35:14.836 [2024-11-02 11:47:14.870974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.836 [2024-11-02 11:47:14.871003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.836 qpair failed and we were unable to recover it. 00:35:14.836 [2024-11-02 11:47:14.871186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.836 [2024-11-02 11:47:14.871215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.836 qpair failed and we were unable to recover it. 00:35:14.836 [2024-11-02 11:47:14.871375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.836 [2024-11-02 11:47:14.871403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.836 qpair failed and we were unable to recover it. 00:35:14.836 [2024-11-02 11:47:14.871528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.836 [2024-11-02 11:47:14.871574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.836 qpair failed and we were unable to recover it. 
00:35:14.836 [2024-11-02 11:47:14.871748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.836 [2024-11-02 11:47:14.871775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.836 qpair failed and we were unable to recover it. 00:35:14.836 [2024-11-02 11:47:14.871925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.836 [2024-11-02 11:47:14.871951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.836 qpair failed and we were unable to recover it. 00:35:14.836 [2024-11-02 11:47:14.872120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.836 [2024-11-02 11:47:14.872149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.836 qpair failed and we were unable to recover it. 00:35:14.836 [2024-11-02 11:47:14.872353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.836 [2024-11-02 11:47:14.872380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.836 qpair failed and we were unable to recover it. 00:35:14.836 [2024-11-02 11:47:14.872529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.836 [2024-11-02 11:47:14.872555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.836 qpair failed and we were unable to recover it. 00:35:14.836 [2024-11-02 11:47:14.872737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.836 [2024-11-02 11:47:14.872764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.836 qpair failed and we were unable to recover it. 00:35:14.836 [2024-11-02 11:47:14.872895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.836 [2024-11-02 11:47:14.872922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.836 qpair failed and we were unable to recover it. 00:35:14.836 [2024-11-02 11:47:14.873036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.836 [2024-11-02 11:47:14.873063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.836 qpair failed and we were unable to recover it. 00:35:14.836 [2024-11-02 11:47:14.873207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.836 [2024-11-02 11:47:14.873236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.836 qpair failed and we were unable to recover it. 00:35:14.836 [2024-11-02 11:47:14.873420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.836 [2024-11-02 11:47:14.873447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.836 qpair failed and we were unable to recover it. 
00:35:14.836 [2024-11-02 11:47:14.873602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.836 [2024-11-02 11:47:14.873629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.836 qpair failed and we were unable to recover it. 00:35:14.836 [2024-11-02 11:47:14.873753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.836 [2024-11-02 11:47:14.873796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.836 qpair failed and we were unable to recover it. 00:35:14.836 [2024-11-02 11:47:14.873989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.836 [2024-11-02 11:47:14.874016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.836 qpair failed and we were unable to recover it. 00:35:14.836 [2024-11-02 11:47:14.874136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.836 [2024-11-02 11:47:14.874163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.836 qpair failed and we were unable to recover it. 00:35:14.836 [2024-11-02 11:47:14.874287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.836 [2024-11-02 11:47:14.874314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.836 qpair failed and we were unable to recover it. 00:35:14.836 [2024-11-02 11:47:14.874454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.836 [2024-11-02 11:47:14.874484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.836 qpair failed and we were unable to recover it. 00:35:14.836 [2024-11-02 11:47:14.874654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.836 [2024-11-02 11:47:14.874683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.836 qpair failed and we were unable to recover it. 00:35:14.836 [2024-11-02 11:47:14.874827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.836 [2024-11-02 11:47:14.874854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.836 qpair failed and we were unable to recover it. 00:35:14.836 [2024-11-02 11:47:14.874962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.836 [2024-11-02 11:47:14.874989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.836 qpair failed and we were unable to recover it. 00:35:14.836 [2024-11-02 11:47:14.875139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.836 [2024-11-02 11:47:14.875169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.836 qpair failed and we were unable to recover it. 
00:35:14.836 [2024-11-02 11:47:14.875340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.836 [2024-11-02 11:47:14.875369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.836 qpair failed and we were unable to recover it. 00:35:14.836 [2024-11-02 11:47:14.875490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.836 [2024-11-02 11:47:14.875517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.836 qpair failed and we were unable to recover it. 00:35:14.836 [2024-11-02 11:47:14.875668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.836 [2024-11-02 11:47:14.875712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.836 qpair failed and we were unable to recover it. 00:35:14.836 [2024-11-02 11:47:14.875850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.836 [2024-11-02 11:47:14.875879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.836 qpair failed and we were unable to recover it. 00:35:14.836 [2024-11-02 11:47:14.876064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.836 [2024-11-02 11:47:14.876093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.836 qpair failed and we were unable to recover it. 00:35:14.836 [2024-11-02 11:47:14.876271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.836 [2024-11-02 11:47:14.876299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.836 qpair failed and we were unable to recover it. 00:35:14.836 [2024-11-02 11:47:14.876427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.836 [2024-11-02 11:47:14.876454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.836 qpair failed and we were unable to recover it. 00:35:14.836 [2024-11-02 11:47:14.876573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.836 [2024-11-02 11:47:14.876600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.836 qpair failed and we were unable to recover it. 00:35:14.836 [2024-11-02 11:47:14.876718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.836 [2024-11-02 11:47:14.876744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.836 qpair failed and we were unable to recover it. 00:35:14.836 [2024-11-02 11:47:14.876894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.836 [2024-11-02 11:47:14.876921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.836 qpair failed and we were unable to recover it. 
00:35:14.836 [2024-11-02 11:47:14.877051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.836 [2024-11-02 11:47:14.877077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.836 qpair failed and we were unable to recover it. 00:35:14.836 [2024-11-02 11:47:14.877233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.836 [2024-11-02 11:47:14.877311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.836 qpair failed and we were unable to recover it. 00:35:14.836 [2024-11-02 11:47:14.877487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.836 [2024-11-02 11:47:14.877514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.836 qpair failed and we were unable to recover it. 00:35:14.836 [2024-11-02 11:47:14.877697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.837 [2024-11-02 11:47:14.877723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.837 qpair failed and we were unable to recover it. 00:35:14.837 [2024-11-02 11:47:14.877857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.837 [2024-11-02 11:47:14.877901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.837 qpair failed and we were unable to recover it. 00:35:14.837 [2024-11-02 11:47:14.878056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.837 [2024-11-02 11:47:14.878098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.837 qpair failed and we were unable to recover it. 00:35:14.837 [2024-11-02 11:47:14.878213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.837 [2024-11-02 11:47:14.878240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.837 qpair failed and we were unable to recover it. 00:35:14.837 [2024-11-02 11:47:14.878375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.837 [2024-11-02 11:47:14.878402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.837 qpair failed and we were unable to recover it. 00:35:14.837 [2024-11-02 11:47:14.878577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.837 [2024-11-02 11:47:14.878604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.837 qpair failed and we were unable to recover it. 00:35:14.837 [2024-11-02 11:47:14.878786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.837 [2024-11-02 11:47:14.878815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.837 qpair failed and we were unable to recover it. 
00:35:14.837 [2024-11-02 11:47:14.878939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.837 [2024-11-02 11:47:14.878968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.837 qpair failed and we were unable to recover it. 00:35:14.837 [2024-11-02 11:47:14.879112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.837 [2024-11-02 11:47:14.879139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.837 qpair failed and we were unable to recover it. 00:35:14.837 [2024-11-02 11:47:14.879309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.837 [2024-11-02 11:47:14.879339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.837 qpair failed and we were unable to recover it. 00:35:14.837 [2024-11-02 11:47:14.879519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.837 [2024-11-02 11:47:14.879546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.837 qpair failed and we were unable to recover it. 00:35:14.837 [2024-11-02 11:47:14.879696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.837 [2024-11-02 11:47:14.879723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.837 qpair failed and we were unable to recover it. 00:35:14.837 [2024-11-02 11:47:14.879932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.837 [2024-11-02 11:47:14.879958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.837 qpair failed and we were unable to recover it. 00:35:14.837 [2024-11-02 11:47:14.880074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.837 [2024-11-02 11:47:14.880105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.837 qpair failed and we were unable to recover it. 00:35:14.837 [2024-11-02 11:47:14.880230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.837 [2024-11-02 11:47:14.880263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.837 qpair failed and we were unable to recover it. 00:35:14.837 [2024-11-02 11:47:14.880474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.837 [2024-11-02 11:47:14.880500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.837 qpair failed and we were unable to recover it. 00:35:14.837 [2024-11-02 11:47:14.880650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.837 [2024-11-02 11:47:14.880677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.837 qpair failed and we were unable to recover it. 
00:35:14.837 [2024-11-02 11:47:14.880802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.837 [2024-11-02 11:47:14.880829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.837 qpair failed and we were unable to recover it. 00:35:14.837 [2024-11-02 11:47:14.880953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.837 [2024-11-02 11:47:14.880979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.837 qpair failed and we were unable to recover it. 00:35:14.837 [2024-11-02 11:47:14.881173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.837 [2024-11-02 11:47:14.881200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.837 qpair failed and we were unable to recover it. 00:35:14.837 [2024-11-02 11:47:14.881351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.837 [2024-11-02 11:47:14.881378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.837 qpair failed and we were unable to recover it. 00:35:14.837 [2024-11-02 11:47:14.881526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.837 [2024-11-02 11:47:14.881571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.837 qpair failed and we were unable to recover it. 00:35:14.837 [2024-11-02 11:47:14.881765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.837 [2024-11-02 11:47:14.881794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.837 qpair failed and we were unable to recover it. 00:35:14.837 [2024-11-02 11:47:14.881948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.837 [2024-11-02 11:47:14.881977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.837 qpair failed and we were unable to recover it. 00:35:14.837 [2024-11-02 11:47:14.882145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.837 [2024-11-02 11:47:14.882172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.837 qpair failed and we were unable to recover it. 00:35:14.837 [2024-11-02 11:47:14.882366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.837 [2024-11-02 11:47:14.882396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.837 qpair failed and we were unable to recover it. 00:35:14.837 [2024-11-02 11:47:14.882570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.837 [2024-11-02 11:47:14.882600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.837 qpair failed and we were unable to recover it. 
00:35:14.837 [2024-11-02 11:47:14.882782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.837 [2024-11-02 11:47:14.882813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.837 qpair failed and we were unable to recover it. 00:35:14.837 [2024-11-02 11:47:14.882981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.837 [2024-11-02 11:47:14.883008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.837 qpair failed and we were unable to recover it. 00:35:14.837 [2024-11-02 11:47:14.883159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.837 [2024-11-02 11:47:14.883186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.837 qpair failed and we were unable to recover it. 00:35:14.837 [2024-11-02 11:47:14.883388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.837 [2024-11-02 11:47:14.883418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.837 qpair failed and we were unable to recover it. 00:35:14.837 [2024-11-02 11:47:14.883557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.837 [2024-11-02 11:47:14.883586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.837 qpair failed and we were unable to recover it. 00:35:14.837 [2024-11-02 11:47:14.883728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.837 [2024-11-02 11:47:14.883755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.837 qpair failed and we were unable to recover it. 00:35:14.837 [2024-11-02 11:47:14.883908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.837 [2024-11-02 11:47:14.883935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.837 qpair failed and we were unable to recover it. 00:35:14.837 [2024-11-02 11:47:14.884115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.837 [2024-11-02 11:47:14.884142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.837 qpair failed and we were unable to recover it. 00:35:14.837 [2024-11-02 11:47:14.884290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.837 [2024-11-02 11:47:14.884317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.837 qpair failed and we were unable to recover it. 00:35:14.837 [2024-11-02 11:47:14.884477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.837 [2024-11-02 11:47:14.884504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.837 qpair failed and we were unable to recover it. 
00:35:14.837 [2024-11-02 11:47:14.884653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.837 [2024-11-02 11:47:14.884699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.837 qpair failed and we were unable to recover it. 00:35:14.837 [2024-11-02 11:47:14.884893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.837 [2024-11-02 11:47:14.884920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.837 qpair failed and we were unable to recover it. 00:35:14.837 [2024-11-02 11:47:14.885052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.838 [2024-11-02 11:47:14.885097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.838 qpair failed and we were unable to recover it. 00:35:14.838 [2024-11-02 11:47:14.885279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.838 [2024-11-02 11:47:14.885310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.838 qpair failed and we were unable to recover it. 00:35:14.838 [2024-11-02 11:47:14.885452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.838 [2024-11-02 11:47:14.885482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.838 qpair failed and we were unable to recover it. 00:35:14.838 [2024-11-02 11:47:14.885616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.838 [2024-11-02 11:47:14.885646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.838 qpair failed and we were unable to recover it. 00:35:14.838 [2024-11-02 11:47:14.885790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.838 [2024-11-02 11:47:14.885820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.838 qpair failed and we were unable to recover it. 00:35:14.838 [2024-11-02 11:47:14.885960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.838 [2024-11-02 11:47:14.885987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.838 qpair failed and we were unable to recover it. 00:35:14.838 [2024-11-02 11:47:14.886100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.838 [2024-11-02 11:47:14.886126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.838 qpair failed and we were unable to recover it. 00:35:14.838 [2024-11-02 11:47:14.886268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.838 [2024-11-02 11:47:14.886298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.838 qpair failed and we were unable to recover it. 
00:35:14.838 [2024-11-02 11:47:14.886447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.838 [2024-11-02 11:47:14.886476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.838 qpair failed and we were unable to recover it. 00:35:14.838 [2024-11-02 11:47:14.886645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.838 [2024-11-02 11:47:14.886672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.838 qpair failed and we were unable to recover it. 00:35:14.838 [2024-11-02 11:47:14.886819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.838 [2024-11-02 11:47:14.886847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.838 qpair failed and we were unable to recover it. 00:35:14.838 [2024-11-02 11:47:14.887016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.838 [2024-11-02 11:47:14.887042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.838 qpair failed and we were unable to recover it. 00:35:14.838 [2024-11-02 11:47:14.887188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.838 [2024-11-02 11:47:14.887215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.838 qpair failed and we were unable to recover it. 00:35:14.838 [2024-11-02 11:47:14.887408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.838 [2024-11-02 11:47:14.887435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.838 qpair failed and we were unable to recover it. 00:35:14.838 [2024-11-02 11:47:14.887577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.838 [2024-11-02 11:47:14.887621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.838 qpair failed and we were unable to recover it. 00:35:14.838 [2024-11-02 11:47:14.887775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.838 [2024-11-02 11:47:14.887802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.838 qpair failed and we were unable to recover it. 00:35:14.838 [2024-11-02 11:47:14.887978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.838 [2024-11-02 11:47:14.888022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.838 qpair failed and we were unable to recover it. 00:35:14.838 [2024-11-02 11:47:14.888195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.838 [2024-11-02 11:47:14.888222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.838 qpair failed and we were unable to recover it. 
00:35:14.838 [2024-11-02 11:47:14.888379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.838 [2024-11-02 11:47:14.888406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.838 qpair failed and we were unable to recover it. 00:35:14.838 [2024-11-02 11:47:14.888528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.838 [2024-11-02 11:47:14.888556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.838 qpair failed and we were unable to recover it. 00:35:14.838 [2024-11-02 11:47:14.888766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.838 [2024-11-02 11:47:14.888792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.838 qpair failed and we were unable to recover it. 00:35:14.838 [2024-11-02 11:47:14.888946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.838 [2024-11-02 11:47:14.888972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.838 qpair failed and we were unable to recover it. 00:35:14.838 [2024-11-02 11:47:14.889083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.838 [2024-11-02 11:47:14.889110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.838 qpair failed and we were unable to recover it. 00:35:14.838 [2024-11-02 11:47:14.889333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.838 [2024-11-02 11:47:14.889362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.838 qpair failed and we were unable to recover it. 00:35:14.838 [2024-11-02 11:47:14.889481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.838 [2024-11-02 11:47:14.889508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.838 qpair failed and we were unable to recover it. 00:35:14.838 [2024-11-02 11:47:14.889632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.838 [2024-11-02 11:47:14.889659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.838 qpair failed and we were unable to recover it. 00:35:14.838 [2024-11-02 11:47:14.889803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.838 [2024-11-02 11:47:14.889830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.838 qpair failed and we were unable to recover it. 00:35:14.838 [2024-11-02 11:47:14.890006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.838 [2024-11-02 11:47:14.890035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.838 qpair failed and we were unable to recover it. 
00:35:14.838 [2024-11-02 11:47:14.890188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.838 [2024-11-02 11:47:14.890215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.838 qpair failed and we were unable to recover it. 00:35:14.838 [2024-11-02 11:47:14.890383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.838 [2024-11-02 11:47:14.890414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.838 qpair failed and we were unable to recover it. 00:35:14.838 [2024-11-02 11:47:14.890582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.838 [2024-11-02 11:47:14.890617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.838 qpair failed and we were unable to recover it. 00:35:14.838 [2024-11-02 11:47:14.890780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.838 [2024-11-02 11:47:14.890810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.838 qpair failed and we were unable to recover it. 00:35:14.838 [2024-11-02 11:47:14.890981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.838 [2024-11-02 11:47:14.891008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.838 qpair failed and we were unable to recover it. 00:35:14.838 [2024-11-02 11:47:14.891156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.838 [2024-11-02 11:47:14.891183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.838 qpair failed and we were unable to recover it. 00:35:14.838 [2024-11-02 11:47:14.891360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.839 [2024-11-02 11:47:14.891387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.839 qpair failed and we were unable to recover it. 00:35:14.839 [2024-11-02 11:47:14.891560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.839 [2024-11-02 11:47:14.891590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.839 qpair failed and we were unable to recover it. 00:35:14.839 [2024-11-02 11:47:14.891761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.839 [2024-11-02 11:47:14.891792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.839 qpair failed and we were unable to recover it. 00:35:14.839 [2024-11-02 11:47:14.891941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.839 [2024-11-02 11:47:14.891967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.839 qpair failed and we were unable to recover it. 
00:35:14.839 [2024-11-02 11:47:14.892117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.839 [2024-11-02 11:47:14.892163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.839 qpair failed and we were unable to recover it. 00:35:14.839 [2024-11-02 11:47:14.892350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.839 [2024-11-02 11:47:14.892381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.839 qpair failed and we were unable to recover it. 00:35:14.839 [2024-11-02 11:47:14.892541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.839 [2024-11-02 11:47:14.892570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.839 qpair failed and we were unable to recover it. 00:35:14.839 [2024-11-02 11:47:14.892725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.839 [2024-11-02 11:47:14.892752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.839 qpair failed and we were unable to recover it. 00:35:14.839 [2024-11-02 11:47:14.892907] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dea630 is same with the state(6) to be set 00:35:14.839 [2024-11-02 11:47:14.893118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.839 [2024-11-02 11:47:14.893163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:14.839 qpair failed and we were unable to recover it. 00:35:14.839 [2024-11-02 11:47:14.893305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.839 [2024-11-02 11:47:14.893339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:14.839 qpair failed and we were unable to recover it. 00:35:14.839 [2024-11-02 11:47:14.893515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.839 [2024-11-02 11:47:14.893544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:14.839 qpair failed and we were unable to recover it. 00:35:14.839 [2024-11-02 11:47:14.893698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.839 [2024-11-02 11:47:14.893726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:14.839 qpair failed and we were unable to recover it. 00:35:14.839 [2024-11-02 11:47:14.893892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.839 [2024-11-02 11:47:14.893920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:14.839 qpair failed and we were unable to recover it. 
00:35:14.839 [2024-11-02 11:47:14.894047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.839 [2024-11-02 11:47:14.894076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:14.839 qpair failed and we were unable to recover it. 00:35:14.839 [2024-11-02 11:47:14.894192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.839 [2024-11-02 11:47:14.894220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.839 qpair failed and we were unable to recover it. 00:35:14.839 [2024-11-02 11:47:14.894386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.839 [2024-11-02 11:47:14.894417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.839 qpair failed and we were unable to recover it. 00:35:14.839 [2024-11-02 11:47:14.894616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.839 [2024-11-02 11:47:14.894643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.839 qpair failed and we were unable to recover it. 00:35:14.839 [2024-11-02 11:47:14.894807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.839 [2024-11-02 11:47:14.894837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.839 qpair failed and we were unable to recover it. 00:35:14.839 [2024-11-02 11:47:14.895000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.839 [2024-11-02 11:47:14.895030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.839 qpair failed and we were unable to recover it. 00:35:14.839 [2024-11-02 11:47:14.895174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.839 [2024-11-02 11:47:14.895201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.839 qpair failed and we were unable to recover it. 00:35:14.839 [2024-11-02 11:47:14.895353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.839 [2024-11-02 11:47:14.895400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.839 qpair failed and we were unable to recover it. 00:35:14.839 [2024-11-02 11:47:14.895539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.839 [2024-11-02 11:47:14.895569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.839 qpair failed and we were unable to recover it. 00:35:14.839 [2024-11-02 11:47:14.895713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.839 [2024-11-02 11:47:14.895740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.839 qpair failed and we were unable to recover it. 
00:35:14.839 [2024-11-02 11:47:14.895867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.839 [2024-11-02 11:47:14.895894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.839 qpair failed and we were unable to recover it. 00:35:14.839 [2024-11-02 11:47:14.896017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.839 [2024-11-02 11:47:14.896044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.839 qpair failed and we were unable to recover it. 00:35:14.839 [2024-11-02 11:47:14.896167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.839 [2024-11-02 11:47:14.896193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.839 qpair failed and we were unable to recover it. 00:35:14.839 [2024-11-02 11:47:14.896315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.839 [2024-11-02 11:47:14.896360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.839 qpair failed and we were unable to recover it. 00:35:14.839 [2024-11-02 11:47:14.896513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.839 [2024-11-02 11:47:14.896541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.839 qpair failed and we were unable to recover it. 00:35:14.839 [2024-11-02 11:47:14.896695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.839 [2024-11-02 11:47:14.896721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.839 qpair failed and we were unable to recover it. 00:35:14.839 [2024-11-02 11:47:14.896895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.839 [2024-11-02 11:47:14.896922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.839 qpair failed and we were unable to recover it. 00:35:14.839 [2024-11-02 11:47:14.897099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.839 [2024-11-02 11:47:14.897144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.839 qpair failed and we were unable to recover it. 00:35:14.839 [2024-11-02 11:47:14.897314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.839 [2024-11-02 11:47:14.897341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.839 qpair failed and we were unable to recover it. 00:35:14.839 [2024-11-02 11:47:14.897489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.839 [2024-11-02 11:47:14.897516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.839 qpair failed and we were unable to recover it. 
00:35:14.839 [2024-11-02 11:47:14.897643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.839 [2024-11-02 11:47:14.897671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.839 qpair failed and we were unable to recover it. 00:35:14.839 [2024-11-02 11:47:14.897824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.839 [2024-11-02 11:47:14.897851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.839 qpair failed and we were unable to recover it. 00:35:14.839 [2024-11-02 11:47:14.898018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.839 [2024-11-02 11:47:14.898048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.839 qpair failed and we were unable to recover it. 00:35:14.839 [2024-11-02 11:47:14.898247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.840 [2024-11-02 11:47:14.898282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.840 qpair failed and we were unable to recover it. 00:35:14.840 [2024-11-02 11:47:14.898411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.840 [2024-11-02 11:47:14.898438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.840 qpair failed and we were unable to recover it. 00:35:14.840 [2024-11-02 11:47:14.898564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.840 [2024-11-02 11:47:14.898606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.840 qpair failed and we were unable to recover it. 00:35:14.840 [2024-11-02 11:47:14.898767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.840 [2024-11-02 11:47:14.898796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.840 qpair failed and we were unable to recover it. 00:35:14.840 [2024-11-02 11:47:14.898936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.840 [2024-11-02 11:47:14.898963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.840 qpair failed and we were unable to recover it. 00:35:14.840 [2024-11-02 11:47:14.899104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.840 [2024-11-02 11:47:14.899131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.840 qpair failed and we were unable to recover it. 00:35:14.840 [2024-11-02 11:47:14.899319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.840 [2024-11-02 11:47:14.899348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.840 qpair failed and we were unable to recover it. 
00:35:14.840 [2024-11-02 11:47:14.899550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:14.840 [2024-11-02 11:47:14.899577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420
00:35:14.840 qpair failed and we were unable to recover it.
00:35:14.840 [condensed: this three-line failure repeats without interruption from 11:47:14.899550 through 11:47:14.938304 (elapsed 00:35:14.840 to 00:35:14.845). Each iteration is a connect() failure with errno = 111 (ECONNREFUSED) in posix.c:1055:posix_sock_create, followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock reporting a sock connection error and "qpair failed and we were unable to recover it", for tqpair values 0x1ddc690, 0x7f9cc0000b90, 0x7f9ccc000b90, and 0x7f9cc4000b90, all targeting addr=10.0.0.2, port=4420.]
00:35:14.845 [2024-11-02 11:47:14.938461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.845 [2024-11-02 11:47:14.938495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.845 qpair failed and we were unable to recover it. 00:35:14.845 [2024-11-02 11:47:14.938740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.845 [2024-11-02 11:47:14.938769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.845 qpair failed and we were unable to recover it. 00:35:14.845 [2024-11-02 11:47:14.938922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.845 [2024-11-02 11:47:14.938969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.845 qpair failed and we were unable to recover it. 00:35:14.845 [2024-11-02 11:47:14.939168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.846 [2024-11-02 11:47:14.939198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.846 qpair failed and we were unable to recover it. 00:35:14.846 [2024-11-02 11:47:14.939339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.846 [2024-11-02 11:47:14.939367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.846 qpair failed and we were unable to recover it. 00:35:14.846 [2024-11-02 11:47:14.939483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.846 [2024-11-02 11:47:14.939510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.846 qpair failed and we were unable to recover it. 00:35:14.846 [2024-11-02 11:47:14.939719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.846 [2024-11-02 11:47:14.939749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.846 qpair failed and we were unable to recover it. 00:35:14.846 [2024-11-02 11:47:14.939878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.846 [2024-11-02 11:47:14.939908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.846 qpair failed and we were unable to recover it. 00:35:14.846 [2024-11-02 11:47:14.940067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.846 [2024-11-02 11:47:14.940097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.846 qpair failed and we were unable to recover it. 00:35:14.846 [2024-11-02 11:47:14.940235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.846 [2024-11-02 11:47:14.940273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.846 qpair failed and we were unable to recover it. 
00:35:14.846 [2024-11-02 11:47:14.940439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.846 [2024-11-02 11:47:14.940466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.846 qpair failed and we were unable to recover it. 00:35:14.846 [2024-11-02 11:47:14.940648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.846 [2024-11-02 11:47:14.940677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.846 qpair failed and we were unable to recover it. 00:35:14.846 [2024-11-02 11:47:14.940815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.846 [2024-11-02 11:47:14.940846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.846 qpair failed and we were unable to recover it. 00:35:14.846 [2024-11-02 11:47:14.941043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.846 [2024-11-02 11:47:14.941073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.846 qpair failed and we were unable to recover it. 00:35:14.846 [2024-11-02 11:47:14.941247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.846 [2024-11-02 11:47:14.941302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.846 qpair failed and we were unable to recover it. 00:35:14.846 [2024-11-02 11:47:14.941430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.846 [2024-11-02 11:47:14.941458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.846 qpair failed and we were unable to recover it. 00:35:14.846 [2024-11-02 11:47:14.941601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.846 [2024-11-02 11:47:14.941629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.846 qpair failed and we were unable to recover it. 00:35:14.846 [2024-11-02 11:47:14.941760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.846 [2024-11-02 11:47:14.941793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.846 qpair failed and we were unable to recover it. 00:35:14.846 [2024-11-02 11:47:14.942010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.846 [2024-11-02 11:47:14.942062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.846 qpair failed and we were unable to recover it. 00:35:14.846 [2024-11-02 11:47:14.942231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.846 [2024-11-02 11:47:14.942266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.846 qpair failed and we were unable to recover it. 
00:35:14.846 [2024-11-02 11:47:14.942420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.846 [2024-11-02 11:47:14.942449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.846 qpair failed and we were unable to recover it. 00:35:14.846 [2024-11-02 11:47:14.942629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.846 [2024-11-02 11:47:14.942659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.846 qpair failed and we were unable to recover it. 00:35:14.846 [2024-11-02 11:47:14.942822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.846 [2024-11-02 11:47:14.942852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.846 qpair failed and we were unable to recover it. 00:35:14.846 [2024-11-02 11:47:14.943044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.846 [2024-11-02 11:47:14.943094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.846 qpair failed and we were unable to recover it. 00:35:14.846 [2024-11-02 11:47:14.943318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.846 [2024-11-02 11:47:14.943358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:14.846 qpair failed and we were unable to recover it. 00:35:14.846 [2024-11-02 11:47:14.943503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.846 [2024-11-02 11:47:14.943545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.846 qpair failed and we were unable to recover it. 00:35:14.846 [2024-11-02 11:47:14.943730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.846 [2024-11-02 11:47:14.943766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.846 qpair failed and we were unable to recover it. 00:35:14.846 [2024-11-02 11:47:14.943953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.846 [2024-11-02 11:47:14.944002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.846 qpair failed and we were unable to recover it. 00:35:14.846 [2024-11-02 11:47:14.944152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.846 [2024-11-02 11:47:14.944179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.846 qpair failed and we were unable to recover it. 00:35:14.846 [2024-11-02 11:47:14.944325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.846 [2024-11-02 11:47:14.944353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.846 qpair failed and we were unable to recover it. 
00:35:14.846 [2024-11-02 11:47:14.944501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.846 [2024-11-02 11:47:14.944528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.846 qpair failed and we were unable to recover it. 00:35:14.846 [2024-11-02 11:47:14.944673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.846 [2024-11-02 11:47:14.944723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.846 qpair failed and we were unable to recover it. 00:35:14.846 [2024-11-02 11:47:14.944924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.846 [2024-11-02 11:47:14.944954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.846 qpair failed and we were unable to recover it. 00:35:14.846 [2024-11-02 11:47:14.945102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.846 [2024-11-02 11:47:14.945130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.846 qpair failed and we were unable to recover it. 00:35:14.846 [2024-11-02 11:47:14.945301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.846 [2024-11-02 11:47:14.945330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.846 qpair failed and we were unable to recover it. 00:35:14.846 [2024-11-02 11:47:14.945462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.846 [2024-11-02 11:47:14.945491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.846 qpair failed and we were unable to recover it. 00:35:14.846 [2024-11-02 11:47:14.945617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.846 [2024-11-02 11:47:14.945651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.846 qpair failed and we were unable to recover it. 00:35:14.846 [2024-11-02 11:47:14.945777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.846 [2024-11-02 11:47:14.945804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.846 qpair failed and we were unable to recover it. 00:35:14.846 [2024-11-02 11:47:14.945949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.846 [2024-11-02 11:47:14.945976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.846 qpair failed and we were unable to recover it. 00:35:14.846 [2024-11-02 11:47:14.946118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.846 [2024-11-02 11:47:14.946144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.846 qpair failed and we were unable to recover it. 
00:35:14.846 [2024-11-02 11:47:14.946314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.846 [2024-11-02 11:47:14.946344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.846 qpair failed and we were unable to recover it. 00:35:14.846 [2024-11-02 11:47:14.946526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.847 [2024-11-02 11:47:14.946573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.847 qpair failed and we were unable to recover it. 00:35:14.847 [2024-11-02 11:47:14.946746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.847 [2024-11-02 11:47:14.946792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.847 qpair failed and we were unable to recover it. 00:35:14.847 [2024-11-02 11:47:14.946979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.847 [2024-11-02 11:47:14.947009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.847 qpair failed and we were unable to recover it. 00:35:14.847 [2024-11-02 11:47:14.947198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.847 [2024-11-02 11:47:14.947225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.847 qpair failed and we were unable to recover it. 00:35:14.847 [2024-11-02 11:47:14.947382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.847 [2024-11-02 11:47:14.947427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.847 qpair failed and we were unable to recover it. 00:35:14.847 [2024-11-02 11:47:14.947562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.847 [2024-11-02 11:47:14.947606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.847 qpair failed and we were unable to recover it. 00:35:14.847 [2024-11-02 11:47:14.947772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.847 [2024-11-02 11:47:14.947807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.847 qpair failed and we were unable to recover it. 00:35:14.847 [2024-11-02 11:47:14.947987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.847 [2024-11-02 11:47:14.948014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.847 qpair failed and we were unable to recover it. 00:35:14.847 [2024-11-02 11:47:14.948189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.847 [2024-11-02 11:47:14.948216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.847 qpair failed and we were unable to recover it. 
00:35:14.847 [2024-11-02 11:47:14.948369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.847 [2024-11-02 11:47:14.948402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.847 qpair failed and we were unable to recover it. 00:35:14.847 [2024-11-02 11:47:14.948542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.847 [2024-11-02 11:47:14.948572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.847 qpair failed and we were unable to recover it. 00:35:14.847 [2024-11-02 11:47:14.948742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.847 [2024-11-02 11:47:14.948772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.847 qpair failed and we were unable to recover it. 00:35:14.847 [2024-11-02 11:47:14.948902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.847 [2024-11-02 11:47:14.948932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.847 qpair failed and we were unable to recover it. 00:35:14.847 [2024-11-02 11:47:14.949099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.847 [2024-11-02 11:47:14.949130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.847 qpair failed and we were unable to recover it. 00:35:14.847 [2024-11-02 11:47:14.949297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.847 [2024-11-02 11:47:14.949338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:14.847 qpair failed and we were unable to recover it. 00:35:14.847 [2024-11-02 11:47:14.949497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.847 [2024-11-02 11:47:14.949529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:14.847 qpair failed and we were unable to recover it. 00:35:14.847 [2024-11-02 11:47:14.949671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.847 [2024-11-02 11:47:14.949702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:14.847 qpair failed and we were unable to recover it. 00:35:14.847 [2024-11-02 11:47:14.949887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.847 [2024-11-02 11:47:14.949919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:14.847 qpair failed and we were unable to recover it. 00:35:14.847 [2024-11-02 11:47:14.950082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.847 [2024-11-02 11:47:14.950112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.847 qpair failed and we were unable to recover it. 
00:35:14.847 [2024-11-02 11:47:14.950263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.847 [2024-11-02 11:47:14.950292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.847 qpair failed and we were unable to recover it. 00:35:14.847 [2024-11-02 11:47:14.950449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.847 [2024-11-02 11:47:14.950493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.847 qpair failed and we were unable to recover it. 00:35:14.847 [2024-11-02 11:47:14.950682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.847 [2024-11-02 11:47:14.950728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.847 qpair failed and we were unable to recover it. 00:35:14.847 [2024-11-02 11:47:14.950892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.847 [2024-11-02 11:47:14.950924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.847 qpair failed and we were unable to recover it. 00:35:14.847 [2024-11-02 11:47:14.951088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.847 [2024-11-02 11:47:14.951116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.847 qpair failed and we were unable to recover it. 00:35:14.847 [2024-11-02 11:47:14.951268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.847 [2024-11-02 11:47:14.951297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.847 qpair failed and we were unable to recover it. 00:35:14.847 [2024-11-02 11:47:14.951433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.847 [2024-11-02 11:47:14.951479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.847 qpair failed and we were unable to recover it. 00:35:14.847 [2024-11-02 11:47:14.951678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.847 [2024-11-02 11:47:14.951725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.847 qpair failed and we were unable to recover it. 00:35:14.847 [2024-11-02 11:47:14.951932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.847 [2024-11-02 11:47:14.951986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.847 qpair failed and we were unable to recover it. 00:35:14.847 [2024-11-02 11:47:14.952112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.847 [2024-11-02 11:47:14.952140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.847 qpair failed and we were unable to recover it. 
00:35:14.847 [2024-11-02 11:47:14.952334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.847 [2024-11-02 11:47:14.952381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.847 qpair failed and we were unable to recover it. 00:35:14.847 [2024-11-02 11:47:14.952500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.847 [2024-11-02 11:47:14.952528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.847 qpair failed and we were unable to recover it. 00:35:14.847 [2024-11-02 11:47:14.952673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.847 [2024-11-02 11:47:14.952700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.847 qpair failed and we were unable to recover it. 00:35:14.847 [2024-11-02 11:47:14.952872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.847 [2024-11-02 11:47:14.952899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.847 qpair failed and we were unable to recover it. 00:35:14.847 [2024-11-02 11:47:14.953021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.847 [2024-11-02 11:47:14.953048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.847 qpair failed and we were unable to recover it. 00:35:14.847 [2024-11-02 11:47:14.953201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.848 [2024-11-02 11:47:14.953229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.848 qpair failed and we were unable to recover it. 00:35:14.848 [2024-11-02 11:47:14.953420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.848 [2024-11-02 11:47:14.953453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.848 qpair failed and we were unable to recover it. 00:35:14.848 [2024-11-02 11:47:14.953603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.848 [2024-11-02 11:47:14.953630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.848 qpair failed and we were unable to recover it. 00:35:14.848 [2024-11-02 11:47:14.953780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.848 [2024-11-02 11:47:14.953808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.848 qpair failed and we were unable to recover it. 00:35:14.848 [2024-11-02 11:47:14.953982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.848 [2024-11-02 11:47:14.954010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.848 qpair failed and we were unable to recover it. 
00:35:14.848 [2024-11-02 11:47:14.954156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.848 [2024-11-02 11:47:14.954184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.848 qpair failed and we were unable to recover it. 00:35:14.848 [2024-11-02 11:47:14.954325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.848 [2024-11-02 11:47:14.954371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.848 qpair failed and we were unable to recover it. 00:35:14.848 [2024-11-02 11:47:14.954523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.848 [2024-11-02 11:47:14.954567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.848 qpair failed and we were unable to recover it. 00:35:14.848 [2024-11-02 11:47:14.954741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.848 [2024-11-02 11:47:14.954787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.848 qpair failed and we were unable to recover it. 00:35:14.848 [2024-11-02 11:47:14.954940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.848 [2024-11-02 11:47:14.954967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.848 qpair failed and we were unable to recover it. 00:35:14.848 [2024-11-02 11:47:14.955110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.848 [2024-11-02 11:47:14.955136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.848 qpair failed and we were unable to recover it. 00:35:14.848 [2024-11-02 11:47:14.955297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.848 [2024-11-02 11:47:14.955329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.848 qpair failed and we were unable to recover it. 00:35:14.848 [2024-11-02 11:47:14.955535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.848 [2024-11-02 11:47:14.955565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.848 qpair failed and we were unable to recover it. 00:35:14.848 [2024-11-02 11:47:14.955758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.848 [2024-11-02 11:47:14.955803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.848 qpair failed and we were unable to recover it. 00:35:14.848 [2024-11-02 11:47:14.955954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.848 [2024-11-02 11:47:14.955982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.848 qpair failed and we were unable to recover it. 
00:35:14.848 [2024-11-02 11:47:14.956142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.848 [2024-11-02 11:47:14.956168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.848 qpair failed and we were unable to recover it. 00:35:14.848 [2024-11-02 11:47:14.956319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.848 [2024-11-02 11:47:14.956364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.848 qpair failed and we were unable to recover it. 00:35:14.848 [2024-11-02 11:47:14.956547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.848 [2024-11-02 11:47:14.956579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.848 qpair failed and we were unable to recover it. 00:35:14.848 [2024-11-02 11:47:14.956725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.848 [2024-11-02 11:47:14.956756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.848 qpair failed and we were unable to recover it. 00:35:14.848 [2024-11-02 11:47:14.956919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.848 [2024-11-02 11:47:14.956949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.848 qpair failed and we were unable to recover it. 00:35:14.848 [2024-11-02 11:47:14.957096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.848 [2024-11-02 11:47:14.957124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.848 qpair failed and we were unable to recover it. 00:35:14.848 [2024-11-02 11:47:14.957247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.848 [2024-11-02 11:47:14.957281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.848 qpair failed and we were unable to recover it. 00:35:14.848 [2024-11-02 11:47:14.957448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.848 [2024-11-02 11:47:14.957478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.848 qpair failed and we were unable to recover it. 00:35:14.848 [2024-11-02 11:47:14.957639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.848 [2024-11-02 11:47:14.957669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:14.848 qpair failed and we were unable to recover it. 00:35:14.848 [2024-11-02 11:47:14.957819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.848 [2024-11-02 11:47:14.957864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.848 qpair failed and we were unable to recover it. 
00:35:14.848 [2024-11-02 11:47:14.958081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.848 [2024-11-02 11:47:14.958129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.848 qpair failed and we were unable to recover it. 00:35:14.848 [2024-11-02 11:47:14.958279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.848 [2024-11-02 11:47:14.958308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.848 qpair failed and we were unable to recover it. 00:35:14.848 [2024-11-02 11:47:14.958486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.848 [2024-11-02 11:47:14.958533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.848 qpair failed and we were unable to recover it. 00:35:14.848 [2024-11-02 11:47:14.958736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.848 [2024-11-02 11:47:14.958781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.848 qpair failed and we were unable to recover it. 00:35:14.848 [2024-11-02 11:47:14.958918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.848 [2024-11-02 11:47:14.958964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.848 qpair failed and we were unable to recover it. 00:35:14.848 [2024-11-02 11:47:14.959094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.848 [2024-11-02 11:47:14.959122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.848 qpair failed and we were unable to recover it. 00:35:14.848 [2024-11-02 11:47:14.959246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.848 [2024-11-02 11:47:14.959280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.848 qpair failed and we were unable to recover it. 00:35:14.848 [2024-11-02 11:47:14.959429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.848 [2024-11-02 11:47:14.959460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.848 qpair failed and we were unable to recover it. 00:35:14.848 [2024-11-02 11:47:14.959615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.848 [2024-11-02 11:47:14.959661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.848 qpair failed and we were unable to recover it. 00:35:14.848 [2024-11-02 11:47:14.959856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.848 [2024-11-02 11:47:14.959901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.848 qpair failed and we were unable to recover it. 
00:35:14.848 [2024-11-02 11:47:14.960022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.848 [2024-11-02 11:47:14.960048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.848 qpair failed and we were unable to recover it. 00:35:14.848 [2024-11-02 11:47:14.960198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.848 [2024-11-02 11:47:14.960225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.848 qpair failed and we were unable to recover it. 00:35:14.848 [2024-11-02 11:47:14.960360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.848 [2024-11-02 11:47:14.960388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.848 qpair failed and we were unable to recover it. 00:35:14.848 [2024-11-02 11:47:14.960535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.849 [2024-11-02 11:47:14.960562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.849 qpair failed and we were unable to recover it. 00:35:14.849 [2024-11-02 11:47:14.960736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.849 [2024-11-02 11:47:14.960781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.849 qpair failed and we were unable to recover it. 00:35:14.849 [2024-11-02 11:47:14.960918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.849 [2024-11-02 11:47:14.960963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.849 qpair failed and we were unable to recover it. 00:35:14.849 [2024-11-02 11:47:14.961081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.849 [2024-11-02 11:47:14.961113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.849 qpair failed and we were unable to recover it. 00:35:14.849 [2024-11-02 11:47:14.961267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.849 [2024-11-02 11:47:14.961295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.849 qpair failed and we were unable to recover it. 00:35:14.849 [2024-11-02 11:47:14.961496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.849 [2024-11-02 11:47:14.961543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.849 qpair failed and we were unable to recover it. 00:35:14.849 [2024-11-02 11:47:14.961697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.849 [2024-11-02 11:47:14.961728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.849 qpair failed and we were unable to recover it. 
00:35:14.849 [2024-11-02 11:47:14.961897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.849 [2024-11-02 11:47:14.961925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.849 qpair failed and we were unable to recover it. 00:35:14.849 [2024-11-02 11:47:14.962079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.849 [2024-11-02 11:47:14.962106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.849 qpair failed and we were unable to recover it. 00:35:14.849 [2024-11-02 11:47:14.962234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.849 [2024-11-02 11:47:14.962269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.849 qpair failed and we were unable to recover it. 00:35:14.849 [2024-11-02 11:47:14.962419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.849 [2024-11-02 11:47:14.962464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.849 qpair failed and we were unable to recover it. 00:35:14.849 [2024-11-02 11:47:14.962646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.849 [2024-11-02 11:47:14.962689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.849 qpair failed and we were unable to recover it. 00:35:14.849 [2024-11-02 11:47:14.962836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.849 [2024-11-02 11:47:14.962864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.849 qpair failed and we were unable to recover it. 00:35:14.849 [2024-11-02 11:47:14.962991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.849 [2024-11-02 11:47:14.963018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.849 qpair failed and we were unable to recover it. 00:35:14.849 [2024-11-02 11:47:14.963166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.849 [2024-11-02 11:47:14.963192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.849 qpair failed and we were unable to recover it. 00:35:14.849 [2024-11-02 11:47:14.963340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.849 [2024-11-02 11:47:14.963387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.849 qpair failed and we were unable to recover it. 00:35:14.849 [2024-11-02 11:47:14.963563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.849 [2024-11-02 11:47:14.963612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.849 qpair failed and we were unable to recover it. 
00:35:14.849 [2024-11-02 11:47:14.963758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:14.849 [2024-11-02 11:47:14.963802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420
00:35:14.849 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it.") repeats continuously, with timestamps advancing from 11:47:14.963 to 11:47:15.005 and the failures cycling over tqpair=0x7f9cc4000b90, 0x7f9ccc000b90, and 0x1ddc690, all targeting addr=10.0.0.2, port=4420 ...]
00:35:14.855 [2024-11-02 11:47:15.005342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:14.855 [2024-11-02 11:47:15.005390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420
00:35:14.855 qpair failed and we were unable to recover it.
00:35:14.855 [2024-11-02 11:47:15.005564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.855 [2024-11-02 11:47:15.005592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.855 qpair failed and we were unable to recover it. 00:35:14.855 [2024-11-02 11:47:15.005745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.855 [2024-11-02 11:47:15.005773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.855 qpair failed and we were unable to recover it. 00:35:14.855 [2024-11-02 11:47:15.005925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.855 [2024-11-02 11:47:15.005953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.855 qpair failed and we were unable to recover it. 00:35:14.855 [2024-11-02 11:47:15.006108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.855 [2024-11-02 11:47:15.006135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.855 qpair failed and we were unable to recover it. 00:35:14.855 [2024-11-02 11:47:15.006268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.855 [2024-11-02 11:47:15.006302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.855 qpair failed and we were unable to recover it. 00:35:14.855 [2024-11-02 11:47:15.006451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.855 [2024-11-02 11:47:15.006498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.855 qpair failed and we were unable to recover it. 00:35:14.855 [2024-11-02 11:47:15.006674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.855 [2024-11-02 11:47:15.006719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.855 qpair failed and we were unable to recover it. 00:35:14.855 [2024-11-02 11:47:15.006866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.855 [2024-11-02 11:47:15.006912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.855 qpair failed and we were unable to recover it. 00:35:14.855 [2024-11-02 11:47:15.007038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.855 [2024-11-02 11:47:15.007066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.855 qpair failed and we were unable to recover it. 00:35:14.855 [2024-11-02 11:47:15.007191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.855 [2024-11-02 11:47:15.007219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.855 qpair failed and we were unable to recover it. 
00:35:14.855 [2024-11-02 11:47:15.007378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.855 [2024-11-02 11:47:15.007406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.855 qpair failed and we were unable to recover it. 00:35:14.855 [2024-11-02 11:47:15.007529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.855 [2024-11-02 11:47:15.007557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.855 qpair failed and we were unable to recover it. 00:35:14.855 [2024-11-02 11:47:15.007731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.855 [2024-11-02 11:47:15.007758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.855 qpair failed and we were unable to recover it. 00:35:14.855 [2024-11-02 11:47:15.007887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.855 [2024-11-02 11:47:15.007915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.855 qpair failed and we were unable to recover it. 00:35:14.855 [2024-11-02 11:47:15.008069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.855 [2024-11-02 11:47:15.008097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.855 qpair failed and we were unable to recover it. 00:35:14.855 [2024-11-02 11:47:15.008249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.855 [2024-11-02 11:47:15.008287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.855 qpair failed and we were unable to recover it. 00:35:14.855 [2024-11-02 11:47:15.008421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.855 [2024-11-02 11:47:15.008466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.855 qpair failed and we were unable to recover it. 00:35:14.855 [2024-11-02 11:47:15.008616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.855 [2024-11-02 11:47:15.008660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.855 qpair failed and we were unable to recover it. 00:35:14.855 [2024-11-02 11:47:15.008861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.855 [2024-11-02 11:47:15.008906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.855 qpair failed and we were unable to recover it. 00:35:14.855 [2024-11-02 11:47:15.009057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.855 [2024-11-02 11:47:15.009085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.855 qpair failed and we were unable to recover it. 
00:35:14.855 [2024-11-02 11:47:15.009268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.855 [2024-11-02 11:47:15.009296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.855 qpair failed and we were unable to recover it. 00:35:14.855 [2024-11-02 11:47:15.009497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.855 [2024-11-02 11:47:15.009527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.855 qpair failed and we were unable to recover it. 00:35:14.855 [2024-11-02 11:47:15.009712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.855 [2024-11-02 11:47:15.009755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.855 qpair failed and we were unable to recover it. 00:35:14.855 [2024-11-02 11:47:15.009957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.855 [2024-11-02 11:47:15.010011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.855 qpair failed and we were unable to recover it. 00:35:14.855 [2024-11-02 11:47:15.010156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.855 [2024-11-02 11:47:15.010184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.855 qpair failed and we were unable to recover it. 00:35:14.855 [2024-11-02 11:47:15.010316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.855 [2024-11-02 11:47:15.010345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.855 qpair failed and we were unable to recover it. 00:35:14.855 [2024-11-02 11:47:15.010522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.855 [2024-11-02 11:47:15.010567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.855 qpair failed and we were unable to recover it. 00:35:14.855 [2024-11-02 11:47:15.010739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.855 [2024-11-02 11:47:15.010785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.855 qpair failed and we were unable to recover it. 00:35:14.855 [2024-11-02 11:47:15.010926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.855 [2024-11-02 11:47:15.010958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.855 qpair failed and we were unable to recover it. 00:35:14.855 [2024-11-02 11:47:15.011112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.855 [2024-11-02 11:47:15.011140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.855 qpair failed and we were unable to recover it. 
00:35:14.855 [2024-11-02 11:47:15.011338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.855 [2024-11-02 11:47:15.011370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.855 qpair failed and we were unable to recover it. 00:35:14.855 [2024-11-02 11:47:15.011507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.855 [2024-11-02 11:47:15.011535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.855 qpair failed and we were unable to recover it. 00:35:14.855 [2024-11-02 11:47:15.011685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.855 [2024-11-02 11:47:15.011713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.855 qpair failed and we were unable to recover it. 00:35:14.855 [2024-11-02 11:47:15.011837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.855 [2024-11-02 11:47:15.011865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.855 qpair failed and we were unable to recover it. 00:35:14.855 [2024-11-02 11:47:15.012014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.855 [2024-11-02 11:47:15.012041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.855 qpair failed and we were unable to recover it. 00:35:14.855 [2024-11-02 11:47:15.012164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.855 [2024-11-02 11:47:15.012191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.855 qpair failed and we were unable to recover it. 00:35:14.855 [2024-11-02 11:47:15.012393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.856 [2024-11-02 11:47:15.012439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.856 qpair failed and we were unable to recover it. 00:35:14.856 [2024-11-02 11:47:15.012607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.856 [2024-11-02 11:47:15.012654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.856 qpair failed and we were unable to recover it. 00:35:14.856 [2024-11-02 11:47:15.012861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.856 [2024-11-02 11:47:15.012905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.856 qpair failed and we were unable to recover it. 00:35:14.856 [2024-11-02 11:47:15.013079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.856 [2024-11-02 11:47:15.013107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.856 qpair failed and we were unable to recover it. 
00:35:14.856 [2024-11-02 11:47:15.013243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.856 [2024-11-02 11:47:15.013297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.856 qpair failed and we were unable to recover it. 00:35:14.856 [2024-11-02 11:47:15.013474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.856 [2024-11-02 11:47:15.013519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.856 qpair failed and we were unable to recover it. 00:35:14.856 [2024-11-02 11:47:15.013713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.856 [2024-11-02 11:47:15.013745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.856 qpair failed and we were unable to recover it. 00:35:14.856 [2024-11-02 11:47:15.013930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.856 [2024-11-02 11:47:15.013960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.856 qpair failed and we were unable to recover it. 00:35:14.856 [2024-11-02 11:47:15.014129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.856 [2024-11-02 11:47:15.014161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.856 qpair failed and we were unable to recover it. 00:35:14.856 [2024-11-02 11:47:15.014326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.856 [2024-11-02 11:47:15.014370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.856 qpair failed and we were unable to recover it. 00:35:14.856 [2024-11-02 11:47:15.014517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.856 [2024-11-02 11:47:15.014562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.856 qpair failed and we were unable to recover it. 00:35:14.856 [2024-11-02 11:47:15.014759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.856 [2024-11-02 11:47:15.014805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.856 qpair failed and we were unable to recover it. 00:35:14.856 [2024-11-02 11:47:15.014934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.856 [2024-11-02 11:47:15.014962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.856 qpair failed and we were unable to recover it. 00:35:14.856 [2024-11-02 11:47:15.015092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.856 [2024-11-02 11:47:15.015119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.856 qpair failed and we were unable to recover it. 
00:35:14.856 [2024-11-02 11:47:15.015267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.856 [2024-11-02 11:47:15.015296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.856 qpair failed and we were unable to recover it. 00:35:14.856 [2024-11-02 11:47:15.015455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.856 [2024-11-02 11:47:15.015483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.856 qpair failed and we were unable to recover it. 00:35:14.856 [2024-11-02 11:47:15.015623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.856 [2024-11-02 11:47:15.015651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.856 qpair failed and we were unable to recover it. 00:35:14.856 [2024-11-02 11:47:15.015820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.856 [2024-11-02 11:47:15.015866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.856 qpair failed and we were unable to recover it. 00:35:14.856 [2024-11-02 11:47:15.016020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.856 [2024-11-02 11:47:15.016048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.856 qpair failed and we were unable to recover it. 00:35:14.856 [2024-11-02 11:47:15.016221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.856 [2024-11-02 11:47:15.016249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.856 qpair failed and we were unable to recover it. 00:35:14.856 [2024-11-02 11:47:15.016424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.856 [2024-11-02 11:47:15.016469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.856 qpair failed and we were unable to recover it. 00:35:14.856 [2024-11-02 11:47:15.016609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.856 [2024-11-02 11:47:15.016655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.856 qpair failed and we were unable to recover it. 00:35:14.856 [2024-11-02 11:47:15.016836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.856 [2024-11-02 11:47:15.016882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.856 qpair failed and we were unable to recover it. 00:35:14.856 [2024-11-02 11:47:15.017028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.856 [2024-11-02 11:47:15.017056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.856 qpair failed and we were unable to recover it. 
00:35:14.856 [2024-11-02 11:47:15.017232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.856 [2024-11-02 11:47:15.017268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.856 qpair failed and we were unable to recover it. 00:35:14.856 [2024-11-02 11:47:15.017436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.856 [2024-11-02 11:47:15.017482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.856 qpair failed and we were unable to recover it. 00:35:14.856 [2024-11-02 11:47:15.017660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.856 [2024-11-02 11:47:15.017705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.856 qpair failed and we were unable to recover it. 00:35:14.856 [2024-11-02 11:47:15.017902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.856 [2024-11-02 11:47:15.017946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.856 qpair failed and we were unable to recover it. 00:35:14.856 [2024-11-02 11:47:15.018096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.856 [2024-11-02 11:47:15.018124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.856 qpair failed and we were unable to recover it. 00:35:14.856 [2024-11-02 11:47:15.018273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.856 [2024-11-02 11:47:15.018301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.856 qpair failed and we were unable to recover it. 00:35:14.856 [2024-11-02 11:47:15.018472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.856 [2024-11-02 11:47:15.018517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.856 qpair failed and we were unable to recover it. 00:35:14.856 [2024-11-02 11:47:15.018664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.856 [2024-11-02 11:47:15.018710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.856 qpair failed and we were unable to recover it. 00:35:14.856 [2024-11-02 11:47:15.018833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.856 [2024-11-02 11:47:15.018860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.856 qpair failed and we were unable to recover it. 00:35:14.856 [2024-11-02 11:47:15.019032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.856 [2024-11-02 11:47:15.019060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.856 qpair failed and we were unable to recover it. 
00:35:14.857 [2024-11-02 11:47:15.019233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.857 [2024-11-02 11:47:15.019280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.857 qpair failed and we were unable to recover it. 00:35:14.857 [2024-11-02 11:47:15.019484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.857 [2024-11-02 11:47:15.019543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.857 qpair failed and we were unable to recover it. 00:35:14.857 [2024-11-02 11:47:15.019740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.857 [2024-11-02 11:47:15.019770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.857 qpair failed and we were unable to recover it. 00:35:14.857 [2024-11-02 11:47:15.019980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.857 [2024-11-02 11:47:15.020026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.857 qpair failed and we were unable to recover it. 00:35:14.857 [2024-11-02 11:47:15.020154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.857 [2024-11-02 11:47:15.020181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.857 qpair failed and we were unable to recover it. 00:35:14.857 [2024-11-02 11:47:15.020351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.857 [2024-11-02 11:47:15.020396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.857 qpair failed and we were unable to recover it. 00:35:14.857 [2024-11-02 11:47:15.020583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.857 [2024-11-02 11:47:15.020628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.857 qpair failed and we were unable to recover it. 00:35:14.857 [2024-11-02 11:47:15.020771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.857 [2024-11-02 11:47:15.020799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.857 qpair failed and we were unable to recover it. 00:35:14.857 [2024-11-02 11:47:15.020973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.857 [2024-11-02 11:47:15.021001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.857 qpair failed and we were unable to recover it. 00:35:14.857 [2024-11-02 11:47:15.021116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.857 [2024-11-02 11:47:15.021150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.857 qpair failed and we were unable to recover it. 
00:35:14.857 [2024-11-02 11:47:15.021298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.857 [2024-11-02 11:47:15.021327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.857 qpair failed and we were unable to recover it. 00:35:14.857 [2024-11-02 11:47:15.021512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.857 [2024-11-02 11:47:15.021540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.857 qpair failed and we were unable to recover it. 00:35:14.857 [2024-11-02 11:47:15.021690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.857 [2024-11-02 11:47:15.021716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.857 qpair failed and we were unable to recover it. 00:35:14.857 [2024-11-02 11:47:15.021844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.857 [2024-11-02 11:47:15.021872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.857 qpair failed and we were unable to recover it. 00:35:14.857 [2024-11-02 11:47:15.021998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.857 [2024-11-02 11:47:15.022026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.857 qpair failed and we were unable to recover it. 00:35:14.857 [2024-11-02 11:47:15.022176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.857 [2024-11-02 11:47:15.022203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.857 qpair failed and we were unable to recover it. 00:35:14.857 [2024-11-02 11:47:15.022400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.857 [2024-11-02 11:47:15.022450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.857 qpair failed and we were unable to recover it. 00:35:14.857 [2024-11-02 11:47:15.022617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.857 [2024-11-02 11:47:15.022663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.857 qpair failed and we were unable to recover it. 00:35:14.857 [2024-11-02 11:47:15.022855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.857 [2024-11-02 11:47:15.022883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.857 qpair failed and we were unable to recover it. 00:35:14.857 [2024-11-02 11:47:15.022992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.857 [2024-11-02 11:47:15.023020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.857 qpair failed and we were unable to recover it. 
00:35:14.857 [2024-11-02 11:47:15.023201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.857 [2024-11-02 11:47:15.023230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.857 qpair failed and we were unable to recover it. 00:35:14.857 [2024-11-02 11:47:15.023423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.857 [2024-11-02 11:47:15.023471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.857 qpair failed and we were unable to recover it. 00:35:14.857 [2024-11-02 11:47:15.023681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.857 [2024-11-02 11:47:15.023725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.857 qpair failed and we were unable to recover it. 00:35:14.857 [2024-11-02 11:47:15.023919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.857 [2024-11-02 11:47:15.023965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.857 qpair failed and we were unable to recover it. 00:35:14.857 [2024-11-02 11:47:15.024083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.857 [2024-11-02 11:47:15.024123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.857 qpair failed and we were unable to recover it. 00:35:14.857 [2024-11-02 11:47:15.024301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.857 [2024-11-02 11:47:15.024333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.857 qpair failed and we were unable to recover it. 00:35:14.857 [2024-11-02 11:47:15.024554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.857 [2024-11-02 11:47:15.024599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.857 qpair failed and we were unable to recover it. 00:35:14.857 [2024-11-02 11:47:15.024747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.857 [2024-11-02 11:47:15.024792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.857 qpair failed and we were unable to recover it. 00:35:14.857 [2024-11-02 11:47:15.024949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.857 [2024-11-02 11:47:15.024977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.857 qpair failed and we were unable to recover it. 00:35:14.857 [2024-11-02 11:47:15.025129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.857 [2024-11-02 11:47:15.025156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.857 qpair failed and we were unable to recover it. 
00:35:14.857 [2024-11-02 11:47:15.025310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.857 [2024-11-02 11:47:15.025339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.857 qpair failed and we were unable to recover it. 00:35:14.857 [2024-11-02 11:47:15.025518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.857 [2024-11-02 11:47:15.025547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.857 qpair failed and we were unable to recover it. 00:35:14.857 [2024-11-02 11:47:15.025722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.857 [2024-11-02 11:47:15.025750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.857 qpair failed and we were unable to recover it. 00:35:14.857 [2024-11-02 11:47:15.025903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.857 [2024-11-02 11:47:15.025930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.857 qpair failed and we were unable to recover it. 00:35:14.857 [2024-11-02 11:47:15.026078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.857 [2024-11-02 11:47:15.026105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.857 qpair failed and we were unable to recover it. 00:35:14.857 [2024-11-02 11:47:15.026317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.857 [2024-11-02 11:47:15.026346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.857 qpair failed and we were unable to recover it. 00:35:14.857 [2024-11-02 11:47:15.026542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.857 [2024-11-02 11:47:15.026586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.857 qpair failed and we were unable to recover it. 00:35:14.857 [2024-11-02 11:47:15.026756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.857 [2024-11-02 11:47:15.026800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.857 qpair failed and we were unable to recover it. 00:35:14.857 [2024-11-02 11:47:15.026982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.858 [2024-11-02 11:47:15.027009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.858 qpair failed and we were unable to recover it. 00:35:14.858 [2024-11-02 11:47:15.027158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.858 [2024-11-02 11:47:15.027187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.858 qpair failed and we were unable to recover it. 
00:35:14.858 [2024-11-02 11:47:15.027380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.858 [2024-11-02 11:47:15.027425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.858 qpair failed and we were unable to recover it. 00:35:14.858 [2024-11-02 11:47:15.027633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.858 [2024-11-02 11:47:15.027683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.858 qpair failed and we were unable to recover it. 00:35:14.858 [2024-11-02 11:47:15.027859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.858 [2024-11-02 11:47:15.027903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.858 qpair failed and we were unable to recover it. 00:35:14.858 [2024-11-02 11:47:15.028077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.858 [2024-11-02 11:47:15.028105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.858 qpair failed and we were unable to recover it. 00:35:14.858 [2024-11-02 11:47:15.028319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.858 [2024-11-02 11:47:15.028348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.858 qpair failed and we were unable to recover it. 00:35:14.858 [2024-11-02 11:47:15.028546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.858 [2024-11-02 11:47:15.028593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.858 qpair failed and we were unable to recover it. 00:35:14.858 [2024-11-02 11:47:15.028790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.858 [2024-11-02 11:47:15.028836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.858 qpair failed and we were unable to recover it. 00:35:14.858 [2024-11-02 11:47:15.028989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.858 [2024-11-02 11:47:15.029017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.858 qpair failed and we were unable to recover it. 00:35:14.858 [2024-11-02 11:47:15.029155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.858 [2024-11-02 11:47:15.029193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.858 qpair failed and we were unable to recover it. 00:35:14.858 [2024-11-02 11:47:15.029368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.858 [2024-11-02 11:47:15.029414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.858 qpair failed and we were unable to recover it. 
00:35:14.858 [2024-11-02 11:47:15.029605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.858 [2024-11-02 11:47:15.029633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.858 qpair failed and we were unable to recover it. 00:35:14.858 [2024-11-02 11:47:15.029803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.858 [2024-11-02 11:47:15.029848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.858 qpair failed and we were unable to recover it. 00:35:14.858 [2024-11-02 11:47:15.030027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.858 [2024-11-02 11:47:15.030055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.858 qpair failed and we were unable to recover it. 00:35:14.858 [2024-11-02 11:47:15.030203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.858 [2024-11-02 11:47:15.030230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.858 qpair failed and we were unable to recover it. 00:35:14.858 [2024-11-02 11:47:15.030407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.858 [2024-11-02 11:47:15.030452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.858 qpair failed and we were unable to recover it. 00:35:14.858 [2024-11-02 11:47:15.030646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.858 [2024-11-02 11:47:15.030692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.858 qpair failed and we were unable to recover it. 00:35:14.858 [2024-11-02 11:47:15.030866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.858 [2024-11-02 11:47:15.030911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.858 qpair failed and we were unable to recover it. 00:35:14.858 [2024-11-02 11:47:15.031061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.858 [2024-11-02 11:47:15.031090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.858 qpair failed and we were unable to recover it. 00:35:14.858 [2024-11-02 11:47:15.031269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.858 [2024-11-02 11:47:15.031297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.858 qpair failed and we were unable to recover it. 00:35:14.858 [2024-11-02 11:47:15.031498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.858 [2024-11-02 11:47:15.031544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.858 qpair failed and we were unable to recover it. 
00:35:14.858 [2024-11-02 11:47:15.031696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.858 [2024-11-02 11:47:15.031749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.858 qpair failed and we were unable to recover it. 00:35:14.858 [2024-11-02 11:47:15.031919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.858 [2024-11-02 11:47:15.031968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.858 qpair failed and we were unable to recover it. 00:35:14.858 [2024-11-02 11:47:15.032091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.858 [2024-11-02 11:47:15.032119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.858 qpair failed and we were unable to recover it. 00:35:14.858 [2024-11-02 11:47:15.032315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.858 [2024-11-02 11:47:15.032347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.858 qpair failed and we were unable to recover it. 00:35:14.858 [2024-11-02 11:47:15.032535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.858 [2024-11-02 11:47:15.032581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.858 qpair failed and we were unable to recover it. 00:35:14.858 [2024-11-02 11:47:15.032757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.858 [2024-11-02 11:47:15.032807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.858 qpair failed and we were unable to recover it. 00:35:14.858 [2024-11-02 11:47:15.032953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.858 [2024-11-02 11:47:15.032982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.858 qpair failed and we were unable to recover it. 00:35:14.858 [2024-11-02 11:47:15.033102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.858 [2024-11-02 11:47:15.033130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.858 qpair failed and we were unable to recover it. 00:35:14.858 [2024-11-02 11:47:15.033300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.858 [2024-11-02 11:47:15.033332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.858 qpair failed and we were unable to recover it. 00:35:14.858 [2024-11-02 11:47:15.033522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.858 [2024-11-02 11:47:15.033570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.858 qpair failed and we were unable to recover it. 
00:35:14.858 - 00:35:14.864 [2024-11-02 11:47:15.033740 - 11:47:15.076091] (the same error pair repeats continuously over this interval: posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111, followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420; every affected qpair failed and we were unable to recover it.)
00:35:14.864 [2024-11-02 11:47:15.076275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.864 [2024-11-02 11:47:15.076303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.864 qpair failed and we were unable to recover it. 00:35:14.864 [2024-11-02 11:47:15.076477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.864 [2024-11-02 11:47:15.076523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.864 qpair failed and we were unable to recover it. 00:35:14.864 [2024-11-02 11:47:15.076695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.864 [2024-11-02 11:47:15.076739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.864 qpair failed and we were unable to recover it. 00:35:14.864 [2024-11-02 11:47:15.076891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.864 [2024-11-02 11:47:15.076923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.864 qpair failed and we were unable to recover it. 00:35:14.864 [2024-11-02 11:47:15.077097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.864 [2024-11-02 11:47:15.077125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.864 qpair failed and we were unable to recover it. 00:35:14.864 [2024-11-02 11:47:15.077373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.864 [2024-11-02 11:47:15.077429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.864 qpair failed and we were unable to recover it. 00:35:14.864 [2024-11-02 11:47:15.077608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.864 [2024-11-02 11:47:15.077652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.864 qpair failed and we were unable to recover it. 00:35:14.864 [2024-11-02 11:47:15.077810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.864 [2024-11-02 11:47:15.077855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.864 qpair failed and we were unable to recover it. 00:35:14.864 [2024-11-02 11:47:15.078049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.864 [2024-11-02 11:47:15.078076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.864 qpair failed and we were unable to recover it. 00:35:14.864 [2024-11-02 11:47:15.078222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.864 [2024-11-02 11:47:15.078282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.864 qpair failed and we were unable to recover it. 
00:35:14.864 [2024-11-02 11:47:15.078435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.864 [2024-11-02 11:47:15.078480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.864 qpair failed and we were unable to recover it. 00:35:14.864 [2024-11-02 11:47:15.078693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.864 [2024-11-02 11:47:15.078742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.864 qpair failed and we were unable to recover it. 00:35:14.864 [2024-11-02 11:47:15.078905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.864 [2024-11-02 11:47:15.078949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.864 qpair failed and we were unable to recover it. 00:35:14.864 [2024-11-02 11:47:15.079122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.864 [2024-11-02 11:47:15.079150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.864 qpair failed and we were unable to recover it. 00:35:14.864 [2024-11-02 11:47:15.079316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.864 [2024-11-02 11:47:15.079348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.864 qpair failed and we were unable to recover it. 00:35:14.864 [2024-11-02 11:47:15.079575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.864 [2024-11-02 11:47:15.079622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.864 qpair failed and we were unable to recover it. 00:35:14.864 [2024-11-02 11:47:15.079767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.864 [2024-11-02 11:47:15.079812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.864 qpair failed and we were unable to recover it. 00:35:14.864 [2024-11-02 11:47:15.079987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.864 [2024-11-02 11:47:15.080015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.864 qpair failed and we were unable to recover it. 00:35:14.864 [2024-11-02 11:47:15.080139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.864 [2024-11-02 11:47:15.080167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.864 qpair failed and we were unable to recover it. 00:35:14.864 [2024-11-02 11:47:15.080361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.864 [2024-11-02 11:47:15.080411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.864 qpair failed and we were unable to recover it. 
00:35:14.864 [2024-11-02 11:47:15.080586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.864 [2024-11-02 11:47:15.080631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.864 qpair failed and we were unable to recover it. 00:35:14.864 [2024-11-02 11:47:15.080809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.864 [2024-11-02 11:47:15.080854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.864 qpair failed and we were unable to recover it. 00:35:14.864 [2024-11-02 11:47:15.081006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.864 [2024-11-02 11:47:15.081040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.864 qpair failed and we were unable to recover it. 00:35:14.864 [2024-11-02 11:47:15.081213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.864 [2024-11-02 11:47:15.081241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.864 qpair failed and we were unable to recover it. 00:35:14.864 [2024-11-02 11:47:15.081416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.864 [2024-11-02 11:47:15.081461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.864 qpair failed and we were unable to recover it. 00:35:14.864 [2024-11-02 11:47:15.081637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.864 [2024-11-02 11:47:15.081681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.864 qpair failed and we were unable to recover it. 00:35:14.865 [2024-11-02 11:47:15.081878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.865 [2024-11-02 11:47:15.081923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.865 qpair failed and we were unable to recover it. 00:35:14.865 [2024-11-02 11:47:15.082063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.865 [2024-11-02 11:47:15.082091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.865 qpair failed and we were unable to recover it. 00:35:14.865 [2024-11-02 11:47:15.082242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.865 [2024-11-02 11:47:15.082276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.865 qpair failed and we were unable to recover it. 00:35:14.865 [2024-11-02 11:47:15.082476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.865 [2024-11-02 11:47:15.082523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.865 qpair failed and we were unable to recover it. 
00:35:14.865 [2024-11-02 11:47:15.082676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.865 [2024-11-02 11:47:15.082724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.865 qpair failed and we were unable to recover it. 00:35:14.865 [2024-11-02 11:47:15.082896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.865 [2024-11-02 11:47:15.082940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.865 qpair failed and we were unable to recover it. 00:35:14.865 [2024-11-02 11:47:15.083067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.865 [2024-11-02 11:47:15.083096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.865 qpair failed and we were unable to recover it. 00:35:14.865 [2024-11-02 11:47:15.083239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.865 [2024-11-02 11:47:15.083283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.865 qpair failed and we were unable to recover it. 00:35:14.865 [2024-11-02 11:47:15.083446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.865 [2024-11-02 11:47:15.083491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.865 qpair failed and we were unable to recover it. 00:35:14.865 [2024-11-02 11:47:15.083692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.865 [2024-11-02 11:47:15.083736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.865 qpair failed and we were unable to recover it. 00:35:14.865 [2024-11-02 11:47:15.083901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.865 [2024-11-02 11:47:15.083946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.865 qpair failed and we were unable to recover it. 00:35:14.865 [2024-11-02 11:47:15.084091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.865 [2024-11-02 11:47:15.084119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.865 qpair failed and we were unable to recover it. 00:35:14.865 [2024-11-02 11:47:15.084314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.865 [2024-11-02 11:47:15.084360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.865 qpair failed and we were unable to recover it. 00:35:14.865 [2024-11-02 11:47:15.084530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.865 [2024-11-02 11:47:15.084576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.865 qpair failed and we were unable to recover it. 
00:35:14.865 [2024-11-02 11:47:15.084743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.865 [2024-11-02 11:47:15.084787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.865 qpair failed and we were unable to recover it. 00:35:14.865 [2024-11-02 11:47:15.084962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.865 [2024-11-02 11:47:15.084988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.865 qpair failed and we were unable to recover it. 00:35:14.865 [2024-11-02 11:47:15.085160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.865 [2024-11-02 11:47:15.085187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.865 qpair failed and we were unable to recover it. 00:35:14.865 [2024-11-02 11:47:15.085354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.865 [2024-11-02 11:47:15.085399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.865 qpair failed and we were unable to recover it. 00:35:14.865 [2024-11-02 11:47:15.085565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.865 [2024-11-02 11:47:15.085610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.865 qpair failed and we were unable to recover it. 00:35:14.865 [2024-11-02 11:47:15.085786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.865 [2024-11-02 11:47:15.085832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.865 qpair failed and we were unable to recover it. 00:35:14.865 [2024-11-02 11:47:15.085981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.865 [2024-11-02 11:47:15.086007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.865 qpair failed and we were unable to recover it. 00:35:14.865 [2024-11-02 11:47:15.086185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.865 [2024-11-02 11:47:15.086212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.865 qpair failed and we were unable to recover it. 00:35:14.865 [2024-11-02 11:47:15.086416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.865 [2024-11-02 11:47:15.086462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.865 qpair failed and we were unable to recover it. 00:35:14.865 [2024-11-02 11:47:15.086667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.865 [2024-11-02 11:47:15.086712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.865 qpair failed and we were unable to recover it. 
00:35:14.865 [2024-11-02 11:47:15.086887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.865 [2024-11-02 11:47:15.086935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.865 qpair failed and we were unable to recover it. 00:35:14.865 [2024-11-02 11:47:15.087111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.865 [2024-11-02 11:47:15.087143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.865 qpair failed and we were unable to recover it. 00:35:14.865 [2024-11-02 11:47:15.087266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.865 [2024-11-02 11:47:15.087294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.865 qpair failed and we were unable to recover it. 00:35:14.865 [2024-11-02 11:47:15.087490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.865 [2024-11-02 11:47:15.087534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.865 qpair failed and we were unable to recover it. 00:35:14.865 [2024-11-02 11:47:15.087677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.865 [2024-11-02 11:47:15.087722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.865 qpair failed and we were unable to recover it. 00:35:14.865 [2024-11-02 11:47:15.087863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.865 [2024-11-02 11:47:15.087906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.865 qpair failed and we were unable to recover it. 00:35:14.865 [2024-11-02 11:47:15.088080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.865 [2024-11-02 11:47:15.088107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.865 qpair failed and we were unable to recover it. 00:35:14.865 [2024-11-02 11:47:15.088252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.865 [2024-11-02 11:47:15.088298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.865 qpair failed and we were unable to recover it. 00:35:14.865 [2024-11-02 11:47:15.088476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.865 [2024-11-02 11:47:15.088522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.865 qpair failed and we were unable to recover it. 00:35:14.865 [2024-11-02 11:47:15.088720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.865 [2024-11-02 11:47:15.088766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.865 qpair failed and we were unable to recover it. 
00:35:14.865 [2024-11-02 11:47:15.088916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.865 [2024-11-02 11:47:15.088944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.865 qpair failed and we were unable to recover it. 00:35:14.865 [2024-11-02 11:47:15.089092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.865 [2024-11-02 11:47:15.089120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.865 qpair failed and we were unable to recover it. 00:35:14.865 [2024-11-02 11:47:15.089270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.865 [2024-11-02 11:47:15.089298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.865 qpair failed and we were unable to recover it. 00:35:14.865 [2024-11-02 11:47:15.089504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.865 [2024-11-02 11:47:15.089549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.865 qpair failed and we were unable to recover it. 00:35:14.865 [2024-11-02 11:47:15.089712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.865 [2024-11-02 11:47:15.089759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.865 qpair failed and we were unable to recover it. 00:35:14.865 [2024-11-02 11:47:15.089964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.865 [2024-11-02 11:47:15.090009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.865 qpair failed and we were unable to recover it. 00:35:14.866 [2024-11-02 11:47:15.090158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.866 [2024-11-02 11:47:15.090185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.866 qpair failed and we were unable to recover it. 00:35:14.866 [2024-11-02 11:47:15.090348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.866 [2024-11-02 11:47:15.090392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.866 qpair failed and we were unable to recover it. 00:35:14.866 [2024-11-02 11:47:15.090534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.866 [2024-11-02 11:47:15.090578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.866 qpair failed and we were unable to recover it. 00:35:14.866 [2024-11-02 11:47:15.090753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.866 [2024-11-02 11:47:15.090797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.866 qpair failed and we were unable to recover it. 
00:35:14.866 [2024-11-02 11:47:15.090941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.866 [2024-11-02 11:47:15.090969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.866 qpair failed and we were unable to recover it. 00:35:14.866 [2024-11-02 11:47:15.091129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.866 [2024-11-02 11:47:15.091156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.866 qpair failed and we were unable to recover it. 00:35:14.866 [2024-11-02 11:47:15.091318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.866 [2024-11-02 11:47:15.091349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.866 qpair failed and we were unable to recover it. 00:35:14.866 [2024-11-02 11:47:15.091563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.866 [2024-11-02 11:47:15.091607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.866 qpair failed and we were unable to recover it. 00:35:14.866 [2024-11-02 11:47:15.091735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.866 [2024-11-02 11:47:15.091779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.866 qpair failed and we were unable to recover it. 00:35:14.866 [2024-11-02 11:47:15.091954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.866 [2024-11-02 11:47:15.091982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.866 qpair failed and we were unable to recover it. 00:35:14.866 [2024-11-02 11:47:15.092158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.866 [2024-11-02 11:47:15.092184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.866 qpair failed and we were unable to recover it. 00:35:14.866 [2024-11-02 11:47:15.092348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.866 [2024-11-02 11:47:15.092379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.866 qpair failed and we were unable to recover it. 00:35:14.866 [2024-11-02 11:47:15.092573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.866 [2024-11-02 11:47:15.092622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.866 qpair failed and we were unable to recover it. 00:35:14.866 [2024-11-02 11:47:15.092814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.866 [2024-11-02 11:47:15.092844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.866 qpair failed and we were unable to recover it. 
00:35:14.866 [2024-11-02 11:47:15.093033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.866 [2024-11-02 11:47:15.093061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.866 qpair failed and we were unable to recover it. 00:35:14.866 [2024-11-02 11:47:15.093206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.866 [2024-11-02 11:47:15.093234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.866 qpair failed and we were unable to recover it. 00:35:14.866 [2024-11-02 11:47:15.093452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.866 [2024-11-02 11:47:15.093497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.866 qpair failed and we were unable to recover it. 00:35:14.866 [2024-11-02 11:47:15.093695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.866 [2024-11-02 11:47:15.093726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.866 qpair failed and we were unable to recover it. 00:35:14.866 [2024-11-02 11:47:15.093913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.866 [2024-11-02 11:47:15.093943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.866 qpair failed and we were unable to recover it. 00:35:14.866 [2024-11-02 11:47:15.094128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.866 [2024-11-02 11:47:15.094185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.866 qpair failed and we were unable to recover it. 00:35:14.866 [2024-11-02 11:47:15.094361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.866 [2024-11-02 11:47:15.094390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.866 qpair failed and we were unable to recover it. 00:35:14.866 [2024-11-02 11:47:15.094519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.866 [2024-11-02 11:47:15.094563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.866 qpair failed and we were unable to recover it. 00:35:14.866 [2024-11-02 11:47:15.094779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.866 [2024-11-02 11:47:15.094809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.866 qpair failed and we were unable to recover it. 00:35:14.866 [2024-11-02 11:47:15.094951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.866 [2024-11-02 11:47:15.094981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.866 qpair failed and we were unable to recover it. 
00:35:14.866 [2024-11-02 11:47:15.095145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.866 [2024-11-02 11:47:15.095174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.866 qpair failed and we were unable to recover it. 00:35:14.866 [2024-11-02 11:47:15.095355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.866 [2024-11-02 11:47:15.095383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.866 qpair failed and we were unable to recover it. 00:35:14.866 [2024-11-02 11:47:15.095539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.866 [2024-11-02 11:47:15.095582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.866 qpair failed and we were unable to recover it. 00:35:14.866 [2024-11-02 11:47:15.095711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.866 [2024-11-02 11:47:15.095742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.866 qpair failed and we were unable to recover it. 00:35:14.866 [2024-11-02 11:47:15.095872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.866 [2024-11-02 11:47:15.095902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.866 qpair failed and we were unable to recover it. 00:35:14.866 [2024-11-02 11:47:15.096089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.866 [2024-11-02 11:47:15.096118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.866 qpair failed and we were unable to recover it. 00:35:14.866 [2024-11-02 11:47:15.096319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.866 [2024-11-02 11:47:15.096346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.866 qpair failed and we were unable to recover it. 00:35:14.866 [2024-11-02 11:47:15.096579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.866 [2024-11-02 11:47:15.096635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.866 qpair failed and we were unable to recover it. 00:35:14.866 [2024-11-02 11:47:15.096796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.866 [2024-11-02 11:47:15.096827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.866 qpair failed and we were unable to recover it. 00:35:14.866 [2024-11-02 11:47:15.097017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.866 [2024-11-02 11:47:15.097047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.866 qpair failed and we were unable to recover it. 
00:35:14.866 [2024-11-02 11:47:15.097231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.866 [2024-11-02 11:47:15.097282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.866 qpair failed and we were unable to recover it. 00:35:14.866 [2024-11-02 11:47:15.097438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.866 [2024-11-02 11:47:15.097467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.866 qpair failed and we were unable to recover it. 00:35:14.866 [2024-11-02 11:47:15.097666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.866 [2024-11-02 11:47:15.097711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.866 qpair failed and we were unable to recover it. 00:35:14.867 [2024-11-02 11:47:15.097911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.867 [2024-11-02 11:47:15.097956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.867 qpair failed and we were unable to recover it. 00:35:14.867 [2024-11-02 11:47:15.098101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.867 [2024-11-02 11:47:15.098129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.867 qpair failed and we were unable to recover it. 00:35:14.867 [2024-11-02 11:47:15.098303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.867 [2024-11-02 11:47:15.098334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.867 qpair failed and we were unable to recover it. 00:35:14.867 [2024-11-02 11:47:15.098557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.867 [2024-11-02 11:47:15.098602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.867 qpair failed and we were unable to recover it. 00:35:14.867 [2024-11-02 11:47:15.098800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.867 [2024-11-02 11:47:15.098830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.867 qpair failed and we were unable to recover it. 00:35:14.867 [2024-11-02 11:47:15.099016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.867 [2024-11-02 11:47:15.099065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.867 qpair failed and we were unable to recover it. 00:35:14.867 [2024-11-02 11:47:15.099185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.867 [2024-11-02 11:47:15.099213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.867 qpair failed and we were unable to recover it. 
00:35:14.867 [2024-11-02 11:47:15.099393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.867 [2024-11-02 11:47:15.099438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.867 qpair failed and we were unable to recover it. 00:35:14.867 [2024-11-02 11:47:15.099608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.867 [2024-11-02 11:47:15.099651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.867 qpair failed and we were unable to recover it. 00:35:14.867 [2024-11-02 11:47:15.099826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.867 [2024-11-02 11:47:15.099870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.867 qpair failed and we were unable to recover it. 00:35:14.867 [2024-11-02 11:47:15.100020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.867 [2024-11-02 11:47:15.100049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.867 qpair failed and we were unable to recover it. 00:35:14.867 [2024-11-02 11:47:15.100201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.867 [2024-11-02 11:47:15.100228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.867 qpair failed and we were unable to recover it. 00:35:14.867 [2024-11-02 11:47:15.100403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.867 [2024-11-02 11:47:15.100431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.867 qpair failed and we were unable to recover it. 00:35:14.867 [2024-11-02 11:47:15.100637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.867 [2024-11-02 11:47:15.100682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.867 qpair failed and we were unable to recover it. 00:35:14.867 [2024-11-02 11:47:15.100886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.867 [2024-11-02 11:47:15.100917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.867 qpair failed and we were unable to recover it. 00:35:14.867 [2024-11-02 11:47:15.101083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.867 [2024-11-02 11:47:15.101116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.867 qpair failed and we were unable to recover it. 00:35:14.867 [2024-11-02 11:47:15.101307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.867 [2024-11-02 11:47:15.101338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.867 qpair failed and we were unable to recover it. 
00:35:14.867 [2024-11-02 11:47:15.101505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.867 [2024-11-02 11:47:15.101548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.867 qpair failed and we were unable to recover it. 00:35:14.867 [2024-11-02 11:47:15.101716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.867 [2024-11-02 11:47:15.101760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.867 qpair failed and we were unable to recover it. 00:35:14.867 [2024-11-02 11:47:15.101911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.867 [2024-11-02 11:47:15.101939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.867 qpair failed and we were unable to recover it. 00:35:14.867 [2024-11-02 11:47:15.102086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.867 [2024-11-02 11:47:15.102114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.867 qpair failed and we were unable to recover it. 00:35:14.867 [2024-11-02 11:47:15.102237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.867 [2024-11-02 11:47:15.102273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.867 qpair failed and we were unable to recover it. 00:35:14.867 [2024-11-02 11:47:15.102463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.867 [2024-11-02 11:47:15.102509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.867 qpair failed and we were unable to recover it. 00:35:14.867 [2024-11-02 11:47:15.102708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.867 [2024-11-02 11:47:15.102753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.867 qpair failed and we were unable to recover it. 00:35:14.867 [2024-11-02 11:47:15.102902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.867 [2024-11-02 11:47:15.102929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.867 qpair failed and we were unable to recover it. 00:35:14.867 [2024-11-02 11:47:15.103054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.867 [2024-11-02 11:47:15.103080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.867 qpair failed and we were unable to recover it. 00:35:14.867 [2024-11-02 11:47:15.103227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.867 [2024-11-02 11:47:15.103265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.867 qpair failed and we were unable to recover it. 
00:35:14.867 [2024-11-02 11:47:15.103434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:14.867 [2024-11-02 11:47:15.103478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420
00:35:14.867 qpair failed and we were unable to recover it.
00:35:14.867 (the same two-line connect() error repeats continuously from 11:47:15.103617 through 11:47:15.146547 for tqpair handles 0x7f9cc4000b90, 0x7f9ccc000b90, and 0x1ddc690, always targeting addr=10.0.0.2, port=4420 with errno = 111; every attempt ends with "qpair failed and we were unable to recover it.")
00:35:14.872 [2024-11-02 11:47:15.146547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:14.872 [2024-11-02 11:47:15.146574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420
00:35:14.872 qpair failed and we were unable to recover it.
00:35:14.872 [2024-11-02 11:47:15.146796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.872 [2024-11-02 11:47:15.146860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.872 qpair failed and we were unable to recover it. 00:35:14.872 [2024-11-02 11:47:15.147147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.872 [2024-11-02 11:47:15.147177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.872 qpair failed and we were unable to recover it. 00:35:14.872 [2024-11-02 11:47:15.147352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.872 [2024-11-02 11:47:15.147379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.872 qpair failed and we were unable to recover it. 00:35:14.872 [2024-11-02 11:47:15.147494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.872 [2024-11-02 11:47:15.147547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.872 qpair failed and we were unable to recover it. 00:35:14.872 [2024-11-02 11:47:15.147719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.872 [2024-11-02 11:47:15.147746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.872 qpair failed and we were unable to recover it. 00:35:14.872 [2024-11-02 11:47:15.147917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.872 [2024-11-02 11:47:15.147944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.872 qpair failed and we were unable to recover it. 00:35:14.872 [2024-11-02 11:47:15.148125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.872 [2024-11-02 11:47:15.148155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.872 qpair failed and we were unable to recover it. 00:35:14.872 [2024-11-02 11:47:15.148320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.872 [2024-11-02 11:47:15.148347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.872 qpair failed and we were unable to recover it. 00:35:14.872 [2024-11-02 11:47:15.148494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.872 [2024-11-02 11:47:15.148523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.872 qpair failed and we were unable to recover it. 00:35:14.872 [2024-11-02 11:47:15.148723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.872 [2024-11-02 11:47:15.148774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.872 qpair failed and we were unable to recover it. 
00:35:14.872 [2024-11-02 11:47:15.149006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.872 [2024-11-02 11:47:15.149072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.872 qpair failed and we were unable to recover it. 00:35:14.872 [2024-11-02 11:47:15.149229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.872 [2024-11-02 11:47:15.149265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.872 qpair failed and we were unable to recover it. 00:35:14.872 [2024-11-02 11:47:15.149437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.872 [2024-11-02 11:47:15.149463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.872 qpair failed and we were unable to recover it. 00:35:14.872 [2024-11-02 11:47:15.149622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.872 [2024-11-02 11:47:15.149650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.872 qpair failed and we were unable to recover it. 00:35:14.872 [2024-11-02 11:47:15.149833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.872 [2024-11-02 11:47:15.149863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.872 qpair failed and we were unable to recover it. 00:35:14.872 [2024-11-02 11:47:15.150032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.872 [2024-11-02 11:47:15.150062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.872 qpair failed and we were unable to recover it. 00:35:14.872 [2024-11-02 11:47:15.150250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.872 [2024-11-02 11:47:15.150283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.872 qpair failed and we were unable to recover it. 00:35:14.873 [2024-11-02 11:47:15.150435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.873 [2024-11-02 11:47:15.150462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.873 qpair failed and we were unable to recover it. 00:35:14.873 [2024-11-02 11:47:15.150639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.873 [2024-11-02 11:47:15.150668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.873 qpair failed and we were unable to recover it. 00:35:14.873 [2024-11-02 11:47:15.150836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.873 [2024-11-02 11:47:15.150864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.873 qpair failed and we were unable to recover it. 
00:35:14.873 [2024-11-02 11:47:15.151031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.873 [2024-11-02 11:47:15.151061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.873 qpair failed and we were unable to recover it. 00:35:14.873 [2024-11-02 11:47:15.151266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.873 [2024-11-02 11:47:15.151305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.873 qpair failed and we were unable to recover it. 00:35:14.873 [2024-11-02 11:47:15.151457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.873 [2024-11-02 11:47:15.151484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.873 qpair failed and we were unable to recover it. 00:35:14.873 [2024-11-02 11:47:15.151663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.873 [2024-11-02 11:47:15.151693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.873 qpair failed and we were unable to recover it. 00:35:14.873 [2024-11-02 11:47:15.151855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.873 [2024-11-02 11:47:15.151885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.873 qpair failed and we were unable to recover it. 00:35:14.873 [2024-11-02 11:47:15.152050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.873 [2024-11-02 11:47:15.152079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.873 qpair failed and we were unable to recover it. 00:35:14.873 [2024-11-02 11:47:15.152252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.873 [2024-11-02 11:47:15.152291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.873 qpair failed and we were unable to recover it. 00:35:14.873 [2024-11-02 11:47:15.152419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.873 [2024-11-02 11:47:15.152446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.873 qpair failed and we were unable to recover it. 00:35:14.873 [2024-11-02 11:47:15.152594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.873 [2024-11-02 11:47:15.152621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.873 qpair failed and we were unable to recover it. 00:35:14.873 [2024-11-02 11:47:15.152784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.873 [2024-11-02 11:47:15.152813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.873 qpair failed and we were unable to recover it. 
00:35:14.873 [2024-11-02 11:47:15.153029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.873 [2024-11-02 11:47:15.153082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.873 qpair failed and we were unable to recover it. 00:35:14.873 [2024-11-02 11:47:15.153273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.873 [2024-11-02 11:47:15.153312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.873 qpair failed and we were unable to recover it. 00:35:14.873 [2024-11-02 11:47:15.153481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.873 [2024-11-02 11:47:15.153519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.873 qpair failed and we were unable to recover it. 00:35:14.873 [2024-11-02 11:47:15.153702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.873 [2024-11-02 11:47:15.153778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.873 qpair failed and we were unable to recover it. 00:35:14.873 [2024-11-02 11:47:15.153995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.873 [2024-11-02 11:47:15.154048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.873 qpair failed and we were unable to recover it. 00:35:14.873 [2024-11-02 11:47:15.154192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.873 [2024-11-02 11:47:15.154219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.873 qpair failed and we were unable to recover it. 00:35:14.873 [2024-11-02 11:47:15.154381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.873 [2024-11-02 11:47:15.154409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.873 qpair failed and we were unable to recover it. 00:35:14.873 [2024-11-02 11:47:15.154584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.873 [2024-11-02 11:47:15.154611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.873 qpair failed and we were unable to recover it. 00:35:14.873 [2024-11-02 11:47:15.154780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.873 [2024-11-02 11:47:15.154809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.873 qpair failed and we were unable to recover it. 00:35:14.873 [2024-11-02 11:47:15.155063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.873 [2024-11-02 11:47:15.155115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.873 qpair failed and we were unable to recover it. 
00:35:14.873 [2024-11-02 11:47:15.155298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.873 [2024-11-02 11:47:15.155325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.873 qpair failed and we were unable to recover it. 00:35:14.873 [2024-11-02 11:47:15.155498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.873 [2024-11-02 11:47:15.155525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.873 qpair failed and we were unable to recover it. 00:35:14.873 [2024-11-02 11:47:15.155810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.873 [2024-11-02 11:47:15.155862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.873 qpair failed and we were unable to recover it. 00:35:14.873 [2024-11-02 11:47:15.156052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.873 [2024-11-02 11:47:15.156081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.873 qpair failed and we were unable to recover it. 00:35:14.873 [2024-11-02 11:47:15.156282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.873 [2024-11-02 11:47:15.156317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.873 qpair failed and we were unable to recover it. 00:35:14.873 [2024-11-02 11:47:15.156464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.873 [2024-11-02 11:47:15.156490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.873 qpair failed and we were unable to recover it. 00:35:14.873 [2024-11-02 11:47:15.156638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.873 [2024-11-02 11:47:15.156664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.873 qpair failed and we were unable to recover it. 00:35:14.873 [2024-11-02 11:47:15.156817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.873 [2024-11-02 11:47:15.156844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.873 qpair failed and we were unable to recover it. 00:35:14.873 [2024-11-02 11:47:15.156986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.873 [2024-11-02 11:47:15.157013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.873 qpair failed and we were unable to recover it. 00:35:14.873 [2024-11-02 11:47:15.157187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.873 [2024-11-02 11:47:15.157213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.873 qpair failed and we were unable to recover it. 
00:35:14.873 [2024-11-02 11:47:15.157363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.873 [2024-11-02 11:47:15.157393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.873 qpair failed and we were unable to recover it. 00:35:14.873 [2024-11-02 11:47:15.157555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.873 [2024-11-02 11:47:15.157585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.873 qpair failed and we were unable to recover it. 00:35:14.873 [2024-11-02 11:47:15.157753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.873 [2024-11-02 11:47:15.157780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.873 qpair failed and we were unable to recover it. 00:35:14.873 [2024-11-02 11:47:15.157974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.873 [2024-11-02 11:47:15.158004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.873 qpair failed and we were unable to recover it. 00:35:14.873 [2024-11-02 11:47:15.158160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.873 [2024-11-02 11:47:15.158190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.873 qpair failed and we were unable to recover it. 00:35:14.873 [2024-11-02 11:47:15.158441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.873 [2024-11-02 11:47:15.158468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.873 qpair failed and we were unable to recover it. 00:35:14.873 [2024-11-02 11:47:15.158662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.873 [2024-11-02 11:47:15.158692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.873 qpair failed and we were unable to recover it. 00:35:14.873 [2024-11-02 11:47:15.158849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.873 [2024-11-02 11:47:15.158880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.873 qpair failed and we were unable to recover it. 00:35:14.873 [2024-11-02 11:47:15.159043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.873 [2024-11-02 11:47:15.159075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.873 qpair failed and we were unable to recover it. 00:35:14.873 [2024-11-02 11:47:15.159270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.873 [2024-11-02 11:47:15.159306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.873 qpair failed and we were unable to recover it. 
00:35:14.873 [2024-11-02 11:47:15.159457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.873 [2024-11-02 11:47:15.159488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.873 qpair failed and we were unable to recover it. 00:35:14.873 [2024-11-02 11:47:15.159683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.873 [2024-11-02 11:47:15.159713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.873 qpair failed and we were unable to recover it. 00:35:14.873 [2024-11-02 11:47:15.159883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.873 [2024-11-02 11:47:15.159911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.873 qpair failed and we were unable to recover it. 00:35:14.873 [2024-11-02 11:47:15.160079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.873 [2024-11-02 11:47:15.160109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.873 qpair failed and we were unable to recover it. 00:35:14.873 [2024-11-02 11:47:15.160286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.873 [2024-11-02 11:47:15.160322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.873 qpair failed and we were unable to recover it. 00:35:14.874 [2024-11-02 11:47:15.160485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.874 [2024-11-02 11:47:15.160523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.874 qpair failed and we were unable to recover it. 00:35:14.874 [2024-11-02 11:47:15.160648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.874 [2024-11-02 11:47:15.160692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.874 qpair failed and we were unable to recover it. 00:35:14.874 [2024-11-02 11:47:15.160856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.874 [2024-11-02 11:47:15.160885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.874 qpair failed and we were unable to recover it. 00:35:14.874 [2024-11-02 11:47:15.161061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.874 [2024-11-02 11:47:15.161088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.874 qpair failed and we were unable to recover it. 00:35:14.874 [2024-11-02 11:47:15.161240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.874 [2024-11-02 11:47:15.161295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.874 qpair failed and we were unable to recover it. 
00:35:14.874 [2024-11-02 11:47:15.161435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.874 [2024-11-02 11:47:15.161465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.874 qpair failed and we were unable to recover it. 00:35:14.874 [2024-11-02 11:47:15.161669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.874 [2024-11-02 11:47:15.161696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.874 qpair failed and we were unable to recover it. 00:35:14.874 [2024-11-02 11:47:15.161859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.874 [2024-11-02 11:47:15.161889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.874 qpair failed and we were unable to recover it. 00:35:14.874 [2024-11-02 11:47:15.162091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.874 [2024-11-02 11:47:15.162118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.874 qpair failed and we were unable to recover it. 00:35:14.874 [2024-11-02 11:47:15.162246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.874 [2024-11-02 11:47:15.162281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.874 qpair failed and we were unable to recover it. 00:35:14.874 [2024-11-02 11:47:15.162457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.874 [2024-11-02 11:47:15.162487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.874 qpair failed and we were unable to recover it. 00:35:14.874 [2024-11-02 11:47:15.162702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.874 [2024-11-02 11:47:15.162729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.874 qpair failed and we were unable to recover it. 00:35:14.874 [2024-11-02 11:47:15.162849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.874 [2024-11-02 11:47:15.162875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.874 qpair failed and we were unable to recover it. 00:35:14.874 [2024-11-02 11:47:15.163017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.874 [2024-11-02 11:47:15.163061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.874 qpair failed and we were unable to recover it. 00:35:14.874 [2024-11-02 11:47:15.163250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.874 [2024-11-02 11:47:15.163289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.874 qpair failed and we were unable to recover it. 
00:35:14.874 [2024-11-02 11:47:15.163457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.874 [2024-11-02 11:47:15.163484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.874 qpair failed and we were unable to recover it. 00:35:14.874 [2024-11-02 11:47:15.163625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.874 [2024-11-02 11:47:15.163651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.874 qpair failed and we were unable to recover it. 00:35:14.874 [2024-11-02 11:47:15.163773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.874 [2024-11-02 11:47:15.163800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.874 qpair failed and we were unable to recover it. 00:35:14.874 [2024-11-02 11:47:15.163974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.874 [2024-11-02 11:47:15.164000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.874 qpair failed and we were unable to recover it. 00:35:14.874 [2024-11-02 11:47:15.164190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.874 [2024-11-02 11:47:15.164219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.874 qpair failed and we were unable to recover it. 00:35:14.874 [2024-11-02 11:47:15.164424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.874 [2024-11-02 11:47:15.164452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.874 qpair failed and we were unable to recover it. 00:35:14.874 [2024-11-02 11:47:15.164599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.874 [2024-11-02 11:47:15.164625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.874 qpair failed and we were unable to recover it. 00:35:14.874 [2024-11-02 11:47:15.164746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.874 [2024-11-02 11:47:15.164777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.874 qpair failed and we were unable to recover it. 00:35:14.874 [2024-11-02 11:47:15.164893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.874 [2024-11-02 11:47:15.164919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.874 qpair failed and we were unable to recover it. 00:35:14.874 [2024-11-02 11:47:15.165046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.874 [2024-11-02 11:47:15.165072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.874 qpair failed and we were unable to recover it. 
00:35:14.874 [2024-11-02 11:47:15.165200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.874 [2024-11-02 11:47:15.165244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.874 qpair failed and we were unable to recover it. 00:35:14.874 [2024-11-02 11:47:15.165447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.874 [2024-11-02 11:47:15.165476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.874 qpair failed and we were unable to recover it. 00:35:14.874 [2024-11-02 11:47:15.165642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.874 [2024-11-02 11:47:15.165668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.874 qpair failed and we were unable to recover it. 00:35:14.874 [2024-11-02 11:47:15.165813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.874 [2024-11-02 11:47:15.165839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.874 qpair failed and we were unable to recover it. 00:35:14.875 [2024-11-02 11:47:15.166010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.875 [2024-11-02 11:47:15.166053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.875 qpair failed and we were unable to recover it. 00:35:14.875 [2024-11-02 11:47:15.166225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.875 [2024-11-02 11:47:15.166251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.875 qpair failed and we were unable to recover it. 00:35:14.875 [2024-11-02 11:47:15.166406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.875 [2024-11-02 11:47:15.166436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.875 qpair failed and we were unable to recover it. 00:35:14.875 [2024-11-02 11:47:15.166604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.875 [2024-11-02 11:47:15.166633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.875 qpair failed and we were unable to recover it. 00:35:14.875 [2024-11-02 11:47:15.166823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.875 [2024-11-02 11:47:15.166850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.875 qpair failed and we were unable to recover it. 00:35:14.875 [2024-11-02 11:47:15.166976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.875 [2024-11-02 11:47:15.167002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.875 qpair failed and we were unable to recover it. 
00:35:14.875 [2024-11-02 11:47:15.167149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.875 [2024-11-02 11:47:15.167175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.875 qpair failed and we were unable to recover it. 00:35:14.875 [2024-11-02 11:47:15.167347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.875 [2024-11-02 11:47:15.167389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.875 qpair failed and we were unable to recover it. 00:35:14.875 [2024-11-02 11:47:15.167581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.875 [2024-11-02 11:47:15.167610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.875 qpair failed and we were unable to recover it. 00:35:14.875 [2024-11-02 11:47:15.167763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.875 [2024-11-02 11:47:15.167791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.875 qpair failed and we were unable to recover it. 00:35:14.875 [2024-11-02 11:47:15.167962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.875 [2024-11-02 11:47:15.168006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.875 qpair failed and we were unable to recover it. 00:35:14.875 [2024-11-02 11:47:15.168133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.875 [2024-11-02 11:47:15.168160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.875 qpair failed and we were unable to recover it. 00:35:14.875 [2024-11-02 11:47:15.168306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.875 [2024-11-02 11:47:15.168334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.875 qpair failed and we were unable to recover it. 00:35:14.875 [2024-11-02 11:47:15.168484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.875 [2024-11-02 11:47:15.168512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.875 qpair failed and we were unable to recover it. 00:35:14.875 [2024-11-02 11:47:15.168666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.875 [2024-11-02 11:47:15.168694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.875 qpair failed and we were unable to recover it. 00:35:14.875 [2024-11-02 11:47:15.168875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.875 [2024-11-02 11:47:15.168903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.875 qpair failed and we were unable to recover it. 
00:35:14.875 [2024-11-02 11:47:15.169045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.875 [2024-11-02 11:47:15.169072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.875 qpair failed and we were unable to recover it. 00:35:14.875 [2024-11-02 11:47:15.169221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.875 [2024-11-02 11:47:15.169249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.875 qpair failed and we were unable to recover it. 00:35:14.875 [2024-11-02 11:47:15.169410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.875 [2024-11-02 11:47:15.169438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.875 qpair failed and we were unable to recover it. 00:35:14.875 [2024-11-02 11:47:15.169563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.875 [2024-11-02 11:47:15.169591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.875 qpair failed and we were unable to recover it. 00:35:14.875 [2024-11-02 11:47:15.169741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.875 [2024-11-02 11:47:15.169774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.875 qpair failed and we were unable to recover it. 00:35:14.875 [2024-11-02 11:47:15.169921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.875 [2024-11-02 11:47:15.169950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.875 qpair failed and we were unable to recover it. 00:35:14.875 [2024-11-02 11:47:15.170077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.875 [2024-11-02 11:47:15.170104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.875 qpair failed and we were unable to recover it. 00:35:14.875 [2024-11-02 11:47:15.170279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.875 [2024-11-02 11:47:15.170307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.875 qpair failed and we were unable to recover it. 00:35:14.875 [2024-11-02 11:47:15.170479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.875 [2024-11-02 11:47:15.170523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.875 qpair failed and we were unable to recover it. 00:35:14.875 [2024-11-02 11:47:15.170657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.875 [2024-11-02 11:47:15.170702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.875 qpair failed and we were unable to recover it. 
00:35:14.875 [2024-11-02 11:47:15.170852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.875 [2024-11-02 11:47:15.170879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.875 qpair failed and we were unable to recover it. 00:35:14.875 [2024-11-02 11:47:15.171028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.875 [2024-11-02 11:47:15.171056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.875 qpair failed and we were unable to recover it. 00:35:14.875 [2024-11-02 11:47:15.171201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.875 [2024-11-02 11:47:15.171229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.875 qpair failed and we were unable to recover it. 00:35:14.875 [2024-11-02 11:47:15.171409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.875 [2024-11-02 11:47:15.171454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.875 qpair failed and we were unable to recover it. 00:35:14.875 [2024-11-02 11:47:15.171617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.875 [2024-11-02 11:47:15.171662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.875 qpair failed and we were unable to recover it. 00:35:14.875 [2024-11-02 11:47:15.171819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.875 [2024-11-02 11:47:15.171847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.875 qpair failed and we were unable to recover it. 00:35:14.875 [2024-11-02 11:47:15.171971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.875 [2024-11-02 11:47:15.171999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.875 qpair failed and we were unable to recover it. 00:35:14.875 [2024-11-02 11:47:15.172173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.875 [2024-11-02 11:47:15.172201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:14.875 qpair failed and we were unable to recover it. 00:35:14.875 [2024-11-02 11:47:15.172387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.875 [2024-11-02 11:47:15.172432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.875 qpair failed and we were unable to recover it. 00:35:14.875 [2024-11-02 11:47:15.172627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.875 [2024-11-02 11:47:15.172659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:14.875 qpair failed and we were unable to recover it. 
00:35:14.875 [2024-11-02 11:47:15.172824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:14.876 [2024-11-02 11:47:15.172854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420
00:35:14.876 qpair failed and we were unable to recover it.
00:35:14.877 [2024-11-02 11:47:15.184306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:14.877 [2024-11-02 11:47:15.184347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420
00:35:14.877 qpair failed and we were unable to recover it.
00:35:15.171 [2024-11-02 11:47:15.206218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.171 [2024-11-02 11:47:15.206271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420
00:35:15.171 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet repeats continuously from 11:47:15.172 through 11:47:15.213 against addr=10.0.0.2, port=4420, cycling through tqpair=0x1ddc690, 0x7f9cc4000b90, and 0x7f9ccc000b90 ...]
00:35:15.172 [2024-11-02 11:47:15.213597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.172 [2024-11-02 11:47:15.213641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.172 qpair failed and we were unable to recover it. 00:35:15.172 [2024-11-02 11:47:15.213785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.172 [2024-11-02 11:47:15.213831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.172 qpair failed and we were unable to recover it. 00:35:15.172 [2024-11-02 11:47:15.213947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.172 [2024-11-02 11:47:15.213976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.172 qpair failed and we were unable to recover it. 00:35:15.172 [2024-11-02 11:47:15.214153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.172 [2024-11-02 11:47:15.214181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.172 qpair failed and we were unable to recover it. 00:35:15.172 [2024-11-02 11:47:15.214341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.172 [2024-11-02 11:47:15.214381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.172 qpair failed and we were unable to recover it. 00:35:15.172 [2024-11-02 11:47:15.214539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.172 [2024-11-02 11:47:15.214570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.172 qpair failed and we were unable to recover it. 00:35:15.172 [2024-11-02 11:47:15.214744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.172 [2024-11-02 11:47:15.214771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.172 qpair failed and we were unable to recover it. 00:35:15.172 [2024-11-02 11:47:15.214937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.172 [2024-11-02 11:47:15.214964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.172 qpair failed and we were unable to recover it. 00:35:15.172 [2024-11-02 11:47:15.215087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.172 [2024-11-02 11:47:15.215114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.172 qpair failed and we were unable to recover it. 00:35:15.172 [2024-11-02 11:47:15.215289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.172 [2024-11-02 11:47:15.215337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.172 qpair failed and we were unable to recover it. 
00:35:15.172 [2024-11-02 11:47:15.215539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.172 [2024-11-02 11:47:15.215586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.172 qpair failed and we were unable to recover it. 00:35:15.172 [2024-11-02 11:47:15.215735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.172 [2024-11-02 11:47:15.215779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.172 qpair failed and we were unable to recover it. 00:35:15.172 [2024-11-02 11:47:15.215919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.172 [2024-11-02 11:47:15.215964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.172 qpair failed and we were unable to recover it. 00:35:15.172 [2024-11-02 11:47:15.216116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.172 [2024-11-02 11:47:15.216145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.172 qpair failed and we were unable to recover it. 00:35:15.172 [2024-11-02 11:47:15.216275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.173 [2024-11-02 11:47:15.216315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.173 qpair failed and we were unable to recover it. 00:35:15.173 [2024-11-02 11:47:15.216486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.173 [2024-11-02 11:47:15.216513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.173 qpair failed and we were unable to recover it. 00:35:15.173 [2024-11-02 11:47:15.216661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.173 [2024-11-02 11:47:15.216710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.173 qpair failed and we were unable to recover it. 00:35:15.173 [2024-11-02 11:47:15.216834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.173 [2024-11-02 11:47:15.216862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.173 qpair failed and we were unable to recover it. 00:35:15.173 [2024-11-02 11:47:15.217033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.173 [2024-11-02 11:47:15.217062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.173 qpair failed and we were unable to recover it. 00:35:15.173 [2024-11-02 11:47:15.217184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.173 [2024-11-02 11:47:15.217211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.173 qpair failed and we were unable to recover it. 
00:35:15.173 [2024-11-02 11:47:15.217373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.173 [2024-11-02 11:47:15.217405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.173 qpair failed and we were unable to recover it. 00:35:15.173 [2024-11-02 11:47:15.217549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.173 [2024-11-02 11:47:15.217589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.173 qpair failed and we were unable to recover it. 00:35:15.173 [2024-11-02 11:47:15.217815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.173 [2024-11-02 11:47:15.217866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.173 qpair failed and we were unable to recover it. 00:35:15.173 [2024-11-02 11:47:15.217994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.173 [2024-11-02 11:47:15.218025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.173 qpair failed and we were unable to recover it. 00:35:15.173 [2024-11-02 11:47:15.218195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.173 [2024-11-02 11:47:15.218221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.173 qpair failed and we were unable to recover it. 00:35:15.173 [2024-11-02 11:47:15.218372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.173 [2024-11-02 11:47:15.218400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.173 qpair failed and we were unable to recover it. 00:35:15.173 [2024-11-02 11:47:15.218543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.173 [2024-11-02 11:47:15.218573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.173 qpair failed and we were unable to recover it. 00:35:15.173 [2024-11-02 11:47:15.218739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.173 [2024-11-02 11:47:15.218769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.173 qpair failed and we were unable to recover it. 00:35:15.173 [2024-11-02 11:47:15.218942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.173 [2024-11-02 11:47:15.218972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.173 qpair failed and we were unable to recover it. 00:35:15.173 [2024-11-02 11:47:15.219142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.173 [2024-11-02 11:47:15.219178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.173 qpair failed and we were unable to recover it. 
00:35:15.173 [2024-11-02 11:47:15.219370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.173 [2024-11-02 11:47:15.219400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.173 qpair failed and we were unable to recover it. 00:35:15.173 [2024-11-02 11:47:15.219577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.173 [2024-11-02 11:47:15.219622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.173 qpair failed and we were unable to recover it. 00:35:15.173 [2024-11-02 11:47:15.219792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.173 [2024-11-02 11:47:15.219837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.173 qpair failed and we were unable to recover it. 00:35:15.173 [2024-11-02 11:47:15.220010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.173 [2024-11-02 11:47:15.220055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.173 qpair failed and we were unable to recover it. 00:35:15.173 [2024-11-02 11:47:15.220181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.173 [2024-11-02 11:47:15.220210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.173 qpair failed and we were unable to recover it. 00:35:15.173 [2024-11-02 11:47:15.220372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.173 [2024-11-02 11:47:15.220418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.173 qpair failed and we were unable to recover it. 00:35:15.173 [2024-11-02 11:47:15.220570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.173 [2024-11-02 11:47:15.220601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.173 qpair failed and we were unable to recover it. 00:35:15.173 [2024-11-02 11:47:15.220790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.173 [2024-11-02 11:47:15.220819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.173 qpair failed and we were unable to recover it. 00:35:15.173 [2024-11-02 11:47:15.220982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.173 [2024-11-02 11:47:15.221011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.173 qpair failed and we were unable to recover it. 00:35:15.173 [2024-11-02 11:47:15.221180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.173 [2024-11-02 11:47:15.221207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.173 qpair failed and we were unable to recover it. 
00:35:15.173 [2024-11-02 11:47:15.221347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.173 [2024-11-02 11:47:15.221388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.173 qpair failed and we were unable to recover it. 00:35:15.173 [2024-11-02 11:47:15.221559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.173 [2024-11-02 11:47:15.221597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.173 qpair failed and we were unable to recover it. 00:35:15.173 [2024-11-02 11:47:15.221738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.173 [2024-11-02 11:47:15.221768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.173 qpair failed and we were unable to recover it. 00:35:15.173 [2024-11-02 11:47:15.221920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.173 [2024-11-02 11:47:15.221950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.173 qpair failed and we were unable to recover it. 00:35:15.173 [2024-11-02 11:47:15.222122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.173 [2024-11-02 11:47:15.222148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.173 qpair failed and we were unable to recover it. 00:35:15.173 [2024-11-02 11:47:15.222299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.173 [2024-11-02 11:47:15.222329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.173 qpair failed and we were unable to recover it. 00:35:15.173 [2024-11-02 11:47:15.222473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.173 [2024-11-02 11:47:15.222520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.173 qpair failed and we were unable to recover it. 00:35:15.173 [2024-11-02 11:47:15.222717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.173 [2024-11-02 11:47:15.222761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.173 qpair failed and we were unable to recover it. 00:35:15.173 [2024-11-02 11:47:15.222913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.173 [2024-11-02 11:47:15.222957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.173 qpair failed and we were unable to recover it. 00:35:15.173 [2024-11-02 11:47:15.223109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.173 [2024-11-02 11:47:15.223137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.173 qpair failed and we were unable to recover it. 
00:35:15.173 [2024-11-02 11:47:15.223314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.173 [2024-11-02 11:47:15.223342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.173 qpair failed and we were unable to recover it. 00:35:15.173 [2024-11-02 11:47:15.223468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.173 [2024-11-02 11:47:15.223497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.173 qpair failed and we were unable to recover it. 00:35:15.173 [2024-11-02 11:47:15.223687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.174 [2024-11-02 11:47:15.223716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.174 qpair failed and we were unable to recover it. 00:35:15.174 [2024-11-02 11:47:15.223857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.174 [2024-11-02 11:47:15.223886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.174 qpair failed and we were unable to recover it. 00:35:15.174 [2024-11-02 11:47:15.224055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.174 [2024-11-02 11:47:15.224085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.174 qpair failed and we were unable to recover it. 00:35:15.174 [2024-11-02 11:47:15.224292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.174 [2024-11-02 11:47:15.224334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.174 qpair failed and we were unable to recover it. 00:35:15.174 [2024-11-02 11:47:15.224543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.174 [2024-11-02 11:47:15.224596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.174 qpair failed and we were unable to recover it. 00:35:15.174 [2024-11-02 11:47:15.224826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.174 [2024-11-02 11:47:15.224879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.174 qpair failed and we were unable to recover it. 00:35:15.174 [2024-11-02 11:47:15.225051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.174 [2024-11-02 11:47:15.225097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.174 qpair failed and we were unable to recover it. 00:35:15.174 [2024-11-02 11:47:15.225223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.174 [2024-11-02 11:47:15.225250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.174 qpair failed and we were unable to recover it. 
00:35:15.174 [2024-11-02 11:47:15.225435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.174 [2024-11-02 11:47:15.225463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.174 qpair failed and we were unable to recover it. 00:35:15.174 [2024-11-02 11:47:15.225674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.174 [2024-11-02 11:47:15.225732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.174 qpair failed and we were unable to recover it. 00:35:15.174 [2024-11-02 11:47:15.225904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.174 [2024-11-02 11:47:15.225949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.174 qpair failed and we were unable to recover it. 00:35:15.174 [2024-11-02 11:47:15.226101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.174 [2024-11-02 11:47:15.226129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.174 qpair failed and we were unable to recover it. 00:35:15.174 [2024-11-02 11:47:15.226312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.174 [2024-11-02 11:47:15.226343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.174 qpair failed and we were unable to recover it. 00:35:15.174 [2024-11-02 11:47:15.226535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.174 [2024-11-02 11:47:15.226580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.174 qpair failed and we were unable to recover it. 00:35:15.174 [2024-11-02 11:47:15.226741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.174 [2024-11-02 11:47:15.226786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.174 qpair failed and we were unable to recover it. 00:35:15.174 [2024-11-02 11:47:15.227040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.174 [2024-11-02 11:47:15.227098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.174 qpair failed and we were unable to recover it. 00:35:15.174 [2024-11-02 11:47:15.227250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.174 [2024-11-02 11:47:15.227284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.174 qpair failed and we were unable to recover it. 00:35:15.174 [2024-11-02 11:47:15.227409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.174 [2024-11-02 11:47:15.227437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.174 qpair failed and we were unable to recover it. 
00:35:15.174 [2024-11-02 11:47:15.227612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.174 [2024-11-02 11:47:15.227645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.174 qpair failed and we were unable to recover it. 00:35:15.174 [2024-11-02 11:47:15.227813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.174 [2024-11-02 11:47:15.227843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.174 qpair failed and we were unable to recover it. 00:35:15.174 [2024-11-02 11:47:15.228002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.174 [2024-11-02 11:47:15.228032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.174 qpair failed and we were unable to recover it. 00:35:15.174 [2024-11-02 11:47:15.228205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.174 [2024-11-02 11:47:15.228233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.174 qpair failed and we were unable to recover it. 00:35:15.174 [2024-11-02 11:47:15.228381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.174 [2024-11-02 11:47:15.228408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.174 qpair failed and we were unable to recover it. 00:35:15.174 [2024-11-02 11:47:15.228538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.174 [2024-11-02 11:47:15.228565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.174 qpair failed and we were unable to recover it. 00:35:15.174 [2024-11-02 11:47:15.228754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.174 [2024-11-02 11:47:15.228783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.174 qpair failed and we were unable to recover it. 00:35:15.174 [2024-11-02 11:47:15.228934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.174 [2024-11-02 11:47:15.228965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.174 qpair failed and we were unable to recover it. 00:35:15.174 [2024-11-02 11:47:15.229086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.174 [2024-11-02 11:47:15.229115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.174 qpair failed and we were unable to recover it. 00:35:15.174 [2024-11-02 11:47:15.229265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.174 [2024-11-02 11:47:15.229293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.174 qpair failed and we were unable to recover it. 
00:35:15.174 [2024-11-02 11:47:15.229411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.174 [2024-11-02 11:47:15.229438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.174 qpair failed and we were unable to recover it. 00:35:15.174 [2024-11-02 11:47:15.229584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.174 [2024-11-02 11:47:15.229614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.174 qpair failed and we were unable to recover it. 00:35:15.174 [2024-11-02 11:47:15.229780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.174 [2024-11-02 11:47:15.229811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.174 qpair failed and we were unable to recover it. 00:35:15.174 [2024-11-02 11:47:15.229961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.174 [2024-11-02 11:47:15.229991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.174 qpair failed and we were unable to recover it. 00:35:15.174 [2024-11-02 11:47:15.230157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.174 [2024-11-02 11:47:15.230187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.174 qpair failed and we were unable to recover it. 00:35:15.174 [2024-11-02 11:47:15.230379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.174 [2024-11-02 11:47:15.230420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.174 qpair failed and we were unable to recover it. 00:35:15.174 [2024-11-02 11:47:15.230567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.174 [2024-11-02 11:47:15.230596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.174 qpair failed and we were unable to recover it. 00:35:15.174 [2024-11-02 11:47:15.230799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.174 [2024-11-02 11:47:15.230851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.174 qpair failed and we were unable to recover it. 00:35:15.174 [2024-11-02 11:47:15.231080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.174 [2024-11-02 11:47:15.231126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.174 qpair failed and we were unable to recover it. 00:35:15.174 [2024-11-02 11:47:15.231248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.174 [2024-11-02 11:47:15.231281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.174 qpair failed and we were unable to recover it. 
00:35:15.174 [2024-11-02 11:47:15.231403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.174 [2024-11-02 11:47:15.231431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.175 qpair failed and we were unable to recover it. 00:35:15.175 [2024-11-02 11:47:15.231662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.175 [2024-11-02 11:47:15.231717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.175 qpair failed and we were unable to recover it. 00:35:15.175 [2024-11-02 11:47:15.231865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.175 [2024-11-02 11:47:15.231892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.175 qpair failed and we were unable to recover it. 00:35:15.175 [2024-11-02 11:47:15.232042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.175 [2024-11-02 11:47:15.232087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.175 qpair failed and we were unable to recover it. 00:35:15.175 [2024-11-02 11:47:15.232241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.175 [2024-11-02 11:47:15.232282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.175 qpair failed and we were unable to recover it. 00:35:15.175 [2024-11-02 11:47:15.232413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.175 [2024-11-02 11:47:15.232442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.175 qpair failed and we were unable to recover it. 00:35:15.175 [2024-11-02 11:47:15.232615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.175 [2024-11-02 11:47:15.232665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.175 qpair failed and we were unable to recover it. 00:35:15.175 [2024-11-02 11:47:15.232813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.175 [2024-11-02 11:47:15.232846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.175 qpair failed and we were unable to recover it. 00:35:15.175 [2024-11-02 11:47:15.232993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.175 [2024-11-02 11:47:15.233024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.175 qpair failed and we were unable to recover it. 00:35:15.175 [2024-11-02 11:47:15.233168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.175 [2024-11-02 11:47:15.233211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.175 qpair failed and we were unable to recover it. 
00:35:15.175 [2024-11-02 11:47:15.233393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.175 [2024-11-02 11:47:15.233420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.175 qpair failed and we were unable to recover it. 00:35:15.175 [2024-11-02 11:47:15.233551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.175 [2024-11-02 11:47:15.233581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.175 qpair failed and we were unable to recover it. 00:35:15.175 [2024-11-02 11:47:15.233736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.175 [2024-11-02 11:47:15.233767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.175 qpair failed and we were unable to recover it. 00:35:15.175 [2024-11-02 11:47:15.233915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.175 [2024-11-02 11:47:15.233960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.175 qpair failed and we were unable to recover it. 00:35:15.175 [2024-11-02 11:47:15.234126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.175 [2024-11-02 11:47:15.234156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.175 qpair failed and we were unable to recover it. 00:35:15.175 [2024-11-02 11:47:15.234303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.175 [2024-11-02 11:47:15.234331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.175 qpair failed and we were unable to recover it. 00:35:15.175 [2024-11-02 11:47:15.234468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.175 [2024-11-02 11:47:15.234498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.175 qpair failed and we were unable to recover it. 00:35:15.175 [2024-11-02 11:47:15.234656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.175 [2024-11-02 11:47:15.234687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.175 qpair failed and we were unable to recover it. 00:35:15.175 [2024-11-02 11:47:15.234850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.175 [2024-11-02 11:47:15.234879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.175 qpair failed and we were unable to recover it. 00:35:15.175 [2024-11-02 11:47:15.235100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.175 [2024-11-02 11:47:15.235159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.175 qpair failed and we were unable to recover it. 
00:35:15.175 [2024-11-02 11:47:15.235304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.175 [2024-11-02 11:47:15.235336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.175 qpair failed and we were unable to recover it. 00:35:15.175 [2024-11-02 11:47:15.235508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.175 [2024-11-02 11:47:15.235554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.175 qpair failed and we were unable to recover it. 00:35:15.175 [2024-11-02 11:47:15.235765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.175 [2024-11-02 11:47:15.235817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.175 qpair failed and we were unable to recover it. 00:35:15.175 [2024-11-02 11:47:15.235991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.175 [2024-11-02 11:47:15.236035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.175 qpair failed and we were unable to recover it. 00:35:15.175 [2024-11-02 11:47:15.236153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.175 [2024-11-02 11:47:15.236180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.175 qpair failed and we were unable to recover it. 00:35:15.175 [2024-11-02 11:47:15.236339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.175 [2024-11-02 11:47:15.236384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.175 qpair failed and we were unable to recover it. 00:35:15.175 [2024-11-02 11:47:15.236549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.175 [2024-11-02 11:47:15.236595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.175 qpair failed and we were unable to recover it. 00:35:15.175 [2024-11-02 11:47:15.236737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.175 [2024-11-02 11:47:15.236782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.175 qpair failed and we were unable to recover it. 00:35:15.175 [2024-11-02 11:47:15.236915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.175 [2024-11-02 11:47:15.236960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.175 qpair failed and we were unable to recover it. 00:35:15.175 [2024-11-02 11:47:15.237227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.175 [2024-11-02 11:47:15.237254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.175 qpair failed and we were unable to recover it. 
00:35:15.175 [2024-11-02 11:47:15.237409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.175 [2024-11-02 11:47:15.237454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.175 qpair failed and we were unable to recover it. 00:35:15.175 [2024-11-02 11:47:15.237614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.175 [2024-11-02 11:47:15.237642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.175 qpair failed and we were unable to recover it. 00:35:15.175 [2024-11-02 11:47:15.237774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.175 [2024-11-02 11:47:15.237819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.175 qpair failed and we were unable to recover it. 00:35:15.175 [2024-11-02 11:47:15.237975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.175 [2024-11-02 11:47:15.238002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.175 qpair failed and we were unable to recover it. 00:35:15.175 [2024-11-02 11:47:15.238122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.175 [2024-11-02 11:47:15.238151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.175 qpair failed and we were unable to recover it. 00:35:15.175 [2024-11-02 11:47:15.238316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.175 [2024-11-02 11:47:15.238347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.175 qpair failed and we were unable to recover it. 00:35:15.175 [2024-11-02 11:47:15.238529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.175 [2024-11-02 11:47:15.238559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.175 qpair failed and we were unable to recover it. 00:35:15.175 [2024-11-02 11:47:15.238677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.175 [2024-11-02 11:47:15.238707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.175 qpair failed and we were unable to recover it. 00:35:15.175 [2024-11-02 11:47:15.238874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.175 [2024-11-02 11:47:15.238904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.175 qpair failed and we were unable to recover it. 00:35:15.176 [2024-11-02 11:47:15.239040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.176 [2024-11-02 11:47:15.239071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.176 qpair failed and we were unable to recover it. 
00:35:15.176 [2024-11-02 11:47:15.239270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.176 [2024-11-02 11:47:15.239306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.176 qpair failed and we were unable to recover it. 00:35:15.176 [2024-11-02 11:47:15.239482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.176 [2024-11-02 11:47:15.239533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.176 qpair failed and we were unable to recover it. 00:35:15.176 [2024-11-02 11:47:15.239696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.176 [2024-11-02 11:47:15.239742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.176 qpair failed and we were unable to recover it. 00:35:15.176 [2024-11-02 11:47:15.239894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.176 [2024-11-02 11:47:15.239939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.176 qpair failed and we were unable to recover it. 00:35:15.176 [2024-11-02 11:47:15.240054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.176 [2024-11-02 11:47:15.240083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.176 qpair failed and we were unable to recover it. 00:35:15.176 [2024-11-02 11:47:15.240210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.176 [2024-11-02 11:47:15.240238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.176 qpair failed and we were unable to recover it. 00:35:15.176 [2024-11-02 11:47:15.240421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.176 [2024-11-02 11:47:15.240471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.176 qpair failed and we were unable to recover it. 00:35:15.176 [2024-11-02 11:47:15.240638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.176 [2024-11-02 11:47:15.240669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.176 qpair failed and we were unable to recover it. 00:35:15.176 [2024-11-02 11:47:15.240859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.176 [2024-11-02 11:47:15.240905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.176 qpair failed and we were unable to recover it. 00:35:15.176 [2024-11-02 11:47:15.241050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.176 [2024-11-02 11:47:15.241077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.176 qpair failed and we were unable to recover it. 
00:35:15.176 [2024-11-02 11:47:15.241235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.176 [2024-11-02 11:47:15.241269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.176 qpair failed and we were unable to recover it. 00:35:15.176 [2024-11-02 11:47:15.241414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.176 [2024-11-02 11:47:15.241460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.176 qpair failed and we were unable to recover it. 00:35:15.176 [2024-11-02 11:47:15.241633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.176 [2024-11-02 11:47:15.241666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.176 qpair failed and we were unable to recover it. 00:35:15.176 [2024-11-02 11:47:15.241848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.176 [2024-11-02 11:47:15.241875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.176 qpair failed and we were unable to recover it. 00:35:15.176 [2024-11-02 11:47:15.242026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.176 [2024-11-02 11:47:15.242071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.176 qpair failed and we were unable to recover it. 00:35:15.176 [2024-11-02 11:47:15.242231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.176 [2024-11-02 11:47:15.242267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.176 qpair failed and we were unable to recover it. 00:35:15.176 [2024-11-02 11:47:15.242421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.176 [2024-11-02 11:47:15.242448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.176 qpair failed and we were unable to recover it. 00:35:15.176 [2024-11-02 11:47:15.242594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.176 [2024-11-02 11:47:15.242623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.176 qpair failed and we were unable to recover it. 00:35:15.176 [2024-11-02 11:47:15.242761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.176 [2024-11-02 11:47:15.242791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.176 qpair failed and we were unable to recover it. 00:35:15.176 [2024-11-02 11:47:15.242934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.176 [2024-11-02 11:47:15.242963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.176 qpair failed and we were unable to recover it. 
00:35:15.181 [2024-11-02 11:47:15.277491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.181 [2024-11-02 11:47:15.277521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.181 qpair failed and we were unable to recover it. 00:35:15.181 [2024-11-02 11:47:15.277720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.181 [2024-11-02 11:47:15.277748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.181 qpair failed and we were unable to recover it. 00:35:15.181 [2024-11-02 11:47:15.277942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.181 [2024-11-02 11:47:15.277972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.181 qpair failed and we were unable to recover it. 00:35:15.181 [2024-11-02 11:47:15.278116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.181 [2024-11-02 11:47:15.278146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.181 qpair failed and we were unable to recover it. 00:35:15.181 [2024-11-02 11:47:15.278325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.181 [2024-11-02 11:47:15.278353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.181 qpair failed and we were unable to recover it. 00:35:15.181 [2024-11-02 11:47:15.278554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.181 [2024-11-02 11:47:15.278580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.181 qpair failed and we were unable to recover it. 00:35:15.181 [2024-11-02 11:47:15.278709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.181 [2024-11-02 11:47:15.278740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.181 qpair failed and we were unable to recover it. 00:35:15.181 [2024-11-02 11:47:15.278930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.181 [2024-11-02 11:47:15.278957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.181 qpair failed and we were unable to recover it. 00:35:15.181 [2024-11-02 11:47:15.279129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.181 [2024-11-02 11:47:15.279159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.181 qpair failed and we were unable to recover it. 00:35:15.181 [2024-11-02 11:47:15.279324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.181 [2024-11-02 11:47:15.279364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.181 qpair failed and we were unable to recover it. 
00:35:15.181 [2024-11-02 11:47:15.279510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.181 [2024-11-02 11:47:15.279536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.181 qpair failed and we were unable to recover it. 00:35:15.181 [2024-11-02 11:47:15.279689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.181 [2024-11-02 11:47:15.279717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.181 qpair failed and we were unable to recover it. 00:35:15.181 [2024-11-02 11:47:15.279897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.181 [2024-11-02 11:47:15.279934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.181 qpair failed and we were unable to recover it. 00:35:15.181 [2024-11-02 11:47:15.280103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.181 [2024-11-02 11:47:15.280129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.181 qpair failed and we were unable to recover it. 00:35:15.181 [2024-11-02 11:47:15.280244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.181 [2024-11-02 11:47:15.280294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.181 qpair failed and we were unable to recover it. 00:35:15.181 [2024-11-02 11:47:15.280458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.181 [2024-11-02 11:47:15.280487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.181 qpair failed and we were unable to recover it. 00:35:15.181 [2024-11-02 11:47:15.280632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.181 [2024-11-02 11:47:15.280659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.181 qpair failed and we were unable to recover it. 00:35:15.181 [2024-11-02 11:47:15.280786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.181 [2024-11-02 11:47:15.280814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.181 qpair failed and we were unable to recover it. 00:35:15.181 [2024-11-02 11:47:15.281010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.182 [2024-11-02 11:47:15.281038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.182 qpair failed and we were unable to recover it. 00:35:15.182 [2024-11-02 11:47:15.281158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.182 [2024-11-02 11:47:15.281184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.182 qpair failed and we were unable to recover it. 
00:35:15.182 [2024-11-02 11:47:15.281311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.182 [2024-11-02 11:47:15.281338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.182 qpair failed and we were unable to recover it. 00:35:15.182 [2024-11-02 11:47:15.281528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.182 [2024-11-02 11:47:15.281558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.182 qpair failed and we were unable to recover it. 00:35:15.182 [2024-11-02 11:47:15.281714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.182 [2024-11-02 11:47:15.281740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.182 qpair failed and we were unable to recover it. 00:35:15.182 [2024-11-02 11:47:15.281889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.182 [2024-11-02 11:47:15.281932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.182 qpair failed and we were unable to recover it. 00:35:15.182 [2024-11-02 11:47:15.282100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.182 [2024-11-02 11:47:15.282130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.182 qpair failed and we were unable to recover it. 00:35:15.182 [2024-11-02 11:47:15.282329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.182 [2024-11-02 11:47:15.282357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.182 qpair failed and we were unable to recover it. 00:35:15.182 [2024-11-02 11:47:15.282527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.182 [2024-11-02 11:47:15.282558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.182 qpair failed and we were unable to recover it. 00:35:15.182 [2024-11-02 11:47:15.282728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.182 [2024-11-02 11:47:15.282754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.182 qpair failed and we were unable to recover it. 00:35:15.182 [2024-11-02 11:47:15.282866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.182 [2024-11-02 11:47:15.282892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.182 qpair failed and we were unable to recover it. 00:35:15.182 [2024-11-02 11:47:15.283039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.182 [2024-11-02 11:47:15.283067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.182 qpair failed and we were unable to recover it. 
00:35:15.182 [2024-11-02 11:47:15.283214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.182 [2024-11-02 11:47:15.283243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.182 qpair failed and we were unable to recover it. 00:35:15.182 [2024-11-02 11:47:15.283424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.182 [2024-11-02 11:47:15.283451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.182 qpair failed and we were unable to recover it. 00:35:15.182 [2024-11-02 11:47:15.283579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.182 [2024-11-02 11:47:15.283607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.182 qpair failed and we were unable to recover it. 00:35:15.182 [2024-11-02 11:47:15.283741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.182 [2024-11-02 11:47:15.283768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.182 qpair failed and we were unable to recover it. 00:35:15.182 [2024-11-02 11:47:15.283932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.182 [2024-11-02 11:47:15.283959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.182 qpair failed and we were unable to recover it. 00:35:15.182 [2024-11-02 11:47:15.284134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.182 [2024-11-02 11:47:15.284178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.182 qpair failed and we were unable to recover it. 00:35:15.182 [2024-11-02 11:47:15.284329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.182 [2024-11-02 11:47:15.284356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.182 qpair failed and we were unable to recover it. 00:35:15.182 [2024-11-02 11:47:15.284516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.182 [2024-11-02 11:47:15.284543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.182 qpair failed and we were unable to recover it. 00:35:15.182 [2024-11-02 11:47:15.284689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.182 [2024-11-02 11:47:15.284718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.182 qpair failed and we were unable to recover it. 00:35:15.182 [2024-11-02 11:47:15.284874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.182 [2024-11-02 11:47:15.284904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.182 qpair failed and we were unable to recover it. 
00:35:15.182 [2024-11-02 11:47:15.285093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.182 [2024-11-02 11:47:15.285119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.182 qpair failed and we were unable to recover it. 00:35:15.182 [2024-11-02 11:47:15.285283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.182 [2024-11-02 11:47:15.285333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.182 qpair failed and we were unable to recover it. 00:35:15.182 [2024-11-02 11:47:15.285521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.182 [2024-11-02 11:47:15.285549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.182 qpair failed and we were unable to recover it. 00:35:15.182 [2024-11-02 11:47:15.285681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.182 [2024-11-02 11:47:15.285708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.182 qpair failed and we were unable to recover it. 00:35:15.182 [2024-11-02 11:47:15.285862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.182 [2024-11-02 11:47:15.285889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.182 qpair failed and we were unable to recover it. 00:35:15.182 [2024-11-02 11:47:15.286037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.182 [2024-11-02 11:47:15.286067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.182 qpair failed and we were unable to recover it. 00:35:15.182 [2024-11-02 11:47:15.286234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.182 [2024-11-02 11:47:15.286272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.182 qpair failed and we were unable to recover it. 00:35:15.182 [2024-11-02 11:47:15.286418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.182 [2024-11-02 11:47:15.286448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.182 qpair failed and we were unable to recover it. 00:35:15.182 [2024-11-02 11:47:15.286636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.182 [2024-11-02 11:47:15.286666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.182 qpair failed and we were unable to recover it. 00:35:15.182 [2024-11-02 11:47:15.286817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.182 [2024-11-02 11:47:15.286844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.182 qpair failed and we were unable to recover it. 
00:35:15.182 [2024-11-02 11:47:15.286969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.182 [2024-11-02 11:47:15.286997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.182 qpair failed and we were unable to recover it. 00:35:15.182 [2024-11-02 11:47:15.287198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.182 [2024-11-02 11:47:15.287238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.182 qpair failed and we were unable to recover it. 00:35:15.182 [2024-11-02 11:47:15.287406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.182 [2024-11-02 11:47:15.287436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.182 qpair failed and we were unable to recover it. 00:35:15.182 [2024-11-02 11:47:15.287547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.182 [2024-11-02 11:47:15.287592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.182 qpair failed and we were unable to recover it. 00:35:15.182 [2024-11-02 11:47:15.287767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.182 [2024-11-02 11:47:15.287794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.182 qpair failed and we were unable to recover it. 00:35:15.182 [2024-11-02 11:47:15.287923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.182 [2024-11-02 11:47:15.287950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.182 qpair failed and we were unable to recover it. 00:35:15.182 [2024-11-02 11:47:15.288059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.182 [2024-11-02 11:47:15.288086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.182 qpair failed and we were unable to recover it. 00:35:15.183 [2024-11-02 11:47:15.288212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.183 [2024-11-02 11:47:15.288239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.183 qpair failed and we were unable to recover it. 00:35:15.183 [2024-11-02 11:47:15.288376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.183 [2024-11-02 11:47:15.288403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.183 qpair failed and we were unable to recover it. 00:35:15.183 [2024-11-02 11:47:15.288571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.183 [2024-11-02 11:47:15.288600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.183 qpair failed and we were unable to recover it. 
00:35:15.183 [2024-11-02 11:47:15.288773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.183 [2024-11-02 11:47:15.288800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.183 qpair failed and we were unable to recover it. 00:35:15.183 [2024-11-02 11:47:15.288922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.183 [2024-11-02 11:47:15.288948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.183 qpair failed and we were unable to recover it. 00:35:15.183 [2024-11-02 11:47:15.289079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.183 [2024-11-02 11:47:15.289106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.183 qpair failed and we were unable to recover it. 00:35:15.183 [2024-11-02 11:47:15.289288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.183 [2024-11-02 11:47:15.289333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.183 qpair failed and we were unable to recover it. 00:35:15.183 [2024-11-02 11:47:15.289487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.183 [2024-11-02 11:47:15.289514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.183 qpair failed and we were unable to recover it. 00:35:15.183 [2024-11-02 11:47:15.289664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.183 [2024-11-02 11:47:15.289709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.183 qpair failed and we were unable to recover it. 00:35:15.183 [2024-11-02 11:47:15.289914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.183 [2024-11-02 11:47:15.289960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.183 qpair failed and we were unable to recover it. 00:35:15.183 [2024-11-02 11:47:15.290132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.183 [2024-11-02 11:47:15.290159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.183 qpair failed and we were unable to recover it. 00:35:15.183 [2024-11-02 11:47:15.290274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.183 [2024-11-02 11:47:15.290302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.183 qpair failed and we were unable to recover it. 00:35:15.183 [2024-11-02 11:47:15.290429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.183 [2024-11-02 11:47:15.290456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.183 qpair failed and we were unable to recover it. 
00:35:15.183 [2024-11-02 11:47:15.290631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.183 [2024-11-02 11:47:15.290658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.183 qpair failed and we were unable to recover it. 00:35:15.183 [2024-11-02 11:47:15.290805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.183 [2024-11-02 11:47:15.290848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.183 qpair failed and we were unable to recover it. 00:35:15.183 [2024-11-02 11:47:15.291002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.183 [2024-11-02 11:47:15.291031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.183 qpair failed and we were unable to recover it. 00:35:15.183 [2024-11-02 11:47:15.291208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.183 [2024-11-02 11:47:15.291240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.183 qpair failed and we were unable to recover it. 00:35:15.183 [2024-11-02 11:47:15.291366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.183 [2024-11-02 11:47:15.291393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.183 qpair failed and we were unable to recover it. 00:35:15.183 [2024-11-02 11:47:15.291557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.183 [2024-11-02 11:47:15.291587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.183 qpair failed and we were unable to recover it. 00:35:15.183 [2024-11-02 11:47:15.291759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.183 [2024-11-02 11:47:15.291786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.183 qpair failed and we were unable to recover it. 00:35:15.183 [2024-11-02 11:47:15.291903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.183 [2024-11-02 11:47:15.291946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.183 qpair failed and we were unable to recover it. 00:35:15.183 [2024-11-02 11:47:15.292077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.183 [2024-11-02 11:47:15.292107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.183 qpair failed and we were unable to recover it. 00:35:15.183 [2024-11-02 11:47:15.292246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.183 [2024-11-02 11:47:15.292282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.183 qpair failed and we were unable to recover it. 
00:35:15.183 [2024-11-02 11:47:15.292409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.183 [2024-11-02 11:47:15.292436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.183 qpair failed and we were unable to recover it. 00:35:15.183 [2024-11-02 11:47:15.292588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.183 [2024-11-02 11:47:15.292615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.183 qpair failed and we were unable to recover it. 00:35:15.183 [2024-11-02 11:47:15.292741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.183 [2024-11-02 11:47:15.292768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.183 qpair failed and we were unable to recover it. 00:35:15.183 [2024-11-02 11:47:15.292932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.183 [2024-11-02 11:47:15.292963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.183 qpair failed and we were unable to recover it. 00:35:15.183 [2024-11-02 11:47:15.293141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.183 [2024-11-02 11:47:15.293186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.183 qpair failed and we were unable to recover it. 00:35:15.183 [2024-11-02 11:47:15.293383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.183 [2024-11-02 11:47:15.293412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.183 qpair failed and we were unable to recover it. 00:35:15.183 [2024-11-02 11:47:15.293541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.183 [2024-11-02 11:47:15.293588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.183 qpair failed and we were unable to recover it. 00:35:15.183 [2024-11-02 11:47:15.293832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.183 [2024-11-02 11:47:15.293863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.183 qpair failed and we were unable to recover it. 00:35:15.183 [2024-11-02 11:47:15.294029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.183 [2024-11-02 11:47:15.294056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.183 qpair failed and we were unable to recover it. 00:35:15.183 [2024-11-02 11:47:15.294207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.183 [2024-11-02 11:47:15.294233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.183 qpair failed and we were unable to recover it. 
00:35:15.183 [2024-11-02 11:47:15.294363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.183 [2024-11-02 11:47:15.294391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.183 qpair failed and we were unable to recover it. 00:35:15.183 [2024-11-02 11:47:15.294545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.183 [2024-11-02 11:47:15.294572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.183 qpair failed and we were unable to recover it. 00:35:15.183 [2024-11-02 11:47:15.294686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.183 [2024-11-02 11:47:15.294713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.183 qpair failed and we were unable to recover it. 00:35:15.183 [2024-11-02 11:47:15.294859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.183 [2024-11-02 11:47:15.294887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.183 qpair failed and we were unable to recover it. 00:35:15.183 [2024-11-02 11:47:15.295039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.183 [2024-11-02 11:47:15.295065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.183 qpair failed and we were unable to recover it. 00:35:15.183 [2024-11-02 11:47:15.295237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.184 [2024-11-02 11:47:15.295274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.184 qpair failed and we were unable to recover it. 00:35:15.184 [2024-11-02 11:47:15.295426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.184 [2024-11-02 11:47:15.295455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.184 qpair failed and we were unable to recover it. 00:35:15.184 [2024-11-02 11:47:15.295582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.184 [2024-11-02 11:47:15.295609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.184 qpair failed and we were unable to recover it. 00:35:15.184 [2024-11-02 11:47:15.295769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.184 [2024-11-02 11:47:15.295796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.184 qpair failed and we were unable to recover it. 00:35:15.184 [2024-11-02 11:47:15.295948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.184 [2024-11-02 11:47:15.295975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.184 qpair failed and we were unable to recover it. 
00:35:15.184 [2024-11-02 11:47:15.296102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.184 [2024-11-02 11:47:15.296133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.184 qpair failed and we were unable to recover it. 00:35:15.184 [2024-11-02 11:47:15.296266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.184 [2024-11-02 11:47:15.296293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.184 qpair failed and we were unable to recover it. 00:35:15.184 [2024-11-02 11:47:15.296435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.184 [2024-11-02 11:47:15.296462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.184 qpair failed and we were unable to recover it. 00:35:15.184 [2024-11-02 11:47:15.296605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.184 [2024-11-02 11:47:15.296632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.184 qpair failed and we were unable to recover it. 00:35:15.184 [2024-11-02 11:47:15.296756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.184 [2024-11-02 11:47:15.296801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.184 qpair failed and we were unable to recover it. 00:35:15.184 [2024-11-02 11:47:15.296961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.184 [2024-11-02 11:47:15.296991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.184 qpair failed and we were unable to recover it. 00:35:15.184 [2024-11-02 11:47:15.297161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.184 [2024-11-02 11:47:15.297188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.184 qpair failed and we were unable to recover it. 00:35:15.184 [2024-11-02 11:47:15.297331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.184 [2024-11-02 11:47:15.297359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.184 qpair failed and we were unable to recover it. 00:35:15.184 [2024-11-02 11:47:15.297485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.184 [2024-11-02 11:47:15.297512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.184 qpair failed and we were unable to recover it. 00:35:15.184 [2024-11-02 11:47:15.297695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.184 [2024-11-02 11:47:15.297722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.184 qpair failed and we were unable to recover it. 
00:35:15.184 [2024-11-02 11:47:15.297883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.184 [2024-11-02 11:47:15.297912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.184 qpair failed and we were unable to recover it. 00:35:15.184 [2024-11-02 11:47:15.298074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.184 [2024-11-02 11:47:15.298105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.184 qpair failed and we were unable to recover it. 00:35:15.184 [2024-11-02 11:47:15.298252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.184 [2024-11-02 11:47:15.298285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.184 qpair failed and we were unable to recover it. 00:35:15.184 [2024-11-02 11:47:15.298412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.184 [2024-11-02 11:47:15.298439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.184 qpair failed and we were unable to recover it. 00:35:15.184 [2024-11-02 11:47:15.298560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.184 [2024-11-02 11:47:15.298587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.184 qpair failed and we were unable to recover it. 00:35:15.184 [2024-11-02 11:47:15.298758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.184 [2024-11-02 11:47:15.298785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.184 qpair failed and we were unable to recover it. 00:35:15.184 [2024-11-02 11:47:15.298907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.184 [2024-11-02 11:47:15.298934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.184 qpair failed and we were unable to recover it. 00:35:15.184 [2024-11-02 11:47:15.299083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.184 [2024-11-02 11:47:15.299110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.184 qpair failed and we were unable to recover it. 00:35:15.184 [2024-11-02 11:47:15.299261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.184 [2024-11-02 11:47:15.299288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.184 qpair failed and we were unable to recover it. 00:35:15.184 [2024-11-02 11:47:15.299404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.184 [2024-11-02 11:47:15.299431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.184 qpair failed and we were unable to recover it. 
00:35:15.184 [2024-11-02 11:47:15.299610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.184 [2024-11-02 11:47:15.299637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.184 qpair failed and we were unable to recover it. 00:35:15.184 [2024-11-02 11:47:15.299767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.184 [2024-11-02 11:47:15.299794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.184 qpair failed and we were unable to recover it. 00:35:15.184 [2024-11-02 11:47:15.299956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.184 [2024-11-02 11:47:15.299986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.184 qpair failed and we were unable to recover it. 00:35:15.184 [2024-11-02 11:47:15.300126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.184 [2024-11-02 11:47:15.300157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.184 qpair failed and we were unable to recover it. 00:35:15.184 [2024-11-02 11:47:15.300333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.184 [2024-11-02 11:47:15.300361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.184 qpair failed and we were unable to recover it. 00:35:15.184 [2024-11-02 11:47:15.300484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.184 [2024-11-02 11:47:15.300511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.184 qpair failed and we were unable to recover it. 00:35:15.184 [2024-11-02 11:47:15.300661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.184 [2024-11-02 11:47:15.300688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.184 qpair failed and we were unable to recover it. 00:35:15.184 [2024-11-02 11:47:15.300805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.184 [2024-11-02 11:47:15.300832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.184 qpair failed and we were unable to recover it. 00:35:15.184 [2024-11-02 11:47:15.300965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.184 [2024-11-02 11:47:15.301008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.184 qpair failed and we were unable to recover it. 00:35:15.184 [2024-11-02 11:47:15.301148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.184 [2024-11-02 11:47:15.301178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.184 qpair failed and we were unable to recover it. 
00:35:15.184 [2024-11-02 11:47:15.301368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.184 [2024-11-02 11:47:15.301395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.184 qpair failed and we were unable to recover it. 00:35:15.184 [2024-11-02 11:47:15.301519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.184 [2024-11-02 11:47:15.301547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.184 qpair failed and we were unable to recover it. 00:35:15.184 [2024-11-02 11:47:15.301728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.184 [2024-11-02 11:47:15.301755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.184 qpair failed and we were unable to recover it. 00:35:15.184 [2024-11-02 11:47:15.301885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.184 [2024-11-02 11:47:15.301912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.184 qpair failed and we were unable to recover it. 00:35:15.184 [2024-11-02 11:47:15.302038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.185 [2024-11-02 11:47:15.302067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.185 qpair failed and we were unable to recover it. 00:35:15.185 [2024-11-02 11:47:15.302246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.185 [2024-11-02 11:47:15.302282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.185 qpair failed and we were unable to recover it. 00:35:15.185 [2024-11-02 11:47:15.302445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.185 [2024-11-02 11:47:15.302472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.185 qpair failed and we were unable to recover it. 00:35:15.185 [2024-11-02 11:47:15.302670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.185 [2024-11-02 11:47:15.302699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.185 qpair failed and we were unable to recover it. 00:35:15.185 [2024-11-02 11:47:15.302847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.185 [2024-11-02 11:47:15.302896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.185 qpair failed and we were unable to recover it. 00:35:15.185 [2024-11-02 11:47:15.303035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.185 [2024-11-02 11:47:15.303062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.185 qpair failed and we were unable to recover it. 
00:35:15.185 [2024-11-02 11:47:15.303200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:35:15.185 [2024-11-02 11:47:15.303227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 
00:35:15.185 qpair failed and we were unable to recover it. 
[... the same two-line error pair from posix_sock_create ("connect() failed, errno = 111") and nvme_tcp_qpair_connect_sock ("sock connection error ... with addr=10.0.0.2, port=4420") repeats continuously from 11:47:15.303 through 11:47:15.342, alternating between tqpair=0x1ddc690 and tqpair=0x7f9ccc000b90, with every attempt ending in "qpair failed and we were unable to recover it." ...]
00:35:15.190 [2024-11-02 11:47:15.342912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.190 [2024-11-02 11:47:15.342939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.190 qpair failed and we were unable to recover it. 00:35:15.190 [2024-11-02 11:47:15.343168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.190 [2024-11-02 11:47:15.343213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.190 qpair failed and we were unable to recover it. 00:35:15.190 [2024-11-02 11:47:15.343412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.190 [2024-11-02 11:47:15.343439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.190 qpair failed and we were unable to recover it. 00:35:15.190 [2024-11-02 11:47:15.343613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.190 [2024-11-02 11:47:15.343643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.190 qpair failed and we were unable to recover it. 00:35:15.190 [2024-11-02 11:47:15.343839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.190 [2024-11-02 11:47:15.343866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.190 qpair failed and we were unable to recover it. 00:35:15.190 [2024-11-02 11:47:15.344036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.190 [2024-11-02 11:47:15.344063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.190 qpair failed and we were unable to recover it. 00:35:15.190 [2024-11-02 11:47:15.344179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.190 [2024-11-02 11:47:15.344206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.191 qpair failed and we were unable to recover it. 00:35:15.191 [2024-11-02 11:47:15.344374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.191 [2024-11-02 11:47:15.344414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.191 qpair failed and we were unable to recover it. 00:35:15.191 [2024-11-02 11:47:15.344547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.191 [2024-11-02 11:47:15.344576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.191 qpair failed and we were unable to recover it. 00:35:15.191 [2024-11-02 11:47:15.344722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.191 [2024-11-02 11:47:15.344767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.191 qpair failed and we were unable to recover it. 
00:35:15.191 [2024-11-02 11:47:15.344934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.191 [2024-11-02 11:47:15.344963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.191 qpair failed and we were unable to recover it. 00:35:15.191 [2024-11-02 11:47:15.345153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.191 [2024-11-02 11:47:15.345180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.191 qpair failed and we were unable to recover it. 00:35:15.191 [2024-11-02 11:47:15.345314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.191 [2024-11-02 11:47:15.345342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.191 qpair failed and we were unable to recover it. 00:35:15.191 [2024-11-02 11:47:15.345496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.191 [2024-11-02 11:47:15.345523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.191 qpair failed and we were unable to recover it. 00:35:15.191 [2024-11-02 11:47:15.345704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.191 [2024-11-02 11:47:15.345731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.191 qpair failed and we were unable to recover it. 00:35:15.191 [2024-11-02 11:47:15.345842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.191 [2024-11-02 11:47:15.345868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.191 qpair failed and we were unable to recover it. 00:35:15.191 [2024-11-02 11:47:15.346024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.191 [2024-11-02 11:47:15.346055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.191 qpair failed and we were unable to recover it. 00:35:15.191 [2024-11-02 11:47:15.346198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.191 [2024-11-02 11:47:15.346225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.191 qpair failed and we were unable to recover it. 00:35:15.191 [2024-11-02 11:47:15.346363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.191 [2024-11-02 11:47:15.346395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.191 qpair failed and we were unable to recover it. 00:35:15.191 [2024-11-02 11:47:15.346518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.191 [2024-11-02 11:47:15.346562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.191 qpair failed and we were unable to recover it. 
00:35:15.191 [2024-11-02 11:47:15.346737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.191 [2024-11-02 11:47:15.346764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.191 qpair failed and we were unable to recover it. 00:35:15.191 [2024-11-02 11:47:15.346911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.191 [2024-11-02 11:47:15.346938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.191 qpair failed and we were unable to recover it. 00:35:15.191 [2024-11-02 11:47:15.347061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.191 [2024-11-02 11:47:15.347091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.191 qpair failed and we were unable to recover it. 00:35:15.191 [2024-11-02 11:47:15.347245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.191 [2024-11-02 11:47:15.347280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.191 qpair failed and we were unable to recover it. 00:35:15.191 [2024-11-02 11:47:15.347434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.191 [2024-11-02 11:47:15.347461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.191 qpair failed and we were unable to recover it. 00:35:15.191 [2024-11-02 11:47:15.347633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.191 [2024-11-02 11:47:15.347664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.191 qpair failed and we were unable to recover it. 00:35:15.191 [2024-11-02 11:47:15.347894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.191 [2024-11-02 11:47:15.347922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.191 qpair failed and we were unable to recover it. 00:35:15.191 [2024-11-02 11:47:15.348063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.191 [2024-11-02 11:47:15.348093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.191 qpair failed and we were unable to recover it. 00:35:15.191 [2024-11-02 11:47:15.348271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.191 [2024-11-02 11:47:15.348300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.191 qpair failed and we were unable to recover it. 00:35:15.191 [2024-11-02 11:47:15.348455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.191 [2024-11-02 11:47:15.348482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.191 qpair failed and we were unable to recover it. 
00:35:15.191 [2024-11-02 11:47:15.348612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.191 [2024-11-02 11:47:15.348639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.191 qpair failed and we were unable to recover it. 00:35:15.191 [2024-11-02 11:47:15.348785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.191 [2024-11-02 11:47:15.348811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.191 qpair failed and we were unable to recover it. 00:35:15.191 [2024-11-02 11:47:15.348992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.191 [2024-11-02 11:47:15.349020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.191 qpair failed and we were unable to recover it. 00:35:15.191 [2024-11-02 11:47:15.349139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.191 [2024-11-02 11:47:15.349166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.191 qpair failed and we were unable to recover it. 00:35:15.191 [2024-11-02 11:47:15.349297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.191 [2024-11-02 11:47:15.349327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.191 qpair failed and we were unable to recover it. 00:35:15.191 [2024-11-02 11:47:15.349452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.191 [2024-11-02 11:47:15.349479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.191 qpair failed and we were unable to recover it. 00:35:15.191 [2024-11-02 11:47:15.349598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.191 [2024-11-02 11:47:15.349625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.191 qpair failed and we were unable to recover it. 00:35:15.191 [2024-11-02 11:47:15.349774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.191 [2024-11-02 11:47:15.349801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.191 qpair failed and we were unable to recover it. 00:35:15.191 [2024-11-02 11:47:15.349947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.191 [2024-11-02 11:47:15.349974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.191 qpair failed and we were unable to recover it. 00:35:15.191 [2024-11-02 11:47:15.350115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.191 [2024-11-02 11:47:15.350145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.191 qpair failed and we were unable to recover it. 
00:35:15.191 [2024-11-02 11:47:15.350346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.191 [2024-11-02 11:47:15.350393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.191 qpair failed and we were unable to recover it. 00:35:15.191 [2024-11-02 11:47:15.350537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.191 [2024-11-02 11:47:15.350564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.191 qpair failed and we were unable to recover it. 00:35:15.191 [2024-11-02 11:47:15.350713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.191 [2024-11-02 11:47:15.350740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.191 qpair failed and we were unable to recover it. 00:35:15.191 [2024-11-02 11:47:15.350862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.191 [2024-11-02 11:47:15.350890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.191 qpair failed and we were unable to recover it. 00:35:15.191 [2024-11-02 11:47:15.351035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.191 [2024-11-02 11:47:15.351062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.191 qpair failed and we were unable to recover it. 00:35:15.191 [2024-11-02 11:47:15.351224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.192 [2024-11-02 11:47:15.351267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.192 qpair failed and we were unable to recover it. 00:35:15.192 [2024-11-02 11:47:15.351442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.192 [2024-11-02 11:47:15.351471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.192 qpair failed and we were unable to recover it. 00:35:15.192 [2024-11-02 11:47:15.351636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.192 [2024-11-02 11:47:15.351663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.192 qpair failed and we were unable to recover it. 00:35:15.192 [2024-11-02 11:47:15.351845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.192 [2024-11-02 11:47:15.351872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.192 qpair failed and we were unable to recover it. 00:35:15.192 [2024-11-02 11:47:15.352013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.192 [2024-11-02 11:47:15.352040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.192 qpair failed and we were unable to recover it. 
00:35:15.192 [2024-11-02 11:47:15.352191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.192 [2024-11-02 11:47:15.352218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.192 qpair failed and we were unable to recover it. 00:35:15.192 [2024-11-02 11:47:15.352332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.192 [2024-11-02 11:47:15.352358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.192 qpair failed and we were unable to recover it. 00:35:15.192 [2024-11-02 11:47:15.352507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.192 [2024-11-02 11:47:15.352551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.192 qpair failed and we were unable to recover it. 00:35:15.192 [2024-11-02 11:47:15.352713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.192 [2024-11-02 11:47:15.352739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.192 qpair failed and we were unable to recover it. 00:35:15.192 [2024-11-02 11:47:15.352905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.192 [2024-11-02 11:47:15.352935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.192 qpair failed and we were unable to recover it. 00:35:15.192 [2024-11-02 11:47:15.353130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.192 [2024-11-02 11:47:15.353159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.192 qpair failed and we were unable to recover it. 00:35:15.192 [2024-11-02 11:47:15.353306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.192 [2024-11-02 11:47:15.353333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.192 qpair failed and we were unable to recover it. 00:35:15.192 [2024-11-02 11:47:15.353450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.192 [2024-11-02 11:47:15.353477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.192 qpair failed and we were unable to recover it. 00:35:15.192 [2024-11-02 11:47:15.353654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.192 [2024-11-02 11:47:15.353685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.192 qpair failed and we were unable to recover it. 00:35:15.192 [2024-11-02 11:47:15.353863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.192 [2024-11-02 11:47:15.353890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.192 qpair failed and we were unable to recover it. 
00:35:15.192 [2024-11-02 11:47:15.354058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.192 [2024-11-02 11:47:15.354089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.192 qpair failed and we were unable to recover it. 00:35:15.192 [2024-11-02 11:47:15.354218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.192 [2024-11-02 11:47:15.354249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.192 qpair failed and we were unable to recover it. 00:35:15.192 [2024-11-02 11:47:15.354395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.192 [2024-11-02 11:47:15.354422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.192 qpair failed and we were unable to recover it. 00:35:15.192 [2024-11-02 11:47:15.354548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.192 [2024-11-02 11:47:15.354576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.192 qpair failed and we were unable to recover it. 00:35:15.192 [2024-11-02 11:47:15.354765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.192 [2024-11-02 11:47:15.354798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.192 qpair failed and we were unable to recover it. 00:35:15.192 [2024-11-02 11:47:15.354975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.192 [2024-11-02 11:47:15.355002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.192 qpair failed and we were unable to recover it. 00:35:15.192 [2024-11-02 11:47:15.355175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.192 [2024-11-02 11:47:15.355217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.192 qpair failed and we were unable to recover it. 00:35:15.192 [2024-11-02 11:47:15.355373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.192 [2024-11-02 11:47:15.355401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.192 qpair failed and we were unable to recover it. 00:35:15.192 [2024-11-02 11:47:15.355530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.192 [2024-11-02 11:47:15.355557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.192 qpair failed and we were unable to recover it. 00:35:15.192 [2024-11-02 11:47:15.355734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.192 [2024-11-02 11:47:15.355762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.192 qpair failed and we were unable to recover it. 
00:35:15.192 [2024-11-02 11:47:15.355931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.192 [2024-11-02 11:47:15.355977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.192 qpair failed and we were unable to recover it. 00:35:15.192 [2024-11-02 11:47:15.356147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.192 [2024-11-02 11:47:15.356174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.192 qpair failed and we were unable to recover it. 00:35:15.192 [2024-11-02 11:47:15.356293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.192 [2024-11-02 11:47:15.356321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.192 qpair failed and we were unable to recover it. 00:35:15.192 [2024-11-02 11:47:15.356469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.192 [2024-11-02 11:47:15.356505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.192 qpair failed and we were unable to recover it. 00:35:15.192 [2024-11-02 11:47:15.356616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.192 [2024-11-02 11:47:15.356642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.192 qpair failed and we were unable to recover it. 00:35:15.192 [2024-11-02 11:47:15.356791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.192 [2024-11-02 11:47:15.356819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.192 qpair failed and we were unable to recover it. 00:35:15.192 [2024-11-02 11:47:15.356968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.192 [2024-11-02 11:47:15.356996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.192 qpair failed and we were unable to recover it. 00:35:15.192 [2024-11-02 11:47:15.357127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.192 [2024-11-02 11:47:15.357154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.192 qpair failed and we were unable to recover it. 00:35:15.192 [2024-11-02 11:47:15.357303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.192 [2024-11-02 11:47:15.357331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.192 qpair failed and we were unable to recover it. 00:35:15.192 [2024-11-02 11:47:15.357482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.192 [2024-11-02 11:47:15.357508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.192 qpair failed and we were unable to recover it. 
00:35:15.192 [2024-11-02 11:47:15.357725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.192 [2024-11-02 11:47:15.357751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.192 qpair failed and we were unable to recover it. 00:35:15.192 [2024-11-02 11:47:15.357890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.192 [2024-11-02 11:47:15.357920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.192 qpair failed and we were unable to recover it. 00:35:15.192 [2024-11-02 11:47:15.358116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.192 [2024-11-02 11:47:15.358143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.192 qpair failed and we were unable to recover it. 00:35:15.192 [2024-11-02 11:47:15.358269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.192 [2024-11-02 11:47:15.358296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.192 qpair failed and we were unable to recover it. 00:35:15.193 [2024-11-02 11:47:15.358448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.193 [2024-11-02 11:47:15.358477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.193 qpair failed and we were unable to recover it. 00:35:15.193 [2024-11-02 11:47:15.358661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.193 [2024-11-02 11:47:15.358706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.193 qpair failed and we were unable to recover it. 00:35:15.193 [2024-11-02 11:47:15.358860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.193 [2024-11-02 11:47:15.358887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.193 qpair failed and we were unable to recover it. 00:35:15.193 [2024-11-02 11:47:15.359022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.193 [2024-11-02 11:47:15.359075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.193 qpair failed and we were unable to recover it. 00:35:15.193 [2024-11-02 11:47:15.359251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.193 [2024-11-02 11:47:15.359284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.193 qpair failed and we were unable to recover it. 00:35:15.193 [2024-11-02 11:47:15.359440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.193 [2024-11-02 11:47:15.359467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.193 qpair failed and we were unable to recover it. 
00:35:15.193 [2024-11-02 11:47:15.359594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.193 [2024-11-02 11:47:15.359638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.193 qpair failed and we were unable to recover it. 00:35:15.193 [2024-11-02 11:47:15.359838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.193 [2024-11-02 11:47:15.359865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.193 qpair failed and we were unable to recover it. 00:35:15.193 [2024-11-02 11:47:15.360038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.193 [2024-11-02 11:47:15.360084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.193 qpair failed and we were unable to recover it. 00:35:15.193 [2024-11-02 11:47:15.360220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.193 [2024-11-02 11:47:15.360250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.193 qpair failed and we were unable to recover it. 00:35:15.193 [2024-11-02 11:47:15.360427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.193 [2024-11-02 11:47:15.360454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.193 qpair failed and we were unable to recover it. 00:35:15.193 [2024-11-02 11:47:15.360627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.193 [2024-11-02 11:47:15.360655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.193 qpair failed and we were unable to recover it. 00:35:15.193 [2024-11-02 11:47:15.360796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.193 [2024-11-02 11:47:15.360826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.193 qpair failed and we were unable to recover it. 00:35:15.193 [2024-11-02 11:47:15.360987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.193 [2024-11-02 11:47:15.361017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.193 qpair failed and we were unable to recover it. 00:35:15.193 [2024-11-02 11:47:15.361246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.193 [2024-11-02 11:47:15.361287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.193 qpair failed and we were unable to recover it. 00:35:15.193 [2024-11-02 11:47:15.361425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.193 [2024-11-02 11:47:15.361457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.193 qpair failed and we were unable to recover it. 
00:35:15.193 [2024-11-02 11:47:15.361612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.193 [2024-11-02 11:47:15.361640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.193 qpair failed and we were unable to recover it. 00:35:15.193 [2024-11-02 11:47:15.361758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.193 [2024-11-02 11:47:15.361785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.193 qpair failed and we were unable to recover it. 00:35:15.193 [2024-11-02 11:47:15.361900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.193 [2024-11-02 11:47:15.361927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.193 qpair failed and we were unable to recover it. 00:35:15.193 [2024-11-02 11:47:15.362122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.193 [2024-11-02 11:47:15.362152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.193 qpair failed and we were unable to recover it. 00:35:15.193 [2024-11-02 11:47:15.362324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.193 [2024-11-02 11:47:15.362351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.193 qpair failed and we were unable to recover it. 00:35:15.193 [2024-11-02 11:47:15.362470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.193 [2024-11-02 11:47:15.362497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.193 qpair failed and we were unable to recover it. 00:35:15.193 [2024-11-02 11:47:15.362678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.193 [2024-11-02 11:47:15.362707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.193 qpair failed and we were unable to recover it. 00:35:15.193 [2024-11-02 11:47:15.362853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.193 [2024-11-02 11:47:15.362880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.193 qpair failed and we were unable to recover it. 00:35:15.193 [2024-11-02 11:47:15.363035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.193 [2024-11-02 11:47:15.363062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.193 qpair failed and we were unable to recover it. 00:35:15.193 [2024-11-02 11:47:15.363204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.193 [2024-11-02 11:47:15.363244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.193 qpair failed and we were unable to recover it. 
00:35:15.193 [2024-11-02 11:47:15.363431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.193 [2024-11-02 11:47:15.363461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.193 qpair failed and we were unable to recover it. 00:35:15.193 [2024-11-02 11:47:15.363587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.193 [2024-11-02 11:47:15.363615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.193 qpair failed and we were unable to recover it. 00:35:15.193 [2024-11-02 11:47:15.363772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.193 [2024-11-02 11:47:15.363801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.193 qpair failed and we were unable to recover it. 00:35:15.193 [2024-11-02 11:47:15.363985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.193 [2024-11-02 11:47:15.364012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.193 qpair failed and we were unable to recover it. 00:35:15.193 [2024-11-02 11:47:15.364133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.193 [2024-11-02 11:47:15.364160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.193 qpair failed and we were unable to recover it. 00:35:15.193 [2024-11-02 11:47:15.364313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.193 [2024-11-02 11:47:15.364342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.193 qpair failed and we were unable to recover it. 00:35:15.193 [2024-11-02 11:47:15.364492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.193 [2024-11-02 11:47:15.364530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.193 qpair failed and we were unable to recover it. 00:35:15.193 [2024-11-02 11:47:15.364699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.193 [2024-11-02 11:47:15.364728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.193 qpair failed and we were unable to recover it. 00:35:15.193 [2024-11-02 11:47:15.364934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.193 [2024-11-02 11:47:15.364981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.193 qpair failed and we were unable to recover it. 00:35:15.193 [2024-11-02 11:47:15.365120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.193 [2024-11-02 11:47:15.365147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.193 qpair failed and we were unable to recover it. 
00:35:15.193 [2024-11-02 11:47:15.365301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.193 [2024-11-02 11:47:15.365328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.193 qpair failed and we were unable to recover it. 00:35:15.193 [2024-11-02 11:47:15.365445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.193 [2024-11-02 11:47:15.365472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.193 qpair failed and we were unable to recover it. 00:35:15.193 [2024-11-02 11:47:15.365618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.194 [2024-11-02 11:47:15.365645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.194 qpair failed and we were unable to recover it. 00:35:15.194 [2024-11-02 11:47:15.365759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.194 [2024-11-02 11:47:15.365803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.194 qpair failed and we were unable to recover it. 00:35:15.194 [2024-11-02 11:47:15.365999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.194 [2024-11-02 11:47:15.366026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.194 qpair failed and we were unable to recover it. 00:35:15.194 [2024-11-02 11:47:15.366174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.194 [2024-11-02 11:47:15.366204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.194 qpair failed and we were unable to recover it. 00:35:15.194 [2024-11-02 11:47:15.366329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.194 [2024-11-02 11:47:15.366359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.194 qpair failed and we were unable to recover it. 00:35:15.194 [2024-11-02 11:47:15.366515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.194 [2024-11-02 11:47:15.366543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.194 qpair failed and we were unable to recover it. 00:35:15.194 [2024-11-02 11:47:15.366687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.194 [2024-11-02 11:47:15.366719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.194 qpair failed and we were unable to recover it. 00:35:15.194 [2024-11-02 11:47:15.366872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.194 [2024-11-02 11:47:15.366902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.194 qpair failed and we were unable to recover it. 
00:35:15.194 [2024-11-02 11:47:15.367071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.194 [2024-11-02 11:47:15.367101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420
00:35:15.194 qpair failed and we were unable to recover it.
00:35:15.194 [2024-11-02 11:47:15.368097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.194 [2024-11-02 11:47:15.368127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420
00:35:15.194 qpair failed and we were unable to recover it.
00:35:15.197 [2024-11-02 11:47:15.390954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.197 [2024-11-02 11:47:15.390995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420
00:35:15.197 qpair failed and we were unable to recover it.
00:35:15.198 [2024-11-02 11:47:15.397887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.198 [2024-11-02 11:47:15.397928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420
00:35:15.198 qpair failed and we were unable to recover it.
[The same three-line sequence -- connect() failed, errno = 111 from posix.c:1055:posix_sock_create, sock connection error from nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock, and "qpair failed and we were unable to recover it." -- repeats continuously from 11:47:15.367 through 11:47:15.406 for tqpairs 0x7f9ccc000b90, 0x1ddc690, 0x7f9cc0000b90, and 0x7f9cc4000b90, all targeting addr=10.0.0.2, port=4420.]
00:35:15.199 [2024-11-02 11:47:15.406528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.199 [2024-11-02 11:47:15.406555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.199 qpair failed and we were unable to recover it. 00:35:15.199 [2024-11-02 11:47:15.406683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.199 [2024-11-02 11:47:15.406710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.199 qpair failed and we were unable to recover it. 00:35:15.200 [2024-11-02 11:47:15.406835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.200 [2024-11-02 11:47:15.406862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.200 qpair failed and we were unable to recover it. 00:35:15.200 [2024-11-02 11:47:15.407007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.200 [2024-11-02 11:47:15.407033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.200 qpair failed and we were unable to recover it. 00:35:15.200 [2024-11-02 11:47:15.407200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.200 [2024-11-02 11:47:15.407241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.200 qpair failed and we were unable to recover it. 00:35:15.200 [2024-11-02 11:47:15.407409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.200 [2024-11-02 11:47:15.407437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.200 qpair failed and we were unable to recover it. 00:35:15.200 [2024-11-02 11:47:15.407589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.200 [2024-11-02 11:47:15.407617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.200 qpair failed and we were unable to recover it. 00:35:15.200 [2024-11-02 11:47:15.407762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.200 [2024-11-02 11:47:15.407790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.200 qpair failed and we were unable to recover it. 00:35:15.200 [2024-11-02 11:47:15.407966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.200 [2024-11-02 11:47:15.407993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.200 qpair failed and we were unable to recover it. 00:35:15.200 [2024-11-02 11:47:15.408168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.200 [2024-11-02 11:47:15.408196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.200 qpair failed and we were unable to recover it. 
00:35:15.200 [2024-11-02 11:47:15.408341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.200 [2024-11-02 11:47:15.408369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.200 qpair failed and we were unable to recover it. 00:35:15.200 [2024-11-02 11:47:15.408525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.200 [2024-11-02 11:47:15.408571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.200 qpair failed and we were unable to recover it. 00:35:15.200 [2024-11-02 11:47:15.408743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.200 [2024-11-02 11:47:15.408769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.200 qpair failed and we were unable to recover it. 00:35:15.200 [2024-11-02 11:47:15.408931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.200 [2024-11-02 11:47:15.408957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.200 qpair failed and we were unable to recover it. 00:35:15.200 [2024-11-02 11:47:15.409161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.200 [2024-11-02 11:47:15.409195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.200 qpair failed and we were unable to recover it. 00:35:15.200 [2024-11-02 11:47:15.409388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.200 [2024-11-02 11:47:15.409415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.200 qpair failed and we were unable to recover it. 00:35:15.200 [2024-11-02 11:47:15.409607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.200 [2024-11-02 11:47:15.409636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.200 qpair failed and we were unable to recover it. 00:35:15.200 [2024-11-02 11:47:15.409828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.200 [2024-11-02 11:47:15.409857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.200 qpair failed and we were unable to recover it. 00:35:15.200 [2024-11-02 11:47:15.410053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.200 [2024-11-02 11:47:15.410078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.200 qpair failed and we were unable to recover it. 00:35:15.200 [2024-11-02 11:47:15.410200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.200 [2024-11-02 11:47:15.410225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.200 qpair failed and we were unable to recover it. 
00:35:15.200 [2024-11-02 11:47:15.410360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.200 [2024-11-02 11:47:15.410388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.200 qpair failed and we were unable to recover it. 00:35:15.200 [2024-11-02 11:47:15.410549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.200 [2024-11-02 11:47:15.410576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.200 qpair failed and we were unable to recover it. 00:35:15.200 [2024-11-02 11:47:15.410769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.200 [2024-11-02 11:47:15.410799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.200 qpair failed and we were unable to recover it. 00:35:15.200 [2024-11-02 11:47:15.410992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.200 [2024-11-02 11:47:15.411019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.200 qpair failed and we were unable to recover it. 00:35:15.200 [2024-11-02 11:47:15.411192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.200 [2024-11-02 11:47:15.411218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.200 qpair failed and we were unable to recover it. 00:35:15.200 [2024-11-02 11:47:15.411362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.200 [2024-11-02 11:47:15.411403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.200 qpair failed and we were unable to recover it. 00:35:15.200 [2024-11-02 11:47:15.411589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.200 [2024-11-02 11:47:15.411617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.200 qpair failed and we were unable to recover it. 00:35:15.200 [2024-11-02 11:47:15.411775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.200 [2024-11-02 11:47:15.411803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.200 qpair failed and we were unable to recover it. 00:35:15.200 [2024-11-02 11:47:15.411960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.200 [2024-11-02 11:47:15.411988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.200 qpair failed and we were unable to recover it. 00:35:15.200 [2024-11-02 11:47:15.412108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.200 [2024-11-02 11:47:15.412137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.200 qpair failed and we were unable to recover it. 
00:35:15.200 [2024-11-02 11:47:15.412280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.200 [2024-11-02 11:47:15.412309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.200 qpair failed and we were unable to recover it. 00:35:15.200 [2024-11-02 11:47:15.412431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.200 [2024-11-02 11:47:15.412459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.200 qpair failed and we were unable to recover it. 00:35:15.200 [2024-11-02 11:47:15.412639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.200 [2024-11-02 11:47:15.412666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.200 qpair failed and we were unable to recover it. 00:35:15.200 [2024-11-02 11:47:15.412811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.200 [2024-11-02 11:47:15.412838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.200 qpair failed and we were unable to recover it. 00:35:15.200 [2024-11-02 11:47:15.412988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.200 [2024-11-02 11:47:15.413016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.200 qpair failed and we were unable to recover it. 00:35:15.200 [2024-11-02 11:47:15.413169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.200 [2024-11-02 11:47:15.413196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.200 qpair failed and we were unable to recover it. 00:35:15.200 [2024-11-02 11:47:15.413322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.200 [2024-11-02 11:47:15.413351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.200 qpair failed and we were unable to recover it. 00:35:15.200 [2024-11-02 11:47:15.413531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.200 [2024-11-02 11:47:15.413559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.200 qpair failed and we were unable to recover it. 00:35:15.200 [2024-11-02 11:47:15.413715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.200 [2024-11-02 11:47:15.413742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.200 qpair failed and we were unable to recover it. 00:35:15.200 [2024-11-02 11:47:15.413862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.200 [2024-11-02 11:47:15.413891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.200 qpair failed and we were unable to recover it. 
00:35:15.200 [2024-11-02 11:47:15.414065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.201 [2024-11-02 11:47:15.414094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.201 qpair failed and we were unable to recover it. 00:35:15.201 [2024-11-02 11:47:15.414270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.201 [2024-11-02 11:47:15.414314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.201 qpair failed and we were unable to recover it. 00:35:15.201 [2024-11-02 11:47:15.414465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.201 [2024-11-02 11:47:15.414493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.201 qpair failed and we were unable to recover it. 00:35:15.201 [2024-11-02 11:47:15.414664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.201 [2024-11-02 11:47:15.414691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.201 qpair failed and we were unable to recover it. 00:35:15.201 [2024-11-02 11:47:15.414842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.201 [2024-11-02 11:47:15.414869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.201 qpair failed and we were unable to recover it. 00:35:15.201 [2024-11-02 11:47:15.415013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.201 [2024-11-02 11:47:15.415041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.201 qpair failed and we were unable to recover it. 00:35:15.201 [2024-11-02 11:47:15.415163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.201 [2024-11-02 11:47:15.415190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.201 qpair failed and we were unable to recover it. 00:35:15.201 [2024-11-02 11:47:15.415365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.201 [2024-11-02 11:47:15.415393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.201 qpair failed and we were unable to recover it. 00:35:15.201 [2024-11-02 11:47:15.415543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.201 [2024-11-02 11:47:15.415570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.201 qpair failed and we were unable to recover it. 00:35:15.201 [2024-11-02 11:47:15.415745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.201 [2024-11-02 11:47:15.415773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.201 qpair failed and we were unable to recover it. 
00:35:15.201 [2024-11-02 11:47:15.415948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.201 [2024-11-02 11:47:15.415976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.201 qpair failed and we were unable to recover it. 00:35:15.201 [2024-11-02 11:47:15.416132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.201 [2024-11-02 11:47:15.416160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.201 qpair failed and we were unable to recover it. 00:35:15.201 [2024-11-02 11:47:15.416283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.201 [2024-11-02 11:47:15.416312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.201 qpair failed and we were unable to recover it. 00:35:15.201 [2024-11-02 11:47:15.416434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.201 [2024-11-02 11:47:15.416461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.201 qpair failed and we were unable to recover it. 00:35:15.201 [2024-11-02 11:47:15.416605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.201 [2024-11-02 11:47:15.416637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.201 qpair failed and we were unable to recover it. 00:35:15.201 [2024-11-02 11:47:15.416773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.201 [2024-11-02 11:47:15.416800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.201 qpair failed and we were unable to recover it. 00:35:15.201 [2024-11-02 11:47:15.416952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.201 [2024-11-02 11:47:15.416979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.201 qpair failed and we were unable to recover it. 00:35:15.201 [2024-11-02 11:47:15.417130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.201 [2024-11-02 11:47:15.417188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.201 qpair failed and we were unable to recover it. 00:35:15.201 [2024-11-02 11:47:15.417343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.201 [2024-11-02 11:47:15.417373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.201 qpair failed and we were unable to recover it. 00:35:15.201 [2024-11-02 11:47:15.417521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.201 [2024-11-02 11:47:15.417549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.201 qpair failed and we were unable to recover it. 
00:35:15.201 [2024-11-02 11:47:15.417746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.201 [2024-11-02 11:47:15.417792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.201 qpair failed and we were unable to recover it. 00:35:15.201 [2024-11-02 11:47:15.417978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.201 [2024-11-02 11:47:15.418014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.201 qpair failed and we were unable to recover it. 00:35:15.201 [2024-11-02 11:47:15.418196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.201 [2024-11-02 11:47:15.418223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.201 qpair failed and we were unable to recover it. 00:35:15.201 [2024-11-02 11:47:15.418376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.201 [2024-11-02 11:47:15.418404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.201 qpair failed and we were unable to recover it. 00:35:15.201 [2024-11-02 11:47:15.418574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.201 [2024-11-02 11:47:15.418602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.201 qpair failed and we were unable to recover it. 00:35:15.201 [2024-11-02 11:47:15.418780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.201 [2024-11-02 11:47:15.418827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.201 qpair failed and we were unable to recover it. 00:35:15.201 [2024-11-02 11:47:15.419028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.201 [2024-11-02 11:47:15.419073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.201 qpair failed and we were unable to recover it. 00:35:15.201 [2024-11-02 11:47:15.419250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.201 [2024-11-02 11:47:15.419284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.201 qpair failed and we were unable to recover it. 00:35:15.201 [2024-11-02 11:47:15.419467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.201 [2024-11-02 11:47:15.419496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.201 qpair failed and we were unable to recover it. 00:35:15.201 [2024-11-02 11:47:15.419641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.201 [2024-11-02 11:47:15.419687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.201 qpair failed and we were unable to recover it. 
00:35:15.201 [2024-11-02 11:47:15.419952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.201 [2024-11-02 11:47:15.420002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.201 qpair failed and we were unable to recover it. 00:35:15.201 [2024-11-02 11:47:15.420150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.201 [2024-11-02 11:47:15.420176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.201 qpair failed and we were unable to recover it. 00:35:15.201 [2024-11-02 11:47:15.420356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.201 [2024-11-02 11:47:15.420384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.201 qpair failed and we were unable to recover it. 00:35:15.201 [2024-11-02 11:47:15.420531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.201 [2024-11-02 11:47:15.420576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.201 qpair failed and we were unable to recover it. 00:35:15.201 [2024-11-02 11:47:15.420775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.201 [2024-11-02 11:47:15.420820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.201 qpair failed and we were unable to recover it. 00:35:15.201 [2024-11-02 11:47:15.421044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.201 [2024-11-02 11:47:15.421071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.201 qpair failed and we were unable to recover it. 00:35:15.201 [2024-11-02 11:47:15.421220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.201 [2024-11-02 11:47:15.421247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.201 qpair failed and we were unable to recover it. 00:35:15.201 [2024-11-02 11:47:15.421433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.201 [2024-11-02 11:47:15.421461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.201 qpair failed and we were unable to recover it. 00:35:15.201 [2024-11-02 11:47:15.421576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.201 [2024-11-02 11:47:15.421603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.202 qpair failed and we were unable to recover it. 00:35:15.202 [2024-11-02 11:47:15.421747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.202 [2024-11-02 11:47:15.421775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.202 qpair failed and we were unable to recover it. 
00:35:15.202 [2024-11-02 11:47:15.421943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.202 [2024-11-02 11:47:15.421971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.202 qpair failed and we were unable to recover it. 00:35:15.202 [2024-11-02 11:47:15.422120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.202 [2024-11-02 11:47:15.422152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.202 qpair failed and we were unable to recover it. 00:35:15.202 [2024-11-02 11:47:15.422329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.202 [2024-11-02 11:47:15.422357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.202 qpair failed and we were unable to recover it. 00:35:15.202 [2024-11-02 11:47:15.422500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.202 [2024-11-02 11:47:15.422527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.202 qpair failed and we were unable to recover it. 00:35:15.202 [2024-11-02 11:47:15.422671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.202 [2024-11-02 11:47:15.422698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.202 qpair failed and we were unable to recover it. 00:35:15.202 [2024-11-02 11:47:15.422870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.202 [2024-11-02 11:47:15.422898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.202 qpair failed and we were unable to recover it. 00:35:15.202 [2024-11-02 11:47:15.423050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.202 [2024-11-02 11:47:15.423076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.202 qpair failed and we were unable to recover it. 00:35:15.202 [2024-11-02 11:47:15.423231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.202 [2024-11-02 11:47:15.423264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.202 qpair failed and we were unable to recover it. 00:35:15.202 [2024-11-02 11:47:15.423465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.202 [2024-11-02 11:47:15.423508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.202 qpair failed and we were unable to recover it. 00:35:15.202 [2024-11-02 11:47:15.423657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.202 [2024-11-02 11:47:15.423702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.202 qpair failed and we were unable to recover it. 
00:35:15.202 [2024-11-02 11:47:15.423878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.202 [2024-11-02 11:47:15.423927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.202 qpair failed and we were unable to recover it. 00:35:15.202 [2024-11-02 11:47:15.424074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.202 [2024-11-02 11:47:15.424101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.202 qpair failed and we were unable to recover it. 00:35:15.202 [2024-11-02 11:47:15.424222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.202 [2024-11-02 11:47:15.424248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.202 qpair failed and we were unable to recover it. 00:35:15.202 [2024-11-02 11:47:15.424445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.202 [2024-11-02 11:47:15.424474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.202 qpair failed and we were unable to recover it. 00:35:15.202 [2024-11-02 11:47:15.424651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.202 [2024-11-02 11:47:15.424678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.202 qpair failed and we were unable to recover it. 00:35:15.202 [2024-11-02 11:47:15.424808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.202 [2024-11-02 11:47:15.424837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.202 qpair failed and we were unable to recover it. 00:35:15.202 [2024-11-02 11:47:15.424958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.202 [2024-11-02 11:47:15.424985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.202 qpair failed and we were unable to recover it. 00:35:15.202 [2024-11-02 11:47:15.425142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.202 [2024-11-02 11:47:15.425170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.202 qpair failed and we were unable to recover it. 00:35:15.202 [2024-11-02 11:47:15.425320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.202 [2024-11-02 11:47:15.425348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.202 qpair failed and we were unable to recover it. 00:35:15.202 [2024-11-02 11:47:15.425523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.202 [2024-11-02 11:47:15.425551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.202 qpair failed and we were unable to recover it. 
00:35:15.202 [2024-11-02 11:47:15.425701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.202 [2024-11-02 11:47:15.425729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.202 qpair failed and we were unable to recover it. 00:35:15.202 [2024-11-02 11:47:15.425917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.202 [2024-11-02 11:47:15.425964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.202 qpair failed and we were unable to recover it. 00:35:15.202 [2024-11-02 11:47:15.426140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.202 [2024-11-02 11:47:15.426167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.202 qpair failed and we were unable to recover it. 00:35:15.202 [2024-11-02 11:47:15.426303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.202 [2024-11-02 11:47:15.426333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.202 qpair failed and we were unable to recover it. 00:35:15.202 [2024-11-02 11:47:15.426500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.202 [2024-11-02 11:47:15.426527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.202 qpair failed and we were unable to recover it. 00:35:15.202 [2024-11-02 11:47:15.426702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.202 [2024-11-02 11:47:15.426749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.202 qpair failed and we were unable to recover it. 00:35:15.202 [2024-11-02 11:47:15.426893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.202 [2024-11-02 11:47:15.426938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.202 qpair failed and we were unable to recover it. 00:35:15.202 [2024-11-02 11:47:15.427091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.202 [2024-11-02 11:47:15.427119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.202 qpair failed and we were unable to recover it. 00:35:15.202 [2024-11-02 11:47:15.427301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.202 [2024-11-02 11:47:15.427330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.202 qpair failed and we were unable to recover it. 00:35:15.202 [2024-11-02 11:47:15.427531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.202 [2024-11-02 11:47:15.427560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.202 qpair failed and we were unable to recover it. 
00:35:15.202 [2024-11-02 11:47:15.427777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.202 [2024-11-02 11:47:15.427822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.202 qpair failed and we were unable to recover it. 00:35:15.202 [2024-11-02 11:47:15.427977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.202 [2024-11-02 11:47:15.428003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.202 qpair failed and we were unable to recover it. 00:35:15.202 [2024-11-02 11:47:15.428152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.202 [2024-11-02 11:47:15.428180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.202 qpair failed and we were unable to recover it. 00:35:15.202 [2024-11-02 11:47:15.428327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.202 [2024-11-02 11:47:15.428372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.202 qpair failed and we were unable to recover it. 00:35:15.202 [2024-11-02 11:47:15.428574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.202 [2024-11-02 11:47:15.428617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.202 qpair failed and we were unable to recover it. 00:35:15.202 [2024-11-02 11:47:15.428775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.202 [2024-11-02 11:47:15.428819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.202 qpair failed and we were unable to recover it. 00:35:15.202 [2024-11-02 11:47:15.428994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.202 [2024-11-02 11:47:15.429020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.202 qpair failed and we were unable to recover it. 00:35:15.203 [2024-11-02 11:47:15.429144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.203 [2024-11-02 11:47:15.429171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.203 qpair failed and we were unable to recover it. 00:35:15.203 [2024-11-02 11:47:15.429357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.203 [2024-11-02 11:47:15.429403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.203 qpair failed and we were unable to recover it. 00:35:15.203 [2024-11-02 11:47:15.429540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.203 [2024-11-02 11:47:15.429585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.203 qpair failed and we were unable to recover it. 
00:35:15.203 [2024-11-02 11:47:15.429780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.203 [2024-11-02 11:47:15.429810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.203 qpair failed and we were unable to recover it. 00:35:15.203 [2024-11-02 11:47:15.429972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.203 [2024-11-02 11:47:15.430003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.203 qpair failed and we were unable to recover it. 00:35:15.203 [2024-11-02 11:47:15.430144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.203 [2024-11-02 11:47:15.430171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.203 qpair failed and we were unable to recover it. 00:35:15.203 [2024-11-02 11:47:15.430325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.203 [2024-11-02 11:47:15.430352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.203 qpair failed and we were unable to recover it. 00:35:15.203 [2024-11-02 11:47:15.430471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.203 [2024-11-02 11:47:15.430498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.203 qpair failed and we were unable to recover it. 00:35:15.203 [2024-11-02 11:47:15.430622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.203 [2024-11-02 11:47:15.430651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.203 qpair failed and we were unable to recover it. 00:35:15.203 [2024-11-02 11:47:15.430783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.203 [2024-11-02 11:47:15.430810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.203 qpair failed and we were unable to recover it. 00:35:15.203 [2024-11-02 11:47:15.430956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.203 [2024-11-02 11:47:15.430983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.203 qpair failed and we were unable to recover it. 00:35:15.203 [2024-11-02 11:47:15.431160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.203 [2024-11-02 11:47:15.431186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.203 qpair failed and we were unable to recover it. 00:35:15.203 [2024-11-02 11:47:15.431363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.203 [2024-11-02 11:47:15.431406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.203 qpair failed and we were unable to recover it. 
00:35:15.203 [2024-11-02 11:47:15.431564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.203 [2024-11-02 11:47:15.431591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420
00:35:15.203 qpair failed and we were unable to recover it.
[... the same three-line error sequence (posix.c:1055:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats back-to-back for every reconnect attempt from 2024-11-02 11:47:15.431 through 11:47:15.473 (elapsed log time 00:35:15.203 to 00:35:15.209), with no attempt succeeding ...]
00:35:15.209 [2024-11-02 11:47:15.473425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.209 [2024-11-02 11:47:15.473469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.209 qpair failed and we were unable to recover it. 00:35:15.209 [2024-11-02 11:47:15.473639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.209 [2024-11-02 11:47:15.473684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.209 qpair failed and we were unable to recover it. 00:35:15.209 [2024-11-02 11:47:15.473856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.209 [2024-11-02 11:47:15.473901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.209 qpair failed and we were unable to recover it. 00:35:15.209 [2024-11-02 11:47:15.474053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.209 [2024-11-02 11:47:15.474081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.209 qpair failed and we were unable to recover it. 00:35:15.209 [2024-11-02 11:47:15.474232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.209 [2024-11-02 11:47:15.474264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.209 qpair failed and we were unable to recover it. 00:35:15.209 [2024-11-02 11:47:15.474449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.209 [2024-11-02 11:47:15.474476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.209 qpair failed and we were unable to recover it. 00:35:15.209 [2024-11-02 11:47:15.474628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.209 [2024-11-02 11:47:15.474658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.209 qpair failed and we were unable to recover it. 00:35:15.209 [2024-11-02 11:47:15.474829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.209 [2024-11-02 11:47:15.474875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.209 qpair failed and we were unable to recover it. 00:35:15.209 [2024-11-02 11:47:15.475057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.209 [2024-11-02 11:47:15.475085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.209 qpair failed and we were unable to recover it. 00:35:15.209 [2024-11-02 11:47:15.475244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.209 [2024-11-02 11:47:15.475280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.209 qpair failed and we were unable to recover it. 
00:35:15.209 [2024-11-02 11:47:15.475454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.209 [2024-11-02 11:47:15.475498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.209 qpair failed and we were unable to recover it. 00:35:15.209 [2024-11-02 11:47:15.475650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.209 [2024-11-02 11:47:15.475695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.209 qpair failed and we were unable to recover it. 00:35:15.209 [2024-11-02 11:47:15.475868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.209 [2024-11-02 11:47:15.475918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.209 qpair failed and we were unable to recover it. 00:35:15.209 [2024-11-02 11:47:15.476058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.209 [2024-11-02 11:47:15.476085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.209 qpair failed and we were unable to recover it. 00:35:15.209 [2024-11-02 11:47:15.476212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.209 [2024-11-02 11:47:15.476239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.209 qpair failed and we were unable to recover it. 00:35:15.209 [2024-11-02 11:47:15.476405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.209 [2024-11-02 11:47:15.476432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.209 qpair failed and we were unable to recover it. 00:35:15.209 [2024-11-02 11:47:15.476578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.209 [2024-11-02 11:47:15.476621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.209 qpair failed and we were unable to recover it. 00:35:15.209 [2024-11-02 11:47:15.476800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.209 [2024-11-02 11:47:15.476842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.209 qpair failed and we were unable to recover it. 00:35:15.209 [2024-11-02 11:47:15.476986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.209 [2024-11-02 11:47:15.477012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.209 qpair failed and we were unable to recover it. 00:35:15.209 [2024-11-02 11:47:15.477183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.209 [2024-11-02 11:47:15.477211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.209 qpair failed and we were unable to recover it. 
00:35:15.209 [2024-11-02 11:47:15.477340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.209 [2024-11-02 11:47:15.477366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.209 qpair failed and we were unable to recover it. 00:35:15.209 [2024-11-02 11:47:15.477524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.209 [2024-11-02 11:47:15.477574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.209 qpair failed and we were unable to recover it. 00:35:15.209 [2024-11-02 11:47:15.477775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.209 [2024-11-02 11:47:15.477819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.209 qpair failed and we were unable to recover it. 00:35:15.209 [2024-11-02 11:47:15.478002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.209 [2024-11-02 11:47:15.478030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.209 qpair failed and we were unable to recover it. 00:35:15.209 [2024-11-02 11:47:15.478158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.209 [2024-11-02 11:47:15.478186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.209 qpair failed and we were unable to recover it. 00:35:15.209 [2024-11-02 11:47:15.478325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.209 [2024-11-02 11:47:15.478372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.209 qpair failed and we were unable to recover it. 00:35:15.209 [2024-11-02 11:47:15.478567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.209 [2024-11-02 11:47:15.478610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.209 qpair failed and we were unable to recover it. 00:35:15.209 [2024-11-02 11:47:15.478753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.209 [2024-11-02 11:47:15.478795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.209 qpair failed and we were unable to recover it. 00:35:15.209 [2024-11-02 11:47:15.478938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.209 [2024-11-02 11:47:15.478966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.209 qpair failed and we were unable to recover it. 00:35:15.209 [2024-11-02 11:47:15.479096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.209 [2024-11-02 11:47:15.479123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.209 qpair failed and we were unable to recover it. 
00:35:15.209 [2024-11-02 11:47:15.479274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.209 [2024-11-02 11:47:15.479302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.209 qpair failed and we were unable to recover it. 00:35:15.209 [2024-11-02 11:47:15.479475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.209 [2024-11-02 11:47:15.479521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.209 qpair failed and we were unable to recover it. 00:35:15.209 [2024-11-02 11:47:15.479721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.209 [2024-11-02 11:47:15.479766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.209 qpair failed and we were unable to recover it. 00:35:15.209 [2024-11-02 11:47:15.479918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.209 [2024-11-02 11:47:15.479945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.209 qpair failed and we were unable to recover it. 00:35:15.209 [2024-11-02 11:47:15.480119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.209 [2024-11-02 11:47:15.480146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.209 qpair failed and we were unable to recover it. 00:35:15.209 [2024-11-02 11:47:15.480281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.209 [2024-11-02 11:47:15.480309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.209 qpair failed and we were unable to recover it. 00:35:15.209 [2024-11-02 11:47:15.480481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.209 [2024-11-02 11:47:15.480525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.209 qpair failed and we were unable to recover it. 00:35:15.209 [2024-11-02 11:47:15.480691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.210 [2024-11-02 11:47:15.480734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.210 qpair failed and we were unable to recover it. 00:35:15.210 [2024-11-02 11:47:15.480919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.210 [2024-11-02 11:47:15.480945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.210 qpair failed and we were unable to recover it. 00:35:15.210 [2024-11-02 11:47:15.481096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.210 [2024-11-02 11:47:15.481123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.210 qpair failed and we were unable to recover it. 
00:35:15.210 [2024-11-02 11:47:15.481272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.210 [2024-11-02 11:47:15.481299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.210 qpair failed and we were unable to recover it. 00:35:15.210 [2024-11-02 11:47:15.481413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.210 [2024-11-02 11:47:15.481440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.210 qpair failed and we were unable to recover it. 00:35:15.210 [2024-11-02 11:47:15.481586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.210 [2024-11-02 11:47:15.481633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.210 qpair failed and we were unable to recover it. 00:35:15.210 [2024-11-02 11:47:15.481785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.210 [2024-11-02 11:47:15.481812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.210 qpair failed and we were unable to recover it. 00:35:15.210 [2024-11-02 11:47:15.481985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.210 [2024-11-02 11:47:15.482011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.210 qpair failed and we were unable to recover it. 00:35:15.210 [2024-11-02 11:47:15.482160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.210 [2024-11-02 11:47:15.482188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.210 qpair failed and we were unable to recover it. 00:35:15.210 [2024-11-02 11:47:15.482354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.210 [2024-11-02 11:47:15.482398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.210 qpair failed and we were unable to recover it. 00:35:15.210 [2024-11-02 11:47:15.482528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.210 [2024-11-02 11:47:15.482574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.210 qpair failed and we were unable to recover it. 00:35:15.210 [2024-11-02 11:47:15.482797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.210 [2024-11-02 11:47:15.482841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.210 qpair failed and we were unable to recover it. 00:35:15.210 [2024-11-02 11:47:15.483016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.210 [2024-11-02 11:47:15.483048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.210 qpair failed and we were unable to recover it. 
00:35:15.210 [2024-11-02 11:47:15.483222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.210 [2024-11-02 11:47:15.483253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.210 qpair failed and we were unable to recover it. 00:35:15.210 [2024-11-02 11:47:15.483410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.210 [2024-11-02 11:47:15.483440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.210 qpair failed and we were unable to recover it. 00:35:15.210 [2024-11-02 11:47:15.483603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.210 [2024-11-02 11:47:15.483632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.210 qpair failed and we were unable to recover it. 00:35:15.210 [2024-11-02 11:47:15.483792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.210 [2024-11-02 11:47:15.483824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.210 qpair failed and we were unable to recover it. 00:35:15.210 [2024-11-02 11:47:15.484076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.210 [2024-11-02 11:47:15.484128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.210 qpair failed and we were unable to recover it. 00:35:15.210 [2024-11-02 11:47:15.484339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.210 [2024-11-02 11:47:15.484367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.210 qpair failed and we were unable to recover it. 00:35:15.210 [2024-11-02 11:47:15.484531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.210 [2024-11-02 11:47:15.484561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.210 qpair failed and we were unable to recover it. 00:35:15.210 [2024-11-02 11:47:15.484775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.210 [2024-11-02 11:47:15.484839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.210 qpair failed and we were unable to recover it. 00:35:15.210 [2024-11-02 11:47:15.485029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.210 [2024-11-02 11:47:15.485059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.210 qpair failed and we were unable to recover it. 00:35:15.210 [2024-11-02 11:47:15.485222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.210 [2024-11-02 11:47:15.485250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.210 qpair failed and we were unable to recover it. 
00:35:15.210 [2024-11-02 11:47:15.485400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.210 [2024-11-02 11:47:15.485427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.210 qpair failed and we were unable to recover it. 00:35:15.210 [2024-11-02 11:47:15.485553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.210 [2024-11-02 11:47:15.485604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.210 qpair failed and we were unable to recover it. 00:35:15.210 [2024-11-02 11:47:15.485795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.210 [2024-11-02 11:47:15.485824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.210 qpair failed and we were unable to recover it. 00:35:15.210 [2024-11-02 11:47:15.486007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.210 [2024-11-02 11:47:15.486037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.210 qpair failed and we were unable to recover it. 00:35:15.210 [2024-11-02 11:47:15.486225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.210 [2024-11-02 11:47:15.486254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.210 qpair failed and we were unable to recover it. 00:35:15.210 [2024-11-02 11:47:15.486436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.210 [2024-11-02 11:47:15.486463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.210 qpair failed and we were unable to recover it. 00:35:15.210 [2024-11-02 11:47:15.486635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.210 [2024-11-02 11:47:15.486666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.210 qpair failed and we were unable to recover it. 00:35:15.210 [2024-11-02 11:47:15.486827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.210 [2024-11-02 11:47:15.486857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.210 qpair failed and we were unable to recover it. 00:35:15.210 [2024-11-02 11:47:15.487018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.210 [2024-11-02 11:47:15.487047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.210 qpair failed and we were unable to recover it. 00:35:15.210 [2024-11-02 11:47:15.487211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.210 [2024-11-02 11:47:15.487237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.210 qpair failed and we were unable to recover it. 
00:35:15.210 [2024-11-02 11:47:15.487383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.210 [2024-11-02 11:47:15.487410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.210 qpair failed and we were unable to recover it. 00:35:15.210 [2024-11-02 11:47:15.487582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.210 [2024-11-02 11:47:15.487610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.210 qpair failed and we were unable to recover it. 00:35:15.210 [2024-11-02 11:47:15.487793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.210 [2024-11-02 11:47:15.487819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.210 qpair failed and we were unable to recover it. 00:35:15.210 [2024-11-02 11:47:15.487967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.210 [2024-11-02 11:47:15.487994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.210 qpair failed and we were unable to recover it. 00:35:15.210 [2024-11-02 11:47:15.488169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.210 [2024-11-02 11:47:15.488196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.210 qpair failed and we were unable to recover it. 00:35:15.210 [2024-11-02 11:47:15.488351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.210 [2024-11-02 11:47:15.488379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.211 qpair failed and we were unable to recover it. 00:35:15.211 [2024-11-02 11:47:15.488573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.211 [2024-11-02 11:47:15.488602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.211 qpair failed and we were unable to recover it. 00:35:15.211 [2024-11-02 11:47:15.488855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.211 [2024-11-02 11:47:15.488905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.211 qpair failed and we were unable to recover it. 00:35:15.211 [2024-11-02 11:47:15.489042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.211 [2024-11-02 11:47:15.489072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.211 qpair failed and we were unable to recover it. 00:35:15.211 [2024-11-02 11:47:15.489234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.211 [2024-11-02 11:47:15.489273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.211 qpair failed and we were unable to recover it. 
00:35:15.211 [2024-11-02 11:47:15.489476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.211 [2024-11-02 11:47:15.489503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.211 qpair failed and we were unable to recover it. 00:35:15.211 [2024-11-02 11:47:15.489714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.211 [2024-11-02 11:47:15.489743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.211 qpair failed and we were unable to recover it. 00:35:15.211 [2024-11-02 11:47:15.489921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.211 [2024-11-02 11:47:15.489951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.211 qpair failed and we were unable to recover it. 00:35:15.211 [2024-11-02 11:47:15.490150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.211 [2024-11-02 11:47:15.490180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.211 qpair failed and we were unable to recover it. 00:35:15.211 [2024-11-02 11:47:15.490329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.211 [2024-11-02 11:47:15.490356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.211 qpair failed and we were unable to recover it. 00:35:15.211 [2024-11-02 11:47:15.490510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.211 [2024-11-02 11:47:15.490554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.211 qpair failed and we were unable to recover it. 00:35:15.211 [2024-11-02 11:47:15.490725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.211 [2024-11-02 11:47:15.490752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.211 qpair failed and we were unable to recover it. 00:35:15.211 [2024-11-02 11:47:15.490900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.211 [2024-11-02 11:47:15.490929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.211 qpair failed and we were unable to recover it. 00:35:15.211 [2024-11-02 11:47:15.491086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.211 [2024-11-02 11:47:15.491117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.211 qpair failed and we were unable to recover it. 00:35:15.211 [2024-11-02 11:47:15.491285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.211 [2024-11-02 11:47:15.491315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.211 qpair failed and we were unable to recover it. 
00:35:15.211 [2024-11-02 11:47:15.491489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.211 [2024-11-02 11:47:15.491516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.211 qpair failed and we were unable to recover it. 00:35:15.211 [2024-11-02 11:47:15.491695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.211 [2024-11-02 11:47:15.491722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.211 qpair failed and we were unable to recover it. 00:35:15.211 [2024-11-02 11:47:15.491921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.211 [2024-11-02 11:47:15.491950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.211 qpair failed and we were unable to recover it. 00:35:15.211 [2024-11-02 11:47:15.492114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.211 [2024-11-02 11:47:15.492143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.211 qpair failed and we were unable to recover it. 00:35:15.211 [2024-11-02 11:47:15.492269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.211 [2024-11-02 11:47:15.492313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.211 qpair failed and we were unable to recover it. 00:35:15.211 [2024-11-02 11:47:15.492498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.211 [2024-11-02 11:47:15.492524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.211 qpair failed and we were unable to recover it. 00:35:15.211 [2024-11-02 11:47:15.492698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.211 [2024-11-02 11:47:15.492728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.211 qpair failed and we were unable to recover it. 00:35:15.211 [2024-11-02 11:47:15.492886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.211 [2024-11-02 11:47:15.492916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.211 qpair failed and we were unable to recover it. 00:35:15.211 [2024-11-02 11:47:15.493083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.211 [2024-11-02 11:47:15.493112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.211 qpair failed and we were unable to recover it. 00:35:15.211 [2024-11-02 11:47:15.493247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.211 [2024-11-02 11:47:15.493299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.211 qpair failed and we were unable to recover it. 
00:35:15.211 [2024-11-02 11:47:15.493416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.211 [2024-11-02 11:47:15.493443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.211 qpair failed and we were unable to recover it. 00:35:15.211 [2024-11-02 11:47:15.493585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.211 [2024-11-02 11:47:15.493619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.211 qpair failed and we were unable to recover it. 00:35:15.211 [2024-11-02 11:47:15.493764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.211 [2024-11-02 11:47:15.493812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.211 qpair failed and we were unable to recover it. 00:35:15.211 [2024-11-02 11:47:15.493976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.211 [2024-11-02 11:47:15.494009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.211 qpair failed and we were unable to recover it. 00:35:15.211 [2024-11-02 11:47:15.494207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.211 [2024-11-02 11:47:15.494235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.211 qpair failed and we were unable to recover it. 00:35:15.211 [2024-11-02 11:47:15.494400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.211 [2024-11-02 11:47:15.494428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.211 qpair failed and we were unable to recover it. 00:35:15.211 [2024-11-02 11:47:15.494590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.211 [2024-11-02 11:47:15.494620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.211 qpair failed and we were unable to recover it. 00:35:15.211 [2024-11-02 11:47:15.494784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.211 [2024-11-02 11:47:15.494813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.211 qpair failed and we were unable to recover it. 00:35:15.211 [2024-11-02 11:47:15.494982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.211 [2024-11-02 11:47:15.495013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.211 qpair failed and we were unable to recover it. 00:35:15.211 [2024-11-02 11:47:15.495184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.211 [2024-11-02 11:47:15.495214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.211 qpair failed and we were unable to recover it. 
00:35:15.211 [2024-11-02 11:47:15.495395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.211 [2024-11-02 11:47:15.495423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.211 qpair failed and we were unable to recover it. 00:35:15.211 [2024-11-02 11:47:15.495611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.211 [2024-11-02 11:47:15.495640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.211 qpair failed and we were unable to recover it. 00:35:15.211 [2024-11-02 11:47:15.495797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.211 [2024-11-02 11:47:15.495827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.211 qpair failed and we were unable to recover it. 00:35:15.211 [2024-11-02 11:47:15.496082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.211 [2024-11-02 11:47:15.496112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.211 qpair failed and we were unable to recover it. 00:35:15.211 [2024-11-02 11:47:15.496245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.212 [2024-11-02 11:47:15.496284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.212 qpair failed and we were unable to recover it. 00:35:15.212 [2024-11-02 11:47:15.496467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.212 [2024-11-02 11:47:15.496494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.212 qpair failed and we were unable to recover it. 00:35:15.212 [2024-11-02 11:47:15.496732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.212 [2024-11-02 11:47:15.496761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.212 qpair failed and we were unable to recover it. 00:35:15.212 [2024-11-02 11:47:15.496945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.212 [2024-11-02 11:47:15.496986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.212 qpair failed and we were unable to recover it. 00:35:15.212 [2024-11-02 11:47:15.497146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.212 [2024-11-02 11:47:15.497176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.212 qpair failed and we were unable to recover it. 00:35:15.212 [2024-11-02 11:47:15.497365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.212 [2024-11-02 11:47:15.497393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.212 qpair failed and we were unable to recover it. 
00:35:15.212 [2024-11-02 11:47:15.497560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.212 [2024-11-02 11:47:15.497589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.212 qpair failed and we were unable to recover it. 00:35:15.212 [2024-11-02 11:47:15.497744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.212 [2024-11-02 11:47:15.497774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.212 qpair failed and we were unable to recover it. 00:35:15.212 [2024-11-02 11:47:15.497986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.212 [2024-11-02 11:47:15.498027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.212 qpair failed and we were unable to recover it. 00:35:15.212 [2024-11-02 11:47:15.498217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.212 [2024-11-02 11:47:15.498246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.212 qpair failed and we were unable to recover it. 00:35:15.212 [2024-11-02 11:47:15.498447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.212 [2024-11-02 11:47:15.498474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.212 qpair failed and we were unable to recover it. 00:35:15.212 [2024-11-02 11:47:15.498597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.212 [2024-11-02 11:47:15.498624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.212 qpair failed and we were unable to recover it. 00:35:15.212 [2024-11-02 11:47:15.498740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.212 [2024-11-02 11:47:15.498782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.212 qpair failed and we were unable to recover it. 00:35:15.212 [2024-11-02 11:47:15.498971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.212 [2024-11-02 11:47:15.499000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.212 qpair failed and we were unable to recover it. 00:35:15.212 [2024-11-02 11:47:15.499262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.212 [2024-11-02 11:47:15.499307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.212 qpair failed and we were unable to recover it. 00:35:15.212 [2024-11-02 11:47:15.499433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.212 [2024-11-02 11:47:15.499460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.212 qpair failed and we were unable to recover it. 
00:35:15.212 [2024-11-02 11:47:15.499672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.212 [2024-11-02 11:47:15.499702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420
00:35:15.212 qpair failed and we were unable to recover it.
00:35:15.212 [2024-11-02 11:47:15.502181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.212 [2024-11-02 11:47:15.502222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420
00:35:15.212 qpair failed and we were unable to recover it.
00:35:15.213 [2024-11-02 11:47:15.507584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.213 [2024-11-02 11:47:15.507628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420
00:35:15.213 qpair failed and we were unable to recover it.
00:35:15.214 [2024-11-02 11:47:15.515295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.214 [2024-11-02 11:47:15.515350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420
00:35:15.214 qpair failed and we were unable to recover it.
00:35:15.514 [repetitive output condensed: the same three-line failure (connect() failed, errno = 111 / sock connection error / qpair failed and we were unable to recover it) repeats continuously from 11:47:15.499 to 11:47:15.545 for tqpairs 0x7f9ccc000b90, 0x7f9cc4000b90, 0x7f9cc0000b90, and 0x1ddc690, all targeting addr=10.0.0.2, port=4420]
00:35:15.514 [2024-11-02 11:47:15.546019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.514 [2024-11-02 11:47:15.546045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.514 qpair failed and we were unable to recover it. 00:35:15.514 [2024-11-02 11:47:15.546193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.514 [2024-11-02 11:47:15.546220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.514 qpair failed and we were unable to recover it. 00:35:15.514 [2024-11-02 11:47:15.546421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.514 [2024-11-02 11:47:15.546452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.514 qpair failed and we were unable to recover it. 00:35:15.514 [2024-11-02 11:47:15.546659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.514 [2024-11-02 11:47:15.546708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.514 qpair failed and we were unable to recover it. 00:35:15.514 [2024-11-02 11:47:15.546971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.514 [2024-11-02 11:47:15.547022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.514 qpair failed and we were unable to recover it. 00:35:15.514 [2024-11-02 11:47:15.547193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.514 [2024-11-02 11:47:15.547221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.514 qpair failed and we were unable to recover it. 00:35:15.514 [2024-11-02 11:47:15.547387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.514 [2024-11-02 11:47:15.547418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.514 qpair failed and we were unable to recover it. 00:35:15.514 [2024-11-02 11:47:15.547611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.514 [2024-11-02 11:47:15.547640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.514 qpair failed and we were unable to recover it. 00:35:15.514 [2024-11-02 11:47:15.547834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.514 [2024-11-02 11:47:15.547894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.514 qpair failed and we were unable to recover it. 00:35:15.514 [2024-11-02 11:47:15.548057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.514 [2024-11-02 11:47:15.548085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.514 qpair failed and we were unable to recover it. 
00:35:15.514 [2024-11-02 11:47:15.548262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.514 [2024-11-02 11:47:15.548305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.514 qpair failed and we were unable to recover it. 00:35:15.514 [2024-11-02 11:47:15.548472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.514 [2024-11-02 11:47:15.548504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.514 qpair failed and we were unable to recover it. 00:35:15.514 [2024-11-02 11:47:15.548737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.514 [2024-11-02 11:47:15.548788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.514 qpair failed and we were unable to recover it. 00:35:15.514 [2024-11-02 11:47:15.549102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.514 [2024-11-02 11:47:15.549157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.514 qpair failed and we were unable to recover it. 00:35:15.514 [2024-11-02 11:47:15.549341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.514 [2024-11-02 11:47:15.549370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.514 qpair failed and we were unable to recover it. 00:35:15.514 [2024-11-02 11:47:15.549627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.514 [2024-11-02 11:47:15.549656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.515 qpair failed and we were unable to recover it. 00:35:15.515 [2024-11-02 11:47:15.549857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.515 [2024-11-02 11:47:15.549886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.515 qpair failed and we were unable to recover it. 00:35:15.515 [2024-11-02 11:47:15.550060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.515 [2024-11-02 11:47:15.550087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.515 qpair failed and we were unable to recover it. 00:35:15.515 [2024-11-02 11:47:15.550206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.515 [2024-11-02 11:47:15.550233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.515 qpair failed and we were unable to recover it. 00:35:15.515 [2024-11-02 11:47:15.550411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.515 [2024-11-02 11:47:15.550441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.515 qpair failed and we were unable to recover it. 
00:35:15.515 [2024-11-02 11:47:15.550644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.515 [2024-11-02 11:47:15.550674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.515 qpair failed and we were unable to recover it. 00:35:15.515 [2024-11-02 11:47:15.550903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.515 [2024-11-02 11:47:15.550961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.515 qpair failed and we were unable to recover it. 00:35:15.515 [2024-11-02 11:47:15.551129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.515 [2024-11-02 11:47:15.551156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.515 qpair failed and we were unable to recover it. 00:35:15.515 [2024-11-02 11:47:15.551280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.515 [2024-11-02 11:47:15.551326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.515 qpair failed and we were unable to recover it. 00:35:15.515 [2024-11-02 11:47:15.551512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.515 [2024-11-02 11:47:15.551541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.515 qpair failed and we were unable to recover it. 00:35:15.515 [2024-11-02 11:47:15.551697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.515 [2024-11-02 11:47:15.551726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.515 qpair failed and we were unable to recover it. 00:35:15.515 [2024-11-02 11:47:15.551918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.515 [2024-11-02 11:47:15.551945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.515 qpair failed and we were unable to recover it. 00:35:15.515 [2024-11-02 11:47:15.552058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.515 [2024-11-02 11:47:15.552084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.515 qpair failed and we were unable to recover it. 00:35:15.515 [2024-11-02 11:47:15.552228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.515 [2024-11-02 11:47:15.552263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.515 qpair failed and we were unable to recover it. 00:35:15.515 [2024-11-02 11:47:15.552391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.515 [2024-11-02 11:47:15.552418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.515 qpair failed and we were unable to recover it. 
00:35:15.515 [2024-11-02 11:47:15.552569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.515 [2024-11-02 11:47:15.552596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.515 qpair failed and we were unable to recover it. 00:35:15.515 [2024-11-02 11:47:15.552743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.515 [2024-11-02 11:47:15.552769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.515 qpair failed and we were unable to recover it. 00:35:15.515 [2024-11-02 11:47:15.552915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.515 [2024-11-02 11:47:15.552942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.515 qpair failed and we were unable to recover it. 00:35:15.515 [2024-11-02 11:47:15.553088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.515 [2024-11-02 11:47:15.553115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.515 qpair failed and we were unable to recover it. 00:35:15.515 [2024-11-02 11:47:15.553278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.515 [2024-11-02 11:47:15.553331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.515 qpair failed and we were unable to recover it. 00:35:15.515 [2024-11-02 11:47:15.553484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.515 [2024-11-02 11:47:15.553510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.515 qpair failed and we were unable to recover it. 00:35:15.515 [2024-11-02 11:47:15.553701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.515 [2024-11-02 11:47:15.553730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.515 qpair failed and we were unable to recover it. 00:35:15.515 [2024-11-02 11:47:15.553930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.515 [2024-11-02 11:47:15.553959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.515 qpair failed and we were unable to recover it. 00:35:15.515 [2024-11-02 11:47:15.554157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.515 [2024-11-02 11:47:15.554186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.515 qpair failed and we were unable to recover it. 00:35:15.515 [2024-11-02 11:47:15.554374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.515 [2024-11-02 11:47:15.554405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.515 qpair failed and we were unable to recover it. 
00:35:15.515 [2024-11-02 11:47:15.554560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.515 [2024-11-02 11:47:15.554589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.515 qpair failed and we were unable to recover it. 00:35:15.515 [2024-11-02 11:47:15.554744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.515 [2024-11-02 11:47:15.554773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.515 qpair failed and we were unable to recover it. 00:35:15.515 [2024-11-02 11:47:15.554942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.515 [2024-11-02 11:47:15.554969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.515 qpair failed and we were unable to recover it. 00:35:15.515 [2024-11-02 11:47:15.555113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.515 [2024-11-02 11:47:15.555145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.515 qpair failed and we were unable to recover it. 00:35:15.515 [2024-11-02 11:47:15.555272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.515 [2024-11-02 11:47:15.555325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.515 qpair failed and we were unable to recover it. 00:35:15.515 [2024-11-02 11:47:15.555469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.515 [2024-11-02 11:47:15.555495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.515 qpair failed and we were unable to recover it. 00:35:15.515 [2024-11-02 11:47:15.555621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.515 [2024-11-02 11:47:15.555647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.515 qpair failed and we were unable to recover it. 00:35:15.515 [2024-11-02 11:47:15.555801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.515 [2024-11-02 11:47:15.555828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.515 qpair failed and we were unable to recover it. 00:35:15.515 [2024-11-02 11:47:15.556002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.515 [2024-11-02 11:47:15.556028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.515 qpair failed and we were unable to recover it. 00:35:15.515 [2024-11-02 11:47:15.556147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.515 [2024-11-02 11:47:15.556174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.515 qpair failed and we were unable to recover it. 
00:35:15.515 [2024-11-02 11:47:15.556298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.515 [2024-11-02 11:47:15.556325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.515 qpair failed and we were unable to recover it. 00:35:15.515 [2024-11-02 11:47:15.556497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.515 [2024-11-02 11:47:15.556523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.515 qpair failed and we were unable to recover it. 00:35:15.515 [2024-11-02 11:47:15.556672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.515 [2024-11-02 11:47:15.556698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.515 qpair failed and we were unable to recover it. 00:35:15.515 [2024-11-02 11:47:15.556817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.516 [2024-11-02 11:47:15.556843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.516 qpair failed and we were unable to recover it. 00:35:15.516 [2024-11-02 11:47:15.557020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.516 [2024-11-02 11:47:15.557050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.516 qpair failed and we were unable to recover it. 00:35:15.516 [2024-11-02 11:47:15.557302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.516 [2024-11-02 11:47:15.557345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.516 qpair failed and we were unable to recover it. 00:35:15.516 [2024-11-02 11:47:15.557471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.516 [2024-11-02 11:47:15.557498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.516 qpair failed and we were unable to recover it. 00:35:15.516 [2024-11-02 11:47:15.557656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.516 [2024-11-02 11:47:15.557683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.516 qpair failed and we were unable to recover it. 00:35:15.516 [2024-11-02 11:47:15.557930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.516 [2024-11-02 11:47:15.557981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.516 qpair failed and we were unable to recover it. 00:35:15.516 [2024-11-02 11:47:15.558214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.516 [2024-11-02 11:47:15.558243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.516 qpair failed and we were unable to recover it. 
00:35:15.516 [2024-11-02 11:47:15.558400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.516 [2024-11-02 11:47:15.558431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.516 qpair failed and we were unable to recover it. 00:35:15.516 [2024-11-02 11:47:15.558603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.516 [2024-11-02 11:47:15.558636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.516 qpair failed and we were unable to recover it. 00:35:15.516 [2024-11-02 11:47:15.558883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.516 [2024-11-02 11:47:15.558915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.516 qpair failed and we were unable to recover it. 00:35:15.516 [2024-11-02 11:47:15.559181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.516 [2024-11-02 11:47:15.559232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.516 qpair failed and we were unable to recover it. 00:35:15.516 [2024-11-02 11:47:15.559397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.516 [2024-11-02 11:47:15.559437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.516 qpair failed and we were unable to recover it. 00:35:15.516 [2024-11-02 11:47:15.559608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.516 [2024-11-02 11:47:15.559640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.516 qpair failed and we were unable to recover it. 00:35:15.516 [2024-11-02 11:47:15.559814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.516 [2024-11-02 11:47:15.559841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.516 qpair failed and we were unable to recover it. 00:35:15.516 [2024-11-02 11:47:15.559992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.516 [2024-11-02 11:47:15.560019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.516 qpair failed and we were unable to recover it. 00:35:15.516 [2024-11-02 11:47:15.560177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.516 [2024-11-02 11:47:15.560207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.516 qpair failed and we were unable to recover it. 00:35:15.516 [2024-11-02 11:47:15.560417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.516 [2024-11-02 11:47:15.560448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.516 qpair failed and we were unable to recover it. 
00:35:15.516 [2024-11-02 11:47:15.560629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.516 [2024-11-02 11:47:15.560659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.516 qpair failed and we were unable to recover it. 00:35:15.516 [2024-11-02 11:47:15.560871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.516 [2024-11-02 11:47:15.560911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.516 qpair failed and we were unable to recover it. 00:35:15.516 [2024-11-02 11:47:15.561093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.516 [2024-11-02 11:47:15.561122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.516 qpair failed and we were unable to recover it. 00:35:15.516 [2024-11-02 11:47:15.561277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.516 [2024-11-02 11:47:15.561307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.516 qpair failed and we were unable to recover it. 00:35:15.516 [2024-11-02 11:47:15.561452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.516 [2024-11-02 11:47:15.561498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.516 qpair failed and we were unable to recover it. 00:35:15.516 [2024-11-02 11:47:15.561825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.516 [2024-11-02 11:47:15.561875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.516 qpair failed and we were unable to recover it. 00:35:15.516 [2024-11-02 11:47:15.562028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.516 [2024-11-02 11:47:15.562055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.516 qpair failed and we were unable to recover it. 00:35:15.516 [2024-11-02 11:47:15.562231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.516 [2024-11-02 11:47:15.562264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.516 qpair failed and we were unable to recover it. 00:35:15.516 [2024-11-02 11:47:15.562412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.516 [2024-11-02 11:47:15.562456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.516 qpair failed and we were unable to recover it. 00:35:15.516 [2024-11-02 11:47:15.562636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.516 [2024-11-02 11:47:15.562680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.516 qpair failed and we were unable to recover it. 
00:35:15.516 [2024-11-02 11:47:15.562899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.516 [2024-11-02 11:47:15.562927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.516 qpair failed and we were unable to recover it. 00:35:15.516 [2024-11-02 11:47:15.563080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.516 [2024-11-02 11:47:15.563107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.516 qpair failed and we were unable to recover it. 00:35:15.516 [2024-11-02 11:47:15.563261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.516 [2024-11-02 11:47:15.563289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.516 qpair failed and we were unable to recover it. 00:35:15.516 [2024-11-02 11:47:15.563425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.516 [2024-11-02 11:47:15.563475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.516 qpair failed and we were unable to recover it. 00:35:15.516 [2024-11-02 11:47:15.563686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.516 [2024-11-02 11:47:15.563730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.516 qpair failed and we were unable to recover it. 00:35:15.516 [2024-11-02 11:47:15.563904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.516 [2024-11-02 11:47:15.563931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.516 qpair failed and we were unable to recover it. 00:35:15.516 [2024-11-02 11:47:15.564056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.516 [2024-11-02 11:47:15.564082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.516 qpair failed and we were unable to recover it. 00:35:15.516 [2024-11-02 11:47:15.564228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.516 [2024-11-02 11:47:15.564260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.516 qpair failed and we were unable to recover it. 00:35:15.516 [2024-11-02 11:47:15.564412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.516 [2024-11-02 11:47:15.564439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.516 qpair failed and we were unable to recover it. 00:35:15.516 [2024-11-02 11:47:15.564613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.516 [2024-11-02 11:47:15.564638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.516 qpair failed and we were unable to recover it. 
00:35:15.516 [2024-11-02 11:47:15.564790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.516 [2024-11-02 11:47:15.564816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.516 qpair failed and we were unable to recover it. 00:35:15.516 [2024-11-02 11:47:15.564933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.516 [2024-11-02 11:47:15.564959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.517 qpair failed and we were unable to recover it. 00:35:15.517 [2024-11-02 11:47:15.565100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.517 [2024-11-02 11:47:15.565126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.517 qpair failed and we were unable to recover it. 00:35:15.517 [2024-11-02 11:47:15.565282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.517 [2024-11-02 11:47:15.565321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.517 qpair failed and we were unable to recover it. 00:35:15.517 [2024-11-02 11:47:15.565519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.517 [2024-11-02 11:47:15.565548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.517 qpair failed and we were unable to recover it. 00:35:15.517 [2024-11-02 11:47:15.565820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.517 [2024-11-02 11:47:15.565869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.517 qpair failed and we were unable to recover it. 00:35:15.517 [2024-11-02 11:47:15.565997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.517 [2024-11-02 11:47:15.566023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.517 qpair failed and we were unable to recover it. 00:35:15.517 [2024-11-02 11:47:15.566204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.517 [2024-11-02 11:47:15.566230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.517 qpair failed and we were unable to recover it. 00:35:15.517 [2024-11-02 11:47:15.566417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.517 [2024-11-02 11:47:15.566462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.517 qpair failed and we were unable to recover it. 00:35:15.517 [2024-11-02 11:47:15.566634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.517 [2024-11-02 11:47:15.566681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.517 qpair failed and we were unable to recover it. 
00:35:15.517 [2024-11-02 11:47:15.566851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.517 [2024-11-02 11:47:15.566894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.517 qpair failed and we were unable to recover it. 00:35:15.517 [2024-11-02 11:47:15.567067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.517 [2024-11-02 11:47:15.567093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.517 qpair failed and we were unable to recover it. 00:35:15.517 [2024-11-02 11:47:15.567269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.517 [2024-11-02 11:47:15.567306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.517 qpair failed and we were unable to recover it. 00:35:15.517 [2024-11-02 11:47:15.567508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.517 [2024-11-02 11:47:15.567538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.517 qpair failed and we were unable to recover it. 00:35:15.517 [2024-11-02 11:47:15.567725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.517 [2024-11-02 11:47:15.567769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.517 qpair failed and we were unable to recover it. 00:35:15.517 [2024-11-02 11:47:15.567937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.517 [2024-11-02 11:47:15.567979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.517 qpair failed and we were unable to recover it. 00:35:15.517 [2024-11-02 11:47:15.568130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.517 [2024-11-02 11:47:15.568155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.517 qpair failed and we were unable to recover it. 00:35:15.517 [2024-11-02 11:47:15.568321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.517 [2024-11-02 11:47:15.568367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.517 qpair failed and we were unable to recover it. 00:35:15.517 [2024-11-02 11:47:15.568571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.517 [2024-11-02 11:47:15.568616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.517 qpair failed and we were unable to recover it. 00:35:15.517 [2024-11-02 11:47:15.568785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.517 [2024-11-02 11:47:15.568833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.517 qpair failed and we were unable to recover it. 
00:35:15.517 [2024-11-02 11:47:15.568988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.517 [2024-11-02 11:47:15.569015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.517 qpair failed and we were unable to recover it. 00:35:15.517 [2024-11-02 11:47:15.569167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.517 [2024-11-02 11:47:15.569193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.517 qpair failed and we were unable to recover it. 00:35:15.517 [2024-11-02 11:47:15.569372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.517 [2024-11-02 11:47:15.569399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.517 qpair failed and we were unable to recover it. 00:35:15.517 [2024-11-02 11:47:15.569540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.517 [2024-11-02 11:47:15.569586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.517 qpair failed and we were unable to recover it. 00:35:15.517 [2024-11-02 11:47:15.569727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.517 [2024-11-02 11:47:15.569770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.517 qpair failed and we were unable to recover it. 00:35:15.517 [2024-11-02 11:47:15.569968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.517 [2024-11-02 11:47:15.570012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.517 qpair failed and we were unable to recover it. 00:35:15.517 [2024-11-02 11:47:15.570164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.517 [2024-11-02 11:47:15.570190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.517 qpair failed and we were unable to recover it. 00:35:15.517 [2024-11-02 11:47:15.570384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.517 [2024-11-02 11:47:15.570433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.517 qpair failed and we were unable to recover it. 00:35:15.517 [2024-11-02 11:47:15.570608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.517 [2024-11-02 11:47:15.570652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.517 qpair failed and we were unable to recover it. 00:35:15.517 [2024-11-02 11:47:15.570972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.517 [2024-11-02 11:47:15.571021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.517 qpair failed and we were unable to recover it. 
00:35:15.517 [2024-11-02 11:47:15.571197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.517 [2024-11-02 11:47:15.571223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.517 qpair failed and we were unable to recover it. 00:35:15.517 [2024-11-02 11:47:15.571406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.517 [2024-11-02 11:47:15.571451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.517 qpair failed and we were unable to recover it. 00:35:15.517 [2024-11-02 11:47:15.571609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.517 [2024-11-02 11:47:15.571653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.517 qpair failed and we were unable to recover it. 00:35:15.517 [2024-11-02 11:47:15.571823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.517 [2024-11-02 11:47:15.571856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.517 qpair failed and we were unable to recover it. 00:35:15.517 [2024-11-02 11:47:15.572039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.517 [2024-11-02 11:47:15.572065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.517 qpair failed and we were unable to recover it. 00:35:15.517 [2024-11-02 11:47:15.572185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.517 [2024-11-02 11:47:15.572211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.517 qpair failed and we were unable to recover it. 00:35:15.517 [2024-11-02 11:47:15.572359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.517 [2024-11-02 11:47:15.572405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.517 qpair failed and we were unable to recover it. 00:35:15.517 [2024-11-02 11:47:15.572613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.517 [2024-11-02 11:47:15.572642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.517 qpair failed and we were unable to recover it. 00:35:15.517 [2024-11-02 11:47:15.572777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.517 [2024-11-02 11:47:15.572804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.517 qpair failed and we were unable to recover it. 00:35:15.517 [2024-11-02 11:47:15.572958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.517 [2024-11-02 11:47:15.572984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.517 qpair failed and we were unable to recover it. 
00:35:15.518 [2024-11-02 11:47:15.573131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.518 [2024-11-02 11:47:15.573157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.518 qpair failed and we were unable to recover it. 00:35:15.518 [2024-11-02 11:47:15.573335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.518 [2024-11-02 11:47:15.573379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.518 qpair failed and we were unable to recover it. 00:35:15.518 [2024-11-02 11:47:15.573521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.518 [2024-11-02 11:47:15.573552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.518 qpair failed and we were unable to recover it. 00:35:15.518 [2024-11-02 11:47:15.573693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.518 [2024-11-02 11:47:15.573724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.518 qpair failed and we were unable to recover it. 00:35:15.518 [2024-11-02 11:47:15.573957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.518 [2024-11-02 11:47:15.573983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.518 qpair failed and we were unable to recover it. 00:35:15.518 [2024-11-02 11:47:15.574158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.518 [2024-11-02 11:47:15.574184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.518 qpair failed and we were unable to recover it. 00:35:15.518 [2024-11-02 11:47:15.574358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.518 [2024-11-02 11:47:15.574385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.518 qpair failed and we were unable to recover it. 00:35:15.518 [2024-11-02 11:47:15.574560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.518 [2024-11-02 11:47:15.574589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.518 qpair failed and we were unable to recover it. 00:35:15.518 [2024-11-02 11:47:15.574732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.518 [2024-11-02 11:47:15.574763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.518 qpair failed and we were unable to recover it. 00:35:15.518 [2024-11-02 11:47:15.574959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.518 [2024-11-02 11:47:15.574988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.518 qpair failed and we were unable to recover it. 
00:35:15.518 [2024-11-02 11:47:15.575201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.518 [2024-11-02 11:47:15.575230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.518 qpair failed and we were unable to recover it. 00:35:15.518 [2024-11-02 11:47:15.575433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.518 [2024-11-02 11:47:15.575471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.518 qpair failed and we were unable to recover it. 00:35:15.518 [2024-11-02 11:47:15.575652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.518 [2024-11-02 11:47:15.575685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.518 qpair failed and we were unable to recover it. 00:35:15.518 [2024-11-02 11:47:15.576011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.518 [2024-11-02 11:47:15.576063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.518 qpair failed and we were unable to recover it. 00:35:15.518 [2024-11-02 11:47:15.576265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.518 [2024-11-02 11:47:15.576293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.518 qpair failed and we were unable to recover it. 00:35:15.518 [2024-11-02 11:47:15.576473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.518 [2024-11-02 11:47:15.576503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.518 qpair failed and we were unable to recover it. 00:35:15.518 [2024-11-02 11:47:15.576801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.518 [2024-11-02 11:47:15.576851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.518 qpair failed and we were unable to recover it. 00:35:15.518 [2024-11-02 11:47:15.577126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.518 [2024-11-02 11:47:15.577175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.518 qpair failed and we were unable to recover it. 00:35:15.518 [2024-11-02 11:47:15.577369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.518 [2024-11-02 11:47:15.577399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.518 qpair failed and we were unable to recover it. 00:35:15.518 [2024-11-02 11:47:15.577566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.518 [2024-11-02 11:47:15.577592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.518 qpair failed and we were unable to recover it. 
00:35:15.518 [2024-11-02 11:47:15.577743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.518 [2024-11-02 11:47:15.577792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.518 qpair failed and we were unable to recover it. 00:35:15.518 [2024-11-02 11:47:15.577941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.518 [2024-11-02 11:47:15.577968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.518 qpair failed and we were unable to recover it. 00:35:15.518 [2024-11-02 11:47:15.578117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.518 [2024-11-02 11:47:15.578142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.518 qpair failed and we were unable to recover it. 00:35:15.518 [2024-11-02 11:47:15.578340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.518 [2024-11-02 11:47:15.578385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.518 qpair failed and we were unable to recover it. 00:35:15.518 [2024-11-02 11:47:15.578557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.518 [2024-11-02 11:47:15.578600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.518 qpair failed and we were unable to recover it. 00:35:15.518 [2024-11-02 11:47:15.578778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.518 [2024-11-02 11:47:15.578804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.518 qpair failed and we were unable to recover it. 00:35:15.518 [2024-11-02 11:47:15.578955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.518 [2024-11-02 11:47:15.578983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.518 qpair failed and we were unable to recover it. 00:35:15.518 [2024-11-02 11:47:15.579128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.518 [2024-11-02 11:47:15.579153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.518 qpair failed and we were unable to recover it. 00:35:15.518 [2024-11-02 11:47:15.579315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.518 [2024-11-02 11:47:15.579344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.518 qpair failed and we were unable to recover it. 00:35:15.518 [2024-11-02 11:47:15.579526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.518 [2024-11-02 11:47:15.579553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.518 qpair failed and we were unable to recover it. 
00:35:15.518 [2024-11-02 11:47:15.579709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.518 [2024-11-02 11:47:15.579735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.518 qpair failed and we were unable to recover it. 00:35:15.518 [2024-11-02 11:47:15.579878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.518 [2024-11-02 11:47:15.579920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.519 qpair failed and we were unable to recover it. 00:35:15.519 [2024-11-02 11:47:15.580072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.519 [2024-11-02 11:47:15.580100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.519 qpair failed and we were unable to recover it. 00:35:15.519 [2024-11-02 11:47:15.580278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.519 [2024-11-02 11:47:15.580313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.519 qpair failed and we were unable to recover it. 00:35:15.519 [2024-11-02 11:47:15.580457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.519 [2024-11-02 11:47:15.580504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.519 qpair failed and we were unable to recover it. 00:35:15.519 [2024-11-02 11:47:15.580683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.519 [2024-11-02 11:47:15.580726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.519 qpair failed and we were unable to recover it. 00:35:15.519 [2024-11-02 11:47:15.580894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.519 [2024-11-02 11:47:15.580940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.519 qpair failed and we were unable to recover it. 00:35:15.519 [2024-11-02 11:47:15.581099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.519 [2024-11-02 11:47:15.581138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.519 qpair failed and we were unable to recover it. 00:35:15.519 [2024-11-02 11:47:15.581337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.519 [2024-11-02 11:47:15.581370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.519 qpair failed and we were unable to recover it. 00:35:15.519 [2024-11-02 11:47:15.581545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.519 [2024-11-02 11:47:15.581575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.519 qpair failed and we were unable to recover it. 
00:35:15.519 [2024-11-02 11:47:15.581737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.519 [2024-11-02 11:47:15.581768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.519 qpair failed and we were unable to recover it. 00:35:15.519 [2024-11-02 11:47:15.581930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.519 [2024-11-02 11:47:15.581961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.519 qpair failed and we were unable to recover it. 00:35:15.519 [2024-11-02 11:47:15.582126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.519 [2024-11-02 11:47:15.582155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.519 qpair failed and we were unable to recover it. 00:35:15.519 [2024-11-02 11:47:15.582309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.519 [2024-11-02 11:47:15.582337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.519 qpair failed and we were unable to recover it. 00:35:15.519 [2024-11-02 11:47:15.582485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.519 [2024-11-02 11:47:15.582535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.519 qpair failed and we were unable to recover it. 00:35:15.519 [2024-11-02 11:47:15.582673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.519 [2024-11-02 11:47:15.582702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.519 qpair failed and we were unable to recover it. 00:35:15.519 [2024-11-02 11:47:15.582873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.519 [2024-11-02 11:47:15.582916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.519 qpair failed and we were unable to recover it. 00:35:15.519 [2024-11-02 11:47:15.583065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.519 [2024-11-02 11:47:15.583091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.519 qpair failed and we were unable to recover it. 00:35:15.519 [2024-11-02 11:47:15.583228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.519 [2024-11-02 11:47:15.583254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.519 qpair failed and we were unable to recover it. 00:35:15.519 [2024-11-02 11:47:15.583408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.519 [2024-11-02 11:47:15.583456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.519 qpair failed and we were unable to recover it. 
00:35:15.519 [2024-11-02 11:47:15.583686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.519 [2024-11-02 11:47:15.583735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.519 qpair failed and we were unable to recover it. 00:35:15.519 [2024-11-02 11:47:15.583907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.519 [2024-11-02 11:47:15.583956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.519 qpair failed and we were unable to recover it. 00:35:15.519 [2024-11-02 11:47:15.584106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.519 [2024-11-02 11:47:15.584132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.519 qpair failed and we were unable to recover it. 00:35:15.519 [2024-11-02 11:47:15.584281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.519 [2024-11-02 11:47:15.584319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.519 qpair failed and we were unable to recover it. 00:35:15.519 [2024-11-02 11:47:15.584491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.519 [2024-11-02 11:47:15.584534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.519 qpair failed and we were unable to recover it. 00:35:15.519 [2024-11-02 11:47:15.584851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.519 [2024-11-02 11:47:15.584900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.519 qpair failed and we were unable to recover it. 00:35:15.519 [2024-11-02 11:47:15.585048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.519 [2024-11-02 11:47:15.585075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.519 qpair failed and we were unable to recover it. 00:35:15.519 [2024-11-02 11:47:15.585225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.519 [2024-11-02 11:47:15.585250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.519 qpair failed and we were unable to recover it. 00:35:15.519 [2024-11-02 11:47:15.585429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.519 [2024-11-02 11:47:15.585477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.519 qpair failed and we were unable to recover it. 00:35:15.519 [2024-11-02 11:47:15.585762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.519 [2024-11-02 11:47:15.585814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.519 qpair failed and we were unable to recover it. 
00:35:15.519 [2024-11-02 11:47:15.585990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.519 [2024-11-02 11:47:15.586034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.519 qpair failed and we were unable to recover it. 00:35:15.519 [2024-11-02 11:47:15.586181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.519 [2024-11-02 11:47:15.586208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.519 qpair failed and we were unable to recover it. 00:35:15.519 [2024-11-02 11:47:15.586391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.519 [2024-11-02 11:47:15.586435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.519 qpair failed and we were unable to recover it. 00:35:15.519 [2024-11-02 11:47:15.586609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.519 [2024-11-02 11:47:15.586654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.519 qpair failed and we were unable to recover it. 00:35:15.519 [2024-11-02 11:47:15.586821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.519 [2024-11-02 11:47:15.586864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.519 qpair failed and we were unable to recover it. 00:35:15.519 [2024-11-02 11:47:15.587020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.519 [2024-11-02 11:47:15.587046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.519 qpair failed and we were unable to recover it. 00:35:15.519 [2024-11-02 11:47:15.587194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.519 [2024-11-02 11:47:15.587220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.519 qpair failed and we were unable to recover it. 00:35:15.519 [2024-11-02 11:47:15.587423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.519 [2024-11-02 11:47:15.587466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.519 qpair failed and we were unable to recover it. 00:35:15.519 [2024-11-02 11:47:15.587610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.519 [2024-11-02 11:47:15.587652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.519 qpair failed and we were unable to recover it. 00:35:15.519 [2024-11-02 11:47:15.587822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.519 [2024-11-02 11:47:15.587865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.519 qpair failed and we were unable to recover it. 
00:35:15.519 [2024-11-02 11:47:15.588036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.520 [2024-11-02 11:47:15.588063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.520 qpair failed and we were unable to recover it. 00:35:15.520 [2024-11-02 11:47:15.588234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.520 [2024-11-02 11:47:15.588268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.520 qpair failed and we were unable to recover it. 00:35:15.520 [2024-11-02 11:47:15.588464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.520 [2024-11-02 11:47:15.588508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.520 qpair failed and we were unable to recover it. 00:35:15.520 [2024-11-02 11:47:15.588668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.520 [2024-11-02 11:47:15.588752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.520 qpair failed and we were unable to recover it. 00:35:15.520 [2024-11-02 11:47:15.589002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.520 [2024-11-02 11:47:15.589027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.520 qpair failed and we were unable to recover it. 00:35:15.520 [2024-11-02 11:47:15.589176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.520 [2024-11-02 11:47:15.589202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.520 qpair failed and we were unable to recover it. 00:35:15.520 [2024-11-02 11:47:15.589321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.520 [2024-11-02 11:47:15.589347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.520 qpair failed and we were unable to recover it. 00:35:15.520 [2024-11-02 11:47:15.589541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.520 [2024-11-02 11:47:15.589586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.520 qpair failed and we were unable to recover it. 00:35:15.520 [2024-11-02 11:47:15.589745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.520 [2024-11-02 11:47:15.589788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.520 qpair failed and we were unable to recover it. 00:35:15.520 [2024-11-02 11:47:15.589952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.520 [2024-11-02 11:47:15.589994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.520 qpair failed and we were unable to recover it. 
00:35:15.520 [2024-11-02 11:47:15.590148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.520 [2024-11-02 11:47:15.590174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.520 qpair failed and we were unable to recover it. 00:35:15.520 [2024-11-02 11:47:15.590344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.520 [2024-11-02 11:47:15.590392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.520 qpair failed and we were unable to recover it. 00:35:15.520 [2024-11-02 11:47:15.590591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.520 [2024-11-02 11:47:15.590635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.520 qpair failed and we were unable to recover it. 00:35:15.520 [2024-11-02 11:47:15.590801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.520 [2024-11-02 11:47:15.590845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.520 qpair failed and we were unable to recover it. 00:35:15.520 [2024-11-02 11:47:15.590991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.520 [2024-11-02 11:47:15.591019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.520 qpair failed and we were unable to recover it. 00:35:15.520 [2024-11-02 11:47:15.591168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.520 [2024-11-02 11:47:15.591194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.520 qpair failed and we were unable to recover it. 00:35:15.520 [2024-11-02 11:47:15.591339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.520 [2024-11-02 11:47:15.591382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.520 qpair failed and we were unable to recover it. 00:35:15.520 [2024-11-02 11:47:15.591559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.520 [2024-11-02 11:47:15.591604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.520 qpair failed and we were unable to recover it. 00:35:15.520 [2024-11-02 11:47:15.591920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.520 [2024-11-02 11:47:15.591969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.520 qpair failed and we were unable to recover it. 00:35:15.520 [2024-11-02 11:47:15.592119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.520 [2024-11-02 11:47:15.592145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.520 qpair failed and we were unable to recover it. 
00:35:15.520 [2024-11-02 11:47:15.592274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.520 [2024-11-02 11:47:15.592300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.520 qpair failed and we were unable to recover it. 00:35:15.520 [2024-11-02 11:47:15.592497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.520 [2024-11-02 11:47:15.592544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.520 qpair failed and we were unable to recover it. 00:35:15.520 [2024-11-02 11:47:15.592720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.520 [2024-11-02 11:47:15.592763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.520 qpair failed and we were unable to recover it. 00:35:15.520 [2024-11-02 11:47:15.592935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.520 [2024-11-02 11:47:15.592961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.520 qpair failed and we were unable to recover it. 00:35:15.520 [2024-11-02 11:47:15.593134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.520 [2024-11-02 11:47:15.593159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.520 qpair failed and we were unable to recover it. 00:35:15.520 [2024-11-02 11:47:15.593334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.520 [2024-11-02 11:47:15.593378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.520 qpair failed and we were unable to recover it. 00:35:15.520 [2024-11-02 11:47:15.593544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.520 [2024-11-02 11:47:15.593573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.520 qpair failed and we were unable to recover it. 00:35:15.520 [2024-11-02 11:47:15.593785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.520 [2024-11-02 11:47:15.593848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.520 qpair failed and we were unable to recover it. 00:35:15.520 [2024-11-02 11:47:15.593997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.520 [2024-11-02 11:47:15.594022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.520 qpair failed and we were unable to recover it. 00:35:15.520 [2024-11-02 11:47:15.594165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.520 [2024-11-02 11:47:15.594190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.520 qpair failed and we were unable to recover it. 
00:35:15.520 [2024-11-02 11:47:15.594401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.520 [2024-11-02 11:47:15.594446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.520 qpair failed and we were unable to recover it. 00:35:15.520 [2024-11-02 11:47:15.594634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.520 [2024-11-02 11:47:15.594666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.520 qpair failed and we were unable to recover it. 00:35:15.520 [2024-11-02 11:47:15.594826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.520 [2024-11-02 11:47:15.594856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.520 qpair failed and we were unable to recover it. 00:35:15.520 [2024-11-02 11:47:15.595019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.520 [2024-11-02 11:47:15.595048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.520 qpair failed and we were unable to recover it. 00:35:15.520 [2024-11-02 11:47:15.595194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.520 [2024-11-02 11:47:15.595221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.520 qpair failed and we were unable to recover it. 00:35:15.520 [2024-11-02 11:47:15.595383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.520 [2024-11-02 11:47:15.595411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.520 qpair failed and we were unable to recover it. 00:35:15.520 [2024-11-02 11:47:15.595586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.520 [2024-11-02 11:47:15.595617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.520 qpair failed and we were unable to recover it. 00:35:15.520 [2024-11-02 11:47:15.595749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.520 [2024-11-02 11:47:15.595779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.520 qpair failed and we were unable to recover it. 00:35:15.520 [2024-11-02 11:47:15.595946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.521 [2024-11-02 11:47:15.595976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.521 qpair failed and we were unable to recover it. 00:35:15.521 [2024-11-02 11:47:15.596140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.521 [2024-11-02 11:47:15.596168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.521 qpair failed and we were unable to recover it. 
00:35:15.521 [2024-11-02 11:47:15.596315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.521 [2024-11-02 11:47:15.596342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.521 qpair failed and we were unable to recover it. 00:35:15.521 [2024-11-02 11:47:15.596467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.521 [2024-11-02 11:47:15.596493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.521 qpair failed and we were unable to recover it. 00:35:15.521 [2024-11-02 11:47:15.596635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.521 [2024-11-02 11:47:15.596680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.521 qpair failed and we were unable to recover it. 00:35:15.521 [2024-11-02 11:47:15.596944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.521 [2024-11-02 11:47:15.596998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.521 qpair failed and we were unable to recover it. 00:35:15.521 [2024-11-02 11:47:15.597170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.521 [2024-11-02 11:47:15.597196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.521 qpair failed and we were unable to recover it. 00:35:15.521 [2024-11-02 11:47:15.597354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.521 [2024-11-02 11:47:15.597404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.521 qpair failed and we were unable to recover it. 00:35:15.521 [2024-11-02 11:47:15.597549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.521 [2024-11-02 11:47:15.597580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.521 qpair failed and we were unable to recover it. 00:35:15.521 [2024-11-02 11:47:15.597787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.521 [2024-11-02 11:47:15.597816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.521 qpair failed and we were unable to recover it. 00:35:15.521 [2024-11-02 11:47:15.598022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.521 [2024-11-02 11:47:15.598052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.521 qpair failed and we were unable to recover it. 00:35:15.521 [2024-11-02 11:47:15.598244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.521 [2024-11-02 11:47:15.598284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.521 qpair failed and we were unable to recover it. 
00:35:15.521 [2024-11-02 11:47:15.598483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.521 [2024-11-02 11:47:15.598509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.521 qpair failed and we were unable to recover it. 00:35:15.521 [2024-11-02 11:47:15.598747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.521 [2024-11-02 11:47:15.598778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.521 qpair failed and we were unable to recover it. 00:35:15.521 [2024-11-02 11:47:15.598963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.521 [2024-11-02 11:47:15.598992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.521 qpair failed and we were unable to recover it. 00:35:15.521 [2024-11-02 11:47:15.599155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.521 [2024-11-02 11:47:15.599184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.521 qpair failed and we were unable to recover it. 00:35:15.521 [2024-11-02 11:47:15.599323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.521 [2024-11-02 11:47:15.599350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.521 qpair failed and we were unable to recover it. 00:35:15.521 [2024-11-02 11:47:15.599535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.521 [2024-11-02 11:47:15.599580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.521 qpair failed and we were unable to recover it. 00:35:15.521 [2024-11-02 11:47:15.599745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.521 [2024-11-02 11:47:15.599774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.521 qpair failed and we were unable to recover it. 00:35:15.521 [2024-11-02 11:47:15.600005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.521 [2024-11-02 11:47:15.600033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.521 qpair failed and we were unable to recover it. 00:35:15.521 [2024-11-02 11:47:15.600229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.521 [2024-11-02 11:47:15.600268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.521 qpair failed and we were unable to recover it. 00:35:15.521 [2024-11-02 11:47:15.600464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.521 [2024-11-02 11:47:15.600489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.521 qpair failed and we were unable to recover it. 
00:35:15.521 [2024-11-02 11:47:15.600636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.521 [2024-11-02 11:47:15.600662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.521 qpair failed and we were unable to recover it. 00:35:15.521 [2024-11-02 11:47:15.600822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.521 [2024-11-02 11:47:15.600851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.521 qpair failed and we were unable to recover it. 00:35:15.521 [2024-11-02 11:47:15.601028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.521 [2024-11-02 11:47:15.601056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.521 qpair failed and we were unable to recover it. 00:35:15.521 [2024-11-02 11:47:15.601226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.521 [2024-11-02 11:47:15.601251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.521 qpair failed and we were unable to recover it. 00:35:15.521 [2024-11-02 11:47:15.601422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.521 [2024-11-02 11:47:15.601448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.521 qpair failed and we were unable to recover it. 00:35:15.521 [2024-11-02 11:47:15.601624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.521 [2024-11-02 11:47:15.601652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.521 qpair failed and we were unable to recover it. 00:35:15.521 [2024-11-02 11:47:15.601817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.521 [2024-11-02 11:47:15.601846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.521 qpair failed and we were unable to recover it. 00:35:15.521 [2024-11-02 11:47:15.602026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.521 [2024-11-02 11:47:15.602068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.521 qpair failed and we were unable to recover it. 00:35:15.521 [2024-11-02 11:47:15.602230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.521 [2024-11-02 11:47:15.602267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.521 qpair failed and we were unable to recover it. 00:35:15.521 [2024-11-02 11:47:15.602447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.521 [2024-11-02 11:47:15.602473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.521 qpair failed and we were unable to recover it. 
00:35:15.521 [2024-11-02 11:47:15.602682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.521 [2024-11-02 11:47:15.602720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.521 qpair failed and we were unable to recover it. 00:35:15.521 [2024-11-02 11:47:15.602926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.521 [2024-11-02 11:47:15.602957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.521 qpair failed and we were unable to recover it. 00:35:15.521 [2024-11-02 11:47:15.603134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.521 [2024-11-02 11:47:15.603164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.521 qpair failed and we were unable to recover it. 00:35:15.521 [2024-11-02 11:47:15.603347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.521 [2024-11-02 11:47:15.603378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.521 qpair failed and we were unable to recover it. 00:35:15.521 [2024-11-02 11:47:15.603596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.521 [2024-11-02 11:47:15.603622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.521 qpair failed and we were unable to recover it. 00:35:15.521 [2024-11-02 11:47:15.603902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.521 [2024-11-02 11:47:15.603952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.521 qpair failed and we were unable to recover it. 00:35:15.521 [2024-11-02 11:47:15.604135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.522 [2024-11-02 11:47:15.604164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.522 qpair failed and we were unable to recover it. 00:35:15.522 [2024-11-02 11:47:15.604297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.522 [2024-11-02 11:47:15.604324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.522 qpair failed and we were unable to recover it. 00:35:15.522 [2024-11-02 11:47:15.604520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.522 [2024-11-02 11:47:15.604549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.522 qpair failed and we were unable to recover it. 00:35:15.522 [2024-11-02 11:47:15.604719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.522 [2024-11-02 11:47:15.604749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.522 qpair failed and we were unable to recover it. 
00:35:15.522 [2024-11-02 11:47:15.604977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.522 [2024-11-02 11:47:15.605006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.522 qpair failed and we were unable to recover it. 00:35:15.522 [2024-11-02 11:47:15.605177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.522 [2024-11-02 11:47:15.605207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.522 qpair failed and we were unable to recover it. 00:35:15.522 [2024-11-02 11:47:15.605368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.522 [2024-11-02 11:47:15.605409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.522 qpair failed and we were unable to recover it. 00:35:15.522 [2024-11-02 11:47:15.605565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.522 [2024-11-02 11:47:15.605596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.522 qpair failed and we were unable to recover it. 00:35:15.522 [2024-11-02 11:47:15.605967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.522 [2024-11-02 11:47:15.606020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.522 qpair failed and we were unable to recover it. 00:35:15.522 [2024-11-02 11:47:15.606222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.522 [2024-11-02 11:47:15.606248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.522 qpair failed and we were unable to recover it. 00:35:15.522 [2024-11-02 11:47:15.606399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.522 [2024-11-02 11:47:15.606426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.522 qpair failed and we were unable to recover it. 00:35:15.522 [2024-11-02 11:47:15.606554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.522 [2024-11-02 11:47:15.606581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.522 qpair failed and we were unable to recover it. 00:35:15.522 [2024-11-02 11:47:15.606730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.522 [2024-11-02 11:47:15.606757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.522 qpair failed and we were unable to recover it. 00:35:15.522 [2024-11-02 11:47:15.606922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.522 [2024-11-02 11:47:15.606951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.522 qpair failed and we were unable to recover it. 
00:35:15.522 [2024-11-02 11:47:15.607170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.522 [2024-11-02 11:47:15.607199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.522 qpair failed and we were unable to recover it. 00:35:15.522 [2024-11-02 11:47:15.607385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.522 [2024-11-02 11:47:15.607411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.522 qpair failed and we were unable to recover it. 00:35:15.522 [2024-11-02 11:47:15.607567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.522 [2024-11-02 11:47:15.607593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.522 qpair failed and we were unable to recover it. 00:35:15.522 [2024-11-02 11:47:15.607765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.522 [2024-11-02 11:47:15.607794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.522 qpair failed and we were unable to recover it. 00:35:15.522 [2024-11-02 11:47:15.608021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.522 [2024-11-02 11:47:15.608051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.522 qpair failed and we were unable to recover it. 00:35:15.522 [2024-11-02 11:47:15.608239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.522 [2024-11-02 11:47:15.608276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.522 qpair failed and we were unable to recover it. 00:35:15.522 [2024-11-02 11:47:15.608466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.522 [2024-11-02 11:47:15.608497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.522 qpair failed and we were unable to recover it. 00:35:15.522 [2024-11-02 11:47:15.608770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.522 [2024-11-02 11:47:15.608822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.522 qpair failed and we were unable to recover it. 00:35:15.522 [2024-11-02 11:47:15.609039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.522 [2024-11-02 11:47:15.609068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.522 qpair failed and we were unable to recover it. 00:35:15.522 [2024-11-02 11:47:15.609247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.522 [2024-11-02 11:47:15.609278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.522 qpair failed and we were unable to recover it. 
00:35:15.522 [2024-11-02 11:47:15.609458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.522 [2024-11-02 11:47:15.609487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420
00:35:15.522 qpair failed and we were unable to recover it.
[... the same three-message sequence (posix.c:1055:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it.") repeats without interruption from 11:47:15.609 through 11:47:15.651, for tqpair=0x7f9cc0000b90, 0x1ddc690, 0x7f9cc4000b90 and 0x7f9ccc000b90 in turn (with some interleaving of the last two), every attempt targeting addr=10.0.0.2, port=4420 ...]
00:35:15.528 [2024-11-02 11:47:15.652121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.528 [2024-11-02 11:47:15.652150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.528 qpair failed and we were unable to recover it. 00:35:15.528 [2024-11-02 11:47:15.652356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.528 [2024-11-02 11:47:15.652387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.528 qpair failed and we were unable to recover it. 00:35:15.528 [2024-11-02 11:47:15.652549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.528 [2024-11-02 11:47:15.652579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.528 qpair failed and we were unable to recover it. 00:35:15.528 [2024-11-02 11:47:15.652742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.528 [2024-11-02 11:47:15.652771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.528 qpair failed and we were unable to recover it. 00:35:15.528 [2024-11-02 11:47:15.652943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.528 [2024-11-02 11:47:15.652971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.528 qpair failed and we were unable to recover it. 00:35:15.528 [2024-11-02 11:47:15.653128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.528 [2024-11-02 11:47:15.653157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.528 qpair failed and we were unable to recover it. 00:35:15.528 [2024-11-02 11:47:15.653286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.528 [2024-11-02 11:47:15.653313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.528 qpair failed and we were unable to recover it. 00:35:15.528 [2024-11-02 11:47:15.653435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.528 [2024-11-02 11:47:15.653461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.528 qpair failed and we were unable to recover it. 00:35:15.528 [2024-11-02 11:47:15.653684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.528 [2024-11-02 11:47:15.653724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.528 qpair failed and we were unable to recover it. 00:35:15.528 [2024-11-02 11:47:15.653895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.528 [2024-11-02 11:47:15.653924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.528 qpair failed and we were unable to recover it. 
00:35:15.528 [2024-11-02 11:47:15.654086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.528 [2024-11-02 11:47:15.654115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.528 qpair failed and we were unable to recover it. 00:35:15.528 [2024-11-02 11:47:15.654245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.528 [2024-11-02 11:47:15.654280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.528 qpair failed and we were unable to recover it. 00:35:15.528 [2024-11-02 11:47:15.654497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.528 [2024-11-02 11:47:15.654522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.528 qpair failed and we were unable to recover it. 00:35:15.528 [2024-11-02 11:47:15.654690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.528 [2024-11-02 11:47:15.654719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.528 qpair failed and we were unable to recover it. 00:35:15.528 [2024-11-02 11:47:15.654903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.528 [2024-11-02 11:47:15.654933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.528 qpair failed and we were unable to recover it. 00:35:15.528 [2024-11-02 11:47:15.655096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.528 [2024-11-02 11:47:15.655126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.528 qpair failed and we were unable to recover it. 00:35:15.528 [2024-11-02 11:47:15.655298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.528 [2024-11-02 11:47:15.655325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.528 qpair failed and we were unable to recover it. 00:35:15.528 [2024-11-02 11:47:15.655470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.528 [2024-11-02 11:47:15.655500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.528 qpair failed and we were unable to recover it. 00:35:15.528 [2024-11-02 11:47:15.655717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.528 [2024-11-02 11:47:15.655746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.528 qpair failed and we were unable to recover it. 00:35:15.528 [2024-11-02 11:47:15.655923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.528 [2024-11-02 11:47:15.655952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.528 qpair failed and we were unable to recover it. 
00:35:15.528 [2024-11-02 11:47:15.656168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.528 [2024-11-02 11:47:15.656208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.528 qpair failed and we were unable to recover it. 00:35:15.528 [2024-11-02 11:47:15.656379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.528 [2024-11-02 11:47:15.656409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.528 qpair failed and we were unable to recover it. 00:35:15.528 [2024-11-02 11:47:15.656640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.528 [2024-11-02 11:47:15.656669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.528 qpair failed and we were unable to recover it. 00:35:15.528 [2024-11-02 11:47:15.656831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.528 [2024-11-02 11:47:15.656860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.528 qpair failed and we were unable to recover it. 00:35:15.528 [2024-11-02 11:47:15.657022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.528 [2024-11-02 11:47:15.657053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.528 qpair failed and we were unable to recover it. 00:35:15.528 [2024-11-02 11:47:15.657263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.529 [2024-11-02 11:47:15.657306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.529 qpair failed and we were unable to recover it. 00:35:15.529 [2024-11-02 11:47:15.657439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.529 [2024-11-02 11:47:15.657466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.529 qpair failed and we were unable to recover it. 00:35:15.529 [2024-11-02 11:47:15.657663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.529 [2024-11-02 11:47:15.657704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.529 qpair failed and we were unable to recover it. 00:35:15.529 [2024-11-02 11:47:15.657872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.529 [2024-11-02 11:47:15.657899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.529 qpair failed and we were unable to recover it. 00:35:15.529 [2024-11-02 11:47:15.658110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.529 [2024-11-02 11:47:15.658135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.529 qpair failed and we were unable to recover it. 
00:35:15.529 [2024-11-02 11:47:15.658327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.529 [2024-11-02 11:47:15.658355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.529 qpair failed and we were unable to recover it. 00:35:15.529 [2024-11-02 11:47:15.658539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.529 [2024-11-02 11:47:15.658565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.529 qpair failed and we were unable to recover it. 00:35:15.529 [2024-11-02 11:47:15.658710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.529 [2024-11-02 11:47:15.658737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.529 qpair failed and we were unable to recover it. 00:35:15.529 [2024-11-02 11:47:15.658953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.529 [2024-11-02 11:47:15.658979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.529 qpair failed and we were unable to recover it. 00:35:15.529 [2024-11-02 11:47:15.659146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.529 [2024-11-02 11:47:15.659175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.529 qpair failed and we were unable to recover it. 00:35:15.529 [2024-11-02 11:47:15.659366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.529 [2024-11-02 11:47:15.659393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.529 qpair failed and we were unable to recover it. 00:35:15.529 [2024-11-02 11:47:15.659585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.529 [2024-11-02 11:47:15.659610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.529 qpair failed and we were unable to recover it. 00:35:15.529 [2024-11-02 11:47:15.659766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.529 [2024-11-02 11:47:15.659792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.529 qpair failed and we were unable to recover it. 00:35:15.529 [2024-11-02 11:47:15.659912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.529 [2024-11-02 11:47:15.659938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.529 qpair failed and we were unable to recover it. 00:35:15.529 [2024-11-02 11:47:15.660228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.529 [2024-11-02 11:47:15.660254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.529 qpair failed and we were unable to recover it. 
00:35:15.529 [2024-11-02 11:47:15.660423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.529 [2024-11-02 11:47:15.660449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.529 qpair failed and we were unable to recover it. 00:35:15.529 [2024-11-02 11:47:15.660632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.529 [2024-11-02 11:47:15.660663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.529 qpair failed and we were unable to recover it. 00:35:15.529 [2024-11-02 11:47:15.660837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.529 [2024-11-02 11:47:15.660863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.529 qpair failed and we were unable to recover it. 00:35:15.529 [2024-11-02 11:47:15.661060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.529 [2024-11-02 11:47:15.661089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.529 qpair failed and we were unable to recover it. 00:35:15.529 [2024-11-02 11:47:15.661252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.529 [2024-11-02 11:47:15.661306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.529 qpair failed and we were unable to recover it. 00:35:15.529 [2024-11-02 11:47:15.661484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.529 [2024-11-02 11:47:15.661509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.529 qpair failed and we were unable to recover it. 00:35:15.529 [2024-11-02 11:47:15.661636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.529 [2024-11-02 11:47:15.661663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.529 qpair failed and we were unable to recover it. 00:35:15.529 [2024-11-02 11:47:15.661822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.529 [2024-11-02 11:47:15.661847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.529 qpair failed and we were unable to recover it. 00:35:15.529 [2024-11-02 11:47:15.662028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.529 [2024-11-02 11:47:15.662054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.529 qpair failed and we were unable to recover it. 00:35:15.529 [2024-11-02 11:47:15.662225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.529 [2024-11-02 11:47:15.662254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.529 qpair failed and we were unable to recover it. 
00:35:15.529 [2024-11-02 11:47:15.662466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.529 [2024-11-02 11:47:15.662491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.529 qpair failed and we were unable to recover it. 00:35:15.529 [2024-11-02 11:47:15.662664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.529 [2024-11-02 11:47:15.662693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.529 qpair failed and we were unable to recover it. 00:35:15.529 [2024-11-02 11:47:15.662871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.529 [2024-11-02 11:47:15.662901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.529 qpair failed and we were unable to recover it. 00:35:15.529 [2024-11-02 11:47:15.663096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.529 [2024-11-02 11:47:15.663125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.529 qpair failed and we were unable to recover it. 00:35:15.529 [2024-11-02 11:47:15.663285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.529 [2024-11-02 11:47:15.663315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.529 qpair failed and we were unable to recover it. 00:35:15.529 [2024-11-02 11:47:15.663474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.529 [2024-11-02 11:47:15.663500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.529 qpair failed and we were unable to recover it. 00:35:15.529 [2024-11-02 11:47:15.663677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.529 [2024-11-02 11:47:15.663703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.529 qpair failed and we were unable to recover it. 00:35:15.529 [2024-11-02 11:47:15.663846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.529 [2024-11-02 11:47:15.663887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.529 qpair failed and we were unable to recover it. 00:35:15.529 [2024-11-02 11:47:15.664041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.529 [2024-11-02 11:47:15.664067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.529 qpair failed and we were unable to recover it. 00:35:15.529 [2024-11-02 11:47:15.664237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.529 [2024-11-02 11:47:15.664273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.529 qpair failed and we were unable to recover it. 
00:35:15.529 [2024-11-02 11:47:15.664468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.529 [2024-11-02 11:47:15.664495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.529 qpair failed and we were unable to recover it. 00:35:15.529 [2024-11-02 11:47:15.664615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.529 [2024-11-02 11:47:15.664641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.529 qpair failed and we were unable to recover it. 00:35:15.529 [2024-11-02 11:47:15.664846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.529 [2024-11-02 11:47:15.664871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.529 qpair failed and we were unable to recover it. 00:35:15.529 [2024-11-02 11:47:15.665075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.529 [2024-11-02 11:47:15.665103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.530 qpair failed and we were unable to recover it. 00:35:15.530 [2024-11-02 11:47:15.665353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.530 [2024-11-02 11:47:15.665379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.530 qpair failed and we were unable to recover it. 00:35:15.530 [2024-11-02 11:47:15.665557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.530 [2024-11-02 11:47:15.665582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.530 qpair failed and we were unable to recover it. 00:35:15.530 [2024-11-02 11:47:15.665744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.530 [2024-11-02 11:47:15.665784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.530 qpair failed and we were unable to recover it. 00:35:15.530 [2024-11-02 11:47:15.665926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.530 [2024-11-02 11:47:15.665953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.530 qpair failed and we were unable to recover it. 00:35:15.530 [2024-11-02 11:47:15.666186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.530 [2024-11-02 11:47:15.666215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.530 qpair failed and we were unable to recover it. 00:35:15.530 [2024-11-02 11:47:15.666391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.530 [2024-11-02 11:47:15.666417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.530 qpair failed and we were unable to recover it. 
00:35:15.530 [2024-11-02 11:47:15.666554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.530 [2024-11-02 11:47:15.666583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.530 qpair failed and we were unable to recover it. 00:35:15.530 [2024-11-02 11:47:15.666718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.530 [2024-11-02 11:47:15.666748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.530 qpair failed and we were unable to recover it. 00:35:15.530 [2024-11-02 11:47:15.667006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.530 [2024-11-02 11:47:15.667035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.530 qpair failed and we were unable to recover it. 00:35:15.530 [2024-11-02 11:47:15.667213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.530 [2024-11-02 11:47:15.667242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.530 qpair failed and we were unable to recover it. 00:35:15.530 [2024-11-02 11:47:15.667417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.530 [2024-11-02 11:47:15.667444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.530 qpair failed and we were unable to recover it. 00:35:15.530 [2024-11-02 11:47:15.667573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.530 [2024-11-02 11:47:15.667599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.530 qpair failed and we were unable to recover it. 00:35:15.530 [2024-11-02 11:47:15.667747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.530 [2024-11-02 11:47:15.667773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.530 qpair failed and we were unable to recover it. 00:35:15.530 [2024-11-02 11:47:15.667959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.530 [2024-11-02 11:47:15.667988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.530 qpair failed and we were unable to recover it. 00:35:15.530 [2024-11-02 11:47:15.668197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.530 [2024-11-02 11:47:15.668221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.530 qpair failed and we were unable to recover it. 00:35:15.530 [2024-11-02 11:47:15.668387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.530 [2024-11-02 11:47:15.668413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.530 qpair failed and we were unable to recover it. 
00:35:15.530 [2024-11-02 11:47:15.668549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.530 [2024-11-02 11:47:15.668575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.530 qpair failed and we were unable to recover it. 00:35:15.530 [2024-11-02 11:47:15.668738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.530 [2024-11-02 11:47:15.668767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.530 qpair failed and we were unable to recover it. 00:35:15.530 [2024-11-02 11:47:15.668946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.530 [2024-11-02 11:47:15.668972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.530 qpair failed and we were unable to recover it. 00:35:15.530 [2024-11-02 11:47:15.669111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.530 [2024-11-02 11:47:15.669137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.530 qpair failed and we were unable to recover it. 00:35:15.530 [2024-11-02 11:47:15.669307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.530 [2024-11-02 11:47:15.669334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.530 qpair failed and we were unable to recover it. 00:35:15.530 [2024-11-02 11:47:15.669538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.530 [2024-11-02 11:47:15.669564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.530 qpair failed and we were unable to recover it. 00:35:15.530 [2024-11-02 11:47:15.669724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.530 [2024-11-02 11:47:15.669750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.530 qpair failed and we were unable to recover it. 00:35:15.530 [2024-11-02 11:47:15.669919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.530 [2024-11-02 11:47:15.669948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.530 qpair failed and we were unable to recover it. 00:35:15.530 [2024-11-02 11:47:15.670194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.530 [2024-11-02 11:47:15.670223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.530 qpair failed and we were unable to recover it. 00:35:15.530 [2024-11-02 11:47:15.670472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.530 [2024-11-02 11:47:15.670500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.530 qpair failed and we were unable to recover it. 
00:35:15.530 [2024-11-02 11:47:15.670735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.530 [2024-11-02 11:47:15.670785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.530 qpair failed and we were unable to recover it. 00:35:15.530 [2024-11-02 11:47:15.671022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.530 [2024-11-02 11:47:15.671110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.530 qpair failed and we were unable to recover it. 00:35:15.530 [2024-11-02 11:47:15.671367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.530 [2024-11-02 11:47:15.671393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.530 qpair failed and we were unable to recover it. 00:35:15.530 [2024-11-02 11:47:15.671592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.530 [2024-11-02 11:47:15.671620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.530 qpair failed and we were unable to recover it. 00:35:15.530 [2024-11-02 11:47:15.671776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.530 [2024-11-02 11:47:15.671806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.530 qpair failed and we were unable to recover it. 00:35:15.530 [2024-11-02 11:47:15.671972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.530 [2024-11-02 11:47:15.672000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.530 qpair failed and we were unable to recover it. 00:35:15.530 [2024-11-02 11:47:15.672176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.530 [2024-11-02 11:47:15.672205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.530 qpair failed and we were unable to recover it. 00:35:15.530 [2024-11-02 11:47:15.672373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.530 [2024-11-02 11:47:15.672403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.530 qpair failed and we were unable to recover it. 00:35:15.530 [2024-11-02 11:47:15.672601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.530 [2024-11-02 11:47:15.672631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.530 qpair failed and we were unable to recover it. 00:35:15.530 [2024-11-02 11:47:15.672831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.530 [2024-11-02 11:47:15.672859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.530 qpair failed and we were unable to recover it. 
00:35:15.530 [2024-11-02 11:47:15.673048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.530 [2024-11-02 11:47:15.673074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.530 qpair failed and we were unable to recover it. 00:35:15.530 [2024-11-02 11:47:15.673198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.530 [2024-11-02 11:47:15.673224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.530 qpair failed and we were unable to recover it. 00:35:15.531 [2024-11-02 11:47:15.673455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.531 [2024-11-02 11:47:15.673481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.531 qpair failed and we were unable to recover it. 00:35:15.531 [2024-11-02 11:47:15.673633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.531 [2024-11-02 11:47:15.673660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.531 qpair failed and we were unable to recover it. 00:35:15.531 [2024-11-02 11:47:15.673832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.531 [2024-11-02 11:47:15.673871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.531 qpair failed and we were unable to recover it. 00:35:15.531 [2024-11-02 11:47:15.674008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.531 [2024-11-02 11:47:15.674036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.531 qpair failed and we were unable to recover it. 00:35:15.531 [2024-11-02 11:47:15.674185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.531 [2024-11-02 11:47:15.674211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.531 qpair failed and we were unable to recover it. 00:35:15.531 [2024-11-02 11:47:15.674373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.531 [2024-11-02 11:47:15.674400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.531 qpair failed and we were unable to recover it. 00:35:15.531 [2024-11-02 11:47:15.674534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.531 [2024-11-02 11:47:15.674560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.531 qpair failed and we were unable to recover it. 00:35:15.531 [2024-11-02 11:47:15.674701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.531 [2024-11-02 11:47:15.674726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.531 qpair failed and we were unable to recover it. 
00:35:15.531 [2024-11-02 11:47:15.674844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.531 [2024-11-02 11:47:15.674869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.531 qpair failed and we were unable to recover it. 00:35:15.531 [2024-11-02 11:47:15.674998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.531 [2024-11-02 11:47:15.675024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.531 qpair failed and we were unable to recover it. 00:35:15.531 [2024-11-02 11:47:15.675139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.531 [2024-11-02 11:47:15.675164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.531 qpair failed and we were unable to recover it. 00:35:15.531 [2024-11-02 11:47:15.675314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.531 [2024-11-02 11:47:15.675341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.531 qpair failed and we were unable to recover it. 00:35:15.531 [2024-11-02 11:47:15.675494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.531 [2024-11-02 11:47:15.675519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.531 qpair failed and we were unable to recover it. 00:35:15.531 [2024-11-02 11:47:15.675683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.531 [2024-11-02 11:47:15.675709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.531 qpair failed and we were unable to recover it. 00:35:15.531 [2024-11-02 11:47:15.675884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.531 [2024-11-02 11:47:15.675910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.531 qpair failed and we were unable to recover it. 00:35:15.531 [2024-11-02 11:47:15.676061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.531 [2024-11-02 11:47:15.676086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.531 qpair failed and we were unable to recover it. 00:35:15.531 [2024-11-02 11:47:15.676281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.531 [2024-11-02 11:47:15.676331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.531 qpair failed and we were unable to recover it. 00:35:15.531 [2024-11-02 11:47:15.676461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.531 [2024-11-02 11:47:15.676488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.531 qpair failed and we were unable to recover it. 
00:35:15.531 [2024-11-02 11:47:15.676637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.531 [2024-11-02 11:47:15.676664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.531 qpair failed and we were unable to recover it. 00:35:15.531 [2024-11-02 11:47:15.676802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.531 [2024-11-02 11:47:15.676829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.531 qpair failed and we were unable to recover it. 00:35:15.531 [2024-11-02 11:47:15.677104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.531 [2024-11-02 11:47:15.677131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.531 qpair failed and we were unable to recover it. 00:35:15.531 [2024-11-02 11:47:15.677283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.531 [2024-11-02 11:47:15.677321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.531 qpair failed and we were unable to recover it. 00:35:15.531 [2024-11-02 11:47:15.677516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.531 [2024-11-02 11:47:15.677542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.531 qpair failed and we were unable to recover it. 00:35:15.531 [2024-11-02 11:47:15.677677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.531 [2024-11-02 11:47:15.677719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.531 qpair failed and we were unable to recover it. 00:35:15.531 [2024-11-02 11:47:15.677872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.531 [2024-11-02 11:47:15.677898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.531 qpair failed and we were unable to recover it. 00:35:15.531 [2024-11-02 11:47:15.678045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.531 [2024-11-02 11:47:15.678072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.531 qpair failed and we were unable to recover it. 00:35:15.531 [2024-11-02 11:47:15.678264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.531 [2024-11-02 11:47:15.678314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.531 qpair failed and we were unable to recover it. 00:35:15.531 [2024-11-02 11:47:15.678466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.531 [2024-11-02 11:47:15.678493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.531 qpair failed and we were unable to recover it. 
00:35:15.531 [2024-11-02 11:47:15.678639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.531 [2024-11-02 11:47:15.678665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420
00:35:15.531 qpair failed and we were unable to recover it.
[The same three-line sequence -- connect() failed, errno = 111 from posix.c:1055:posix_sock_create, a sock connection error from nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock, and "qpair failed and we were unable to recover it." -- repeats continuously from 11:47:15.678 through 11:47:15.719 (elapsed 00:35:15.531 to 00:35:15.536), alternating between tqpair handles 0x7f9ccc000b90, 0x7f9cc0000b90, and 0x1ddc690, always against addr=10.0.0.2, port=4420.]
00:35:15.537 [2024-11-02 11:47:15.720009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.537 [2024-11-02 11:47:15.720035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.537 qpair failed and we were unable to recover it. 00:35:15.537 [2024-11-02 11:47:15.720154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.537 [2024-11-02 11:47:15.720180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.537 qpair failed and we were unable to recover it. 00:35:15.537 [2024-11-02 11:47:15.720330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.537 [2024-11-02 11:47:15.720356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.537 qpair failed and we were unable to recover it. 00:35:15.537 [2024-11-02 11:47:15.720505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.537 [2024-11-02 11:47:15.720531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.537 qpair failed and we were unable to recover it. 00:35:15.537 [2024-11-02 11:47:15.720682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.537 [2024-11-02 11:47:15.720707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.537 qpair failed and we were unable to recover it. 00:35:15.537 [2024-11-02 11:47:15.720892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.537 [2024-11-02 11:47:15.720918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.537 qpair failed and we were unable to recover it. 00:35:15.537 [2024-11-02 11:47:15.721066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.537 [2024-11-02 11:47:15.721092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.537 qpair failed and we were unable to recover it. 00:35:15.537 [2024-11-02 11:47:15.721236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.537 [2024-11-02 11:47:15.721270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.537 qpair failed and we were unable to recover it. 00:35:15.537 [2024-11-02 11:47:15.721448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.537 [2024-11-02 11:47:15.721474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.537 qpair failed and we were unable to recover it. 00:35:15.537 [2024-11-02 11:47:15.721624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.537 [2024-11-02 11:47:15.721650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.537 qpair failed and we were unable to recover it. 
00:35:15.537 [2024-11-02 11:47:15.721797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.537 [2024-11-02 11:47:15.721822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.537 qpair failed and we were unable to recover it. 00:35:15.537 [2024-11-02 11:47:15.721966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.537 [2024-11-02 11:47:15.721992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.537 qpair failed and we were unable to recover it. 00:35:15.537 [2024-11-02 11:47:15.722156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.537 [2024-11-02 11:47:15.722195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.537 qpair failed and we were unable to recover it. 00:35:15.537 [2024-11-02 11:47:15.722326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.537 [2024-11-02 11:47:15.722354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.537 qpair failed and we were unable to recover it. 00:35:15.537 [2024-11-02 11:47:15.722535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.537 [2024-11-02 11:47:15.722575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.537 qpair failed and we were unable to recover it. 00:35:15.537 [2024-11-02 11:47:15.722755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.537 [2024-11-02 11:47:15.722797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.537 qpair failed and we were unable to recover it. 00:35:15.537 [2024-11-02 11:47:15.722971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.537 [2024-11-02 11:47:15.722997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.537 qpair failed and we were unable to recover it. 00:35:15.537 [2024-11-02 11:47:15.723126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.537 [2024-11-02 11:47:15.723152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.537 qpair failed and we were unable to recover it. 00:35:15.537 [2024-11-02 11:47:15.723311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.537 [2024-11-02 11:47:15.723338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.537 qpair failed and we were unable to recover it. 00:35:15.537 [2024-11-02 11:47:15.723511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.537 [2024-11-02 11:47:15.723537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.537 qpair failed and we were unable to recover it. 
00:35:15.537 [2024-11-02 11:47:15.723689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.537 [2024-11-02 11:47:15.723716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.537 qpair failed and we were unable to recover it. 00:35:15.537 [2024-11-02 11:47:15.723862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.537 [2024-11-02 11:47:15.723887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.537 qpair failed and we were unable to recover it. 00:35:15.538 [2024-11-02 11:47:15.724035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.538 [2024-11-02 11:47:15.724065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.538 qpair failed and we were unable to recover it. 00:35:15.538 [2024-11-02 11:47:15.724190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.538 [2024-11-02 11:47:15.724216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.538 qpair failed and we were unable to recover it. 00:35:15.538 [2024-11-02 11:47:15.724362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.538 [2024-11-02 11:47:15.724391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.538 qpair failed and we were unable to recover it. 00:35:15.538 [2024-11-02 11:47:15.724516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.538 [2024-11-02 11:47:15.724542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.538 qpair failed and we were unable to recover it. 00:35:15.538 [2024-11-02 11:47:15.724690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.538 [2024-11-02 11:47:15.724717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.538 qpair failed and we were unable to recover it. 00:35:15.538 [2024-11-02 11:47:15.724887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.538 [2024-11-02 11:47:15.724913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.538 qpair failed and we were unable to recover it. 00:35:15.538 [2024-11-02 11:47:15.725062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.538 [2024-11-02 11:47:15.725090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.538 qpair failed and we were unable to recover it. 00:35:15.538 [2024-11-02 11:47:15.725297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.538 [2024-11-02 11:47:15.725325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.538 qpair failed and we were unable to recover it. 
00:35:15.538 [2024-11-02 11:47:15.725472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.538 [2024-11-02 11:47:15.725499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.538 qpair failed and we were unable to recover it. 00:35:15.538 [2024-11-02 11:47:15.725670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.538 [2024-11-02 11:47:15.725696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.538 qpair failed and we were unable to recover it. 00:35:15.538 [2024-11-02 11:47:15.725827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.538 [2024-11-02 11:47:15.725854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.538 qpair failed and we were unable to recover it. 00:35:15.538 [2024-11-02 11:47:15.726026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.538 [2024-11-02 11:47:15.726053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.538 qpair failed and we were unable to recover it. 00:35:15.538 [2024-11-02 11:47:15.726196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.538 [2024-11-02 11:47:15.726222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.538 qpair failed and we were unable to recover it. 00:35:15.538 [2024-11-02 11:47:15.726352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.538 [2024-11-02 11:47:15.726379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.538 qpair failed and we were unable to recover it. 00:35:15.538 [2024-11-02 11:47:15.726536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.538 [2024-11-02 11:47:15.726578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.538 qpair failed and we were unable to recover it. 00:35:15.538 [2024-11-02 11:47:15.726735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.538 [2024-11-02 11:47:15.726777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.538 qpair failed and we were unable to recover it. 00:35:15.538 [2024-11-02 11:47:15.726961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.538 [2024-11-02 11:47:15.726987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.538 qpair failed and we were unable to recover it. 00:35:15.538 [2024-11-02 11:47:15.727162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.538 [2024-11-02 11:47:15.727188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.538 qpair failed and we were unable to recover it. 
00:35:15.538 [2024-11-02 11:47:15.727357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.538 [2024-11-02 11:47:15.727384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.538 qpair failed and we were unable to recover it. 00:35:15.538 [2024-11-02 11:47:15.727530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.538 [2024-11-02 11:47:15.727570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.538 qpair failed and we were unable to recover it. 00:35:15.538 [2024-11-02 11:47:15.727721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.538 [2024-11-02 11:47:15.727747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.538 qpair failed and we were unable to recover it. 00:35:15.538 [2024-11-02 11:47:15.727924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.538 [2024-11-02 11:47:15.727950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.538 qpair failed and we were unable to recover it. 00:35:15.538 [2024-11-02 11:47:15.728077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.538 [2024-11-02 11:47:15.728104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.538 qpair failed and we were unable to recover it. 00:35:15.538 [2024-11-02 11:47:15.728280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.538 [2024-11-02 11:47:15.728308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.538 qpair failed and we were unable to recover it. 00:35:15.538 [2024-11-02 11:47:15.728430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.538 [2024-11-02 11:47:15.728457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.538 qpair failed and we were unable to recover it. 00:35:15.538 [2024-11-02 11:47:15.728630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.538 [2024-11-02 11:47:15.728670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.538 qpair failed and we were unable to recover it. 00:35:15.538 [2024-11-02 11:47:15.728849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.538 [2024-11-02 11:47:15.728875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.538 qpair failed and we were unable to recover it. 00:35:15.538 [2024-11-02 11:47:15.728995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.538 [2024-11-02 11:47:15.729022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.538 qpair failed and we were unable to recover it. 
00:35:15.538 [2024-11-02 11:47:15.729168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.538 [2024-11-02 11:47:15.729194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.538 qpair failed and we were unable to recover it. 00:35:15.538 [2024-11-02 11:47:15.729323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.538 [2024-11-02 11:47:15.729350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.538 qpair failed and we were unable to recover it. 00:35:15.538 [2024-11-02 11:47:15.729555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.538 [2024-11-02 11:47:15.729583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.538 qpair failed and we were unable to recover it. 00:35:15.538 [2024-11-02 11:47:15.729715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.538 [2024-11-02 11:47:15.729740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.538 qpair failed and we were unable to recover it. 00:35:15.538 [2024-11-02 11:47:15.729927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.538 [2024-11-02 11:47:15.729953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.538 qpair failed and we were unable to recover it. 00:35:15.538 [2024-11-02 11:47:15.730124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.538 [2024-11-02 11:47:15.730151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.538 qpair failed and we were unable to recover it. 00:35:15.538 [2024-11-02 11:47:15.730274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.538 [2024-11-02 11:47:15.730301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.538 qpair failed and we were unable to recover it. 00:35:15.538 [2024-11-02 11:47:15.730507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.538 [2024-11-02 11:47:15.730534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.538 qpair failed and we were unable to recover it. 00:35:15.538 [2024-11-02 11:47:15.730689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.538 [2024-11-02 11:47:15.730730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.538 qpair failed and we were unable to recover it. 00:35:15.538 [2024-11-02 11:47:15.730912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.538 [2024-11-02 11:47:15.730938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.538 qpair failed and we were unable to recover it. 
00:35:15.538 [2024-11-02 11:47:15.731111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.539 [2024-11-02 11:47:15.731138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.539 qpair failed and we were unable to recover it. 00:35:15.539 [2024-11-02 11:47:15.731282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.539 [2024-11-02 11:47:15.731309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.539 qpair failed and we were unable to recover it. 00:35:15.539 [2024-11-02 11:47:15.731531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.539 [2024-11-02 11:47:15.731562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.539 qpair failed and we were unable to recover it. 00:35:15.539 [2024-11-02 11:47:15.731733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.539 [2024-11-02 11:47:15.731759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.539 qpair failed and we were unable to recover it. 00:35:15.539 [2024-11-02 11:47:15.731939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.539 [2024-11-02 11:47:15.731965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.539 qpair failed and we were unable to recover it. 00:35:15.539 [2024-11-02 11:47:15.732171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.539 [2024-11-02 11:47:15.732196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.539 qpair failed and we were unable to recover it. 00:35:15.539 [2024-11-02 11:47:15.732361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.539 [2024-11-02 11:47:15.732388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.539 qpair failed and we were unable to recover it. 00:35:15.539 [2024-11-02 11:47:15.732531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.539 [2024-11-02 11:47:15.732557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.539 qpair failed and we were unable to recover it. 00:35:15.539 [2024-11-02 11:47:15.732760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.539 [2024-11-02 11:47:15.732786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.539 qpair failed and we were unable to recover it. 00:35:15.539 [2024-11-02 11:47:15.732967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.539 [2024-11-02 11:47:15.732993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.539 qpair failed and we were unable to recover it. 
00:35:15.539 [2024-11-02 11:47:15.733137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.539 [2024-11-02 11:47:15.733165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.539 qpair failed and we were unable to recover it. 00:35:15.539 [2024-11-02 11:47:15.733318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.539 [2024-11-02 11:47:15.733346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.539 qpair failed and we were unable to recover it. 00:35:15.539 [2024-11-02 11:47:15.733598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.539 [2024-11-02 11:47:15.733624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.539 qpair failed and we were unable to recover it. 00:35:15.539 [2024-11-02 11:47:15.733770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.539 [2024-11-02 11:47:15.733812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.539 qpair failed and we were unable to recover it. 00:35:15.539 [2024-11-02 11:47:15.733996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.539 [2024-11-02 11:47:15.734022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.539 qpair failed and we were unable to recover it. 00:35:15.539 [2024-11-02 11:47:15.734147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.539 [2024-11-02 11:47:15.734173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.539 qpair failed and we were unable to recover it. 00:35:15.539 [2024-11-02 11:47:15.734368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.539 [2024-11-02 11:47:15.734396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.539 qpair failed and we were unable to recover it. 00:35:15.539 [2024-11-02 11:47:15.734518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.539 [2024-11-02 11:47:15.734544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.539 qpair failed and we were unable to recover it. 00:35:15.539 [2024-11-02 11:47:15.734721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.539 [2024-11-02 11:47:15.734747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.539 qpair failed and we were unable to recover it. 00:35:15.539 [2024-11-02 11:47:15.734891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.539 [2024-11-02 11:47:15.734933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.539 qpair failed and we were unable to recover it. 
00:35:15.539 [2024-11-02 11:47:15.735089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.539 [2024-11-02 11:47:15.735115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.539 qpair failed and we were unable to recover it. 00:35:15.539 [2024-11-02 11:47:15.735261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.539 [2024-11-02 11:47:15.735288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.539 qpair failed and we were unable to recover it. 00:35:15.539 [2024-11-02 11:47:15.735441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.539 [2024-11-02 11:47:15.735468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.539 qpair failed and we were unable to recover it. 00:35:15.539 [2024-11-02 11:47:15.735688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.539 [2024-11-02 11:47:15.735713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.539 qpair failed and we were unable to recover it. 00:35:15.539 [2024-11-02 11:47:15.735848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.539 [2024-11-02 11:47:15.735875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.539 qpair failed and we were unable to recover it. 00:35:15.539 [2024-11-02 11:47:15.736051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.539 [2024-11-02 11:47:15.736077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.539 qpair failed and we were unable to recover it. 00:35:15.539 [2024-11-02 11:47:15.736232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.539 [2024-11-02 11:47:15.736271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.539 qpair failed and we were unable to recover it. 00:35:15.539 [2024-11-02 11:47:15.736452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.539 [2024-11-02 11:47:15.736479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.539 qpair failed and we were unable to recover it. 00:35:15.539 [2024-11-02 11:47:15.736628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.539 [2024-11-02 11:47:15.736670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.539 qpair failed and we were unable to recover it. 00:35:15.539 [2024-11-02 11:47:15.736867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.539 [2024-11-02 11:47:15.736907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.539 qpair failed and we were unable to recover it. 
00:35:15.539 [2024-11-02 11:47:15.737036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.539 [2024-11-02 11:47:15.737064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.539 qpair failed and we were unable to recover it. 00:35:15.539 [2024-11-02 11:47:15.737183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.539 [2024-11-02 11:47:15.737209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.539 qpair failed and we were unable to recover it. 00:35:15.539 [2024-11-02 11:47:15.737437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.539 [2024-11-02 11:47:15.737482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.539 qpair failed and we were unable to recover it. 00:35:15.539 [2024-11-02 11:47:15.737651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.539 [2024-11-02 11:47:15.737700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.539 qpair failed and we were unable to recover it. 00:35:15.539 [2024-11-02 11:47:15.737897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.539 [2024-11-02 11:47:15.737941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.539 qpair failed and we were unable to recover it. 00:35:15.539 [2024-11-02 11:47:15.738125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.539 [2024-11-02 11:47:15.738152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.539 qpair failed and we were unable to recover it. 00:35:15.539 [2024-11-02 11:47:15.738320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.539 [2024-11-02 11:47:15.738351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.539 qpair failed and we were unable to recover it. 00:35:15.539 [2024-11-02 11:47:15.738513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.539 [2024-11-02 11:47:15.738542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.539 qpair failed and we were unable to recover it. 00:35:15.539 [2024-11-02 11:47:15.738727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.539 [2024-11-02 11:47:15.738770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.539 qpair failed and we were unable to recover it. 00:35:15.540 [2024-11-02 11:47:15.738976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.540 [2024-11-02 11:47:15.739019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.540 qpair failed and we were unable to recover it. 
00:35:15.540 [2024-11-02 11:47:15.739199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.540 [2024-11-02 11:47:15.739225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.540 qpair failed and we were unable to recover it. 00:35:15.540 [2024-11-02 11:47:15.739406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.540 [2024-11-02 11:47:15.739449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.540 qpair failed and we were unable to recover it. 00:35:15.540 [2024-11-02 11:47:15.739645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.540 [2024-11-02 11:47:15.739693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.540 qpair failed and we were unable to recover it. 00:35:15.540 [2024-11-02 11:47:15.739868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.540 [2024-11-02 11:47:15.739913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.540 qpair failed and we were unable to recover it. 00:35:15.540 [2024-11-02 11:47:15.740108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.540 [2024-11-02 11:47:15.740134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.540 qpair failed and we were unable to recover it. 00:35:15.540 [2024-11-02 11:47:15.740281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.540 [2024-11-02 11:47:15.740306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.540 qpair failed and we were unable to recover it. 00:35:15.540 [2024-11-02 11:47:15.740449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.540 [2024-11-02 11:47:15.740496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.540 qpair failed and we were unable to recover it. 00:35:15.540 [2024-11-02 11:47:15.740660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.540 [2024-11-02 11:47:15.740703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.540 qpair failed and we were unable to recover it. 00:35:15.540 [2024-11-02 11:47:15.740871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.540 [2024-11-02 11:47:15.740916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.540 qpair failed and we were unable to recover it. 00:35:15.540 [2024-11-02 11:47:15.741069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.540 [2024-11-02 11:47:15.741096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.540 qpair failed and we were unable to recover it. 
00:35:15.540 [2024-11-02 11:47:15.741250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.540 [2024-11-02 11:47:15.741282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.540 qpair failed and we were unable to recover it. 00:35:15.540 [2024-11-02 11:47:15.741483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.540 [2024-11-02 11:47:15.741527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.540 qpair failed and we were unable to recover it. 00:35:15.540 [2024-11-02 11:47:15.741669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.540 [2024-11-02 11:47:15.741711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.540 qpair failed and we were unable to recover it. 00:35:15.540 [2024-11-02 11:47:15.741907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.540 [2024-11-02 11:47:15.741951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.540 qpair failed and we were unable to recover it. 00:35:15.540 [2024-11-02 11:47:15.742101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.540 [2024-11-02 11:47:15.742127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.540 qpair failed and we were unable to recover it. 00:35:15.540 [2024-11-02 11:47:15.742300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.540 [2024-11-02 11:47:15.742326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.540 qpair failed and we were unable to recover it. 00:35:15.540 [2024-11-02 11:47:15.742502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.540 [2024-11-02 11:47:15.742546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.540 qpair failed and we were unable to recover it. 00:35:15.540 [2024-11-02 11:47:15.742704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.540 [2024-11-02 11:47:15.742748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.540 qpair failed and we were unable to recover it. 00:35:15.540 [2024-11-02 11:47:15.742896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.540 [2024-11-02 11:47:15.742924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.540 qpair failed and we were unable to recover it. 00:35:15.540 [2024-11-02 11:47:15.743049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.540 [2024-11-02 11:47:15.743075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.540 qpair failed and we were unable to recover it. 
00:35:15.540 [2024-11-02 11:47:15.743232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.540 [2024-11-02 11:47:15.743264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.540 qpair failed and we were unable to recover it. 00:35:15.540 [2024-11-02 11:47:15.743411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.540 [2024-11-02 11:47:15.743437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.540 qpair failed and we were unable to recover it. 00:35:15.540 [2024-11-02 11:47:15.743608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.540 [2024-11-02 11:47:15.743634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.540 qpair failed and we were unable to recover it. 00:35:15.540 [2024-11-02 11:47:15.743752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.540 [2024-11-02 11:47:15.743780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.540 qpair failed and we were unable to recover it. 00:35:15.540 [2024-11-02 11:47:15.743930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.540 [2024-11-02 11:47:15.743956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.540 qpair failed and we were unable to recover it. 00:35:15.540 [2024-11-02 11:47:15.744082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.540 [2024-11-02 11:47:15.744108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.540 qpair failed and we were unable to recover it. 00:35:15.540 [2024-11-02 11:47:15.744307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.540 [2024-11-02 11:47:15.744337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.540 qpair failed and we were unable to recover it. 00:35:15.540 [2024-11-02 11:47:15.744517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.540 [2024-11-02 11:47:15.744560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.540 qpair failed and we were unable to recover it. 00:35:15.540 [2024-11-02 11:47:15.744764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.540 [2024-11-02 11:47:15.744808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.540 qpair failed and we were unable to recover it. 00:35:15.540 [2024-11-02 11:47:15.744962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.540 [2024-11-02 11:47:15.744989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.540 qpair failed and we were unable to recover it. 
00:35:15.540 [2024-11-02 11:47:15.745162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.540 [2024-11-02 11:47:15.745187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420
00:35:15.540 qpair failed and we were unable to recover it.
00:35:15.542 [2024-11-02 11:47:15.757388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.542 [2024-11-02 11:47:15.757431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420
00:35:15.542 qpair failed and we were unable to recover it.
00:35:15.542 [2024-11-02 11:47:15.760626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.542 [2024-11-02 11:47:15.760661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420
00:35:15.543 qpair failed and we were unable to recover it.
00:35:15.544 [2024-11-02 11:47:15.775442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.544 [2024-11-02 11:47:15.775484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420
00:35:15.544 qpair failed and we were unable to recover it.
00:35:15.546 [2024-11-02 11:47:15.786480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.546 [2024-11-02 11:47:15.786507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.546 qpair failed and we were unable to recover it. 00:35:15.546 [2024-11-02 11:47:15.786680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.546 [2024-11-02 11:47:15.786706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.546 qpair failed and we were unable to recover it. 00:35:15.546 [2024-11-02 11:47:15.786882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.546 [2024-11-02 11:47:15.786909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.546 qpair failed and we were unable to recover it. 00:35:15.546 [2024-11-02 11:47:15.787022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.546 [2024-11-02 11:47:15.787053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.546 qpair failed and we were unable to recover it. 00:35:15.546 [2024-11-02 11:47:15.787200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.546 [2024-11-02 11:47:15.787226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.546 qpair failed and we were unable to recover it. 00:35:15.546 [2024-11-02 11:47:15.787357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.546 [2024-11-02 11:47:15.787383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.546 qpair failed and we were unable to recover it. 00:35:15.546 [2024-11-02 11:47:15.787510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.546 [2024-11-02 11:47:15.787535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.546 qpair failed and we were unable to recover it. 00:35:15.546 [2024-11-02 11:47:15.787684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.546 [2024-11-02 11:47:15.787710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.546 qpair failed and we were unable to recover it. 00:35:15.546 [2024-11-02 11:47:15.787850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.546 [2024-11-02 11:47:15.787876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.546 qpair failed and we were unable to recover it. 00:35:15.546 [2024-11-02 11:47:15.788028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.546 [2024-11-02 11:47:15.788054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.546 qpair failed and we were unable to recover it. 
00:35:15.546 [2024-11-02 11:47:15.788201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.546 [2024-11-02 11:47:15.788226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.546 qpair failed and we were unable to recover it. 00:35:15.546 [2024-11-02 11:47:15.788359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.546 [2024-11-02 11:47:15.788386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.546 qpair failed and we were unable to recover it. 00:35:15.546 [2024-11-02 11:47:15.788506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.546 [2024-11-02 11:47:15.788531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.546 qpair failed and we were unable to recover it. 00:35:15.546 [2024-11-02 11:47:15.788677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.546 [2024-11-02 11:47:15.788702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.546 qpair failed and we were unable to recover it. 00:35:15.546 [2024-11-02 11:47:15.788872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.546 [2024-11-02 11:47:15.788897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.546 qpair failed and we were unable to recover it. 00:35:15.546 [2024-11-02 11:47:15.789051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.546 [2024-11-02 11:47:15.789076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.546 qpair failed and we were unable to recover it. 00:35:15.546 [2024-11-02 11:47:15.789222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.546 [2024-11-02 11:47:15.789247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.546 qpair failed and we were unable to recover it. 00:35:15.546 [2024-11-02 11:47:15.789379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.546 [2024-11-02 11:47:15.789404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.546 qpair failed and we were unable to recover it. 00:35:15.546 [2024-11-02 11:47:15.789531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.546 [2024-11-02 11:47:15.789556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.546 qpair failed and we were unable to recover it. 00:35:15.546 [2024-11-02 11:47:15.789696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.547 [2024-11-02 11:47:15.789721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.547 qpair failed and we were unable to recover it. 
00:35:15.547 [2024-11-02 11:47:15.789826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.547 [2024-11-02 11:47:15.789851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.547 qpair failed and we were unable to recover it. 00:35:15.547 [2024-11-02 11:47:15.790003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.547 [2024-11-02 11:47:15.790029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.547 qpair failed and we were unable to recover it. 00:35:15.547 [2024-11-02 11:47:15.790174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.547 [2024-11-02 11:47:15.790199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.547 qpair failed and we were unable to recover it. 00:35:15.547 [2024-11-02 11:47:15.790329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.547 [2024-11-02 11:47:15.790356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.547 qpair failed and we were unable to recover it. 00:35:15.547 [2024-11-02 11:47:15.790476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.547 [2024-11-02 11:47:15.790502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.547 qpair failed and we were unable to recover it. 00:35:15.547 [2024-11-02 11:47:15.790675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.547 [2024-11-02 11:47:15.790701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.547 qpair failed and we were unable to recover it. 00:35:15.547 [2024-11-02 11:47:15.790876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.547 [2024-11-02 11:47:15.790901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.547 qpair failed and we were unable to recover it. 00:35:15.547 [2024-11-02 11:47:15.791054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.547 [2024-11-02 11:47:15.791080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.547 qpair failed and we were unable to recover it. 00:35:15.547 [2024-11-02 11:47:15.791202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.547 [2024-11-02 11:47:15.791227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.547 qpair failed and we were unable to recover it. 00:35:15.547 [2024-11-02 11:47:15.791404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.547 [2024-11-02 11:47:15.791430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.547 qpair failed and we were unable to recover it. 
00:35:15.547 [2024-11-02 11:47:15.791547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.547 [2024-11-02 11:47:15.791576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.547 qpair failed and we were unable to recover it. 00:35:15.547 [2024-11-02 11:47:15.791695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.547 [2024-11-02 11:47:15.791721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.547 qpair failed and we were unable to recover it. 00:35:15.547 [2024-11-02 11:47:15.791832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.547 [2024-11-02 11:47:15.791857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.547 qpair failed and we were unable to recover it. 00:35:15.547 [2024-11-02 11:47:15.792004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.547 [2024-11-02 11:47:15.792029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.547 qpair failed and we were unable to recover it. 00:35:15.547 [2024-11-02 11:47:15.792184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.547 [2024-11-02 11:47:15.792224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.547 qpair failed and we were unable to recover it. 00:35:15.547 [2024-11-02 11:47:15.792355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.547 [2024-11-02 11:47:15.792383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.547 qpair failed and we were unable to recover it. 00:35:15.547 [2024-11-02 11:47:15.792520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.547 [2024-11-02 11:47:15.792549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.547 qpair failed and we were unable to recover it. 00:35:15.547 [2024-11-02 11:47:15.792721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.547 [2024-11-02 11:47:15.792747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.547 qpair failed and we were unable to recover it. 00:35:15.547 [2024-11-02 11:47:15.792875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.547 [2024-11-02 11:47:15.792902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.547 qpair failed and we were unable to recover it. 00:35:15.547 [2024-11-02 11:47:15.793075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.547 [2024-11-02 11:47:15.793101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.547 qpair failed and we were unable to recover it. 
00:35:15.547 [2024-11-02 11:47:15.793231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.547 [2024-11-02 11:47:15.793262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.547 qpair failed and we were unable to recover it. 00:35:15.547 [2024-11-02 11:47:15.793387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.547 [2024-11-02 11:47:15.793412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.547 qpair failed and we were unable to recover it. 00:35:15.547 [2024-11-02 11:47:15.793544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.547 [2024-11-02 11:47:15.793570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.547 qpair failed and we were unable to recover it. 00:35:15.547 [2024-11-02 11:47:15.793692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.547 [2024-11-02 11:47:15.793719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.547 qpair failed and we were unable to recover it. 00:35:15.547 [2024-11-02 11:47:15.793901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.547 [2024-11-02 11:47:15.793927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.547 qpair failed and we were unable to recover it. 00:35:15.547 [2024-11-02 11:47:15.794082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.547 [2024-11-02 11:47:15.794107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.547 qpair failed and we were unable to recover it. 00:35:15.547 [2024-11-02 11:47:15.794264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.547 [2024-11-02 11:47:15.794289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.547 qpair failed and we were unable to recover it. 00:35:15.547 [2024-11-02 11:47:15.794435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.547 [2024-11-02 11:47:15.794460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.547 qpair failed and we were unable to recover it. 00:35:15.547 [2024-11-02 11:47:15.794608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.547 [2024-11-02 11:47:15.794633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.547 qpair failed and we were unable to recover it. 00:35:15.547 [2024-11-02 11:47:15.794783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.547 [2024-11-02 11:47:15.794809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.547 qpair failed and we were unable to recover it. 
00:35:15.547 [2024-11-02 11:47:15.794979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.547 [2024-11-02 11:47:15.795004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.547 qpair failed and we were unable to recover it. 00:35:15.547 [2024-11-02 11:47:15.795177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.547 [2024-11-02 11:47:15.795202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.547 qpair failed and we were unable to recover it. 00:35:15.547 [2024-11-02 11:47:15.795319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.547 [2024-11-02 11:47:15.795345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.547 qpair failed and we were unable to recover it. 00:35:15.547 [2024-11-02 11:47:15.795485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.547 [2024-11-02 11:47:15.795511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.547 qpair failed and we were unable to recover it. 00:35:15.547 [2024-11-02 11:47:15.795661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.547 [2024-11-02 11:47:15.795686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.547 qpair failed and we were unable to recover it. 00:35:15.547 [2024-11-02 11:47:15.795851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.547 [2024-11-02 11:47:15.795876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.547 qpair failed and we were unable to recover it. 00:35:15.547 [2024-11-02 11:47:15.796000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.547 [2024-11-02 11:47:15.796026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.547 qpair failed and we were unable to recover it. 00:35:15.547 [2024-11-02 11:47:15.796172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.547 [2024-11-02 11:47:15.796201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.547 qpair failed and we were unable to recover it. 00:35:15.547 [2024-11-02 11:47:15.796374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.548 [2024-11-02 11:47:15.796413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.548 qpair failed and we were unable to recover it. 00:35:15.548 [2024-11-02 11:47:15.796567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.548 [2024-11-02 11:47:15.796596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.548 qpair failed and we were unable to recover it. 
00:35:15.548 [2024-11-02 11:47:15.796777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.548 [2024-11-02 11:47:15.796804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.548 qpair failed and we were unable to recover it. 00:35:15.548 [2024-11-02 11:47:15.796954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.548 [2024-11-02 11:47:15.796981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.548 qpair failed and we were unable to recover it. 00:35:15.548 [2024-11-02 11:47:15.797126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.548 [2024-11-02 11:47:15.797153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.548 qpair failed and we were unable to recover it. 00:35:15.548 [2024-11-02 11:47:15.797305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.548 [2024-11-02 11:47:15.797332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.548 qpair failed and we were unable to recover it. 00:35:15.548 [2024-11-02 11:47:15.797446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.548 [2024-11-02 11:47:15.797473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.548 qpair failed and we were unable to recover it. 00:35:15.548 [2024-11-02 11:47:15.797592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.548 [2024-11-02 11:47:15.797618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.548 qpair failed and we were unable to recover it. 00:35:15.548 [2024-11-02 11:47:15.797796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.548 [2024-11-02 11:47:15.797824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.548 qpair failed and we were unable to recover it. 00:35:15.548 [2024-11-02 11:47:15.798003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.548 [2024-11-02 11:47:15.798030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.548 qpair failed and we were unable to recover it. 00:35:15.548 [2024-11-02 11:47:15.798180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.548 [2024-11-02 11:47:15.798205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.548 qpair failed and we were unable to recover it. 00:35:15.548 [2024-11-02 11:47:15.798404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.548 [2024-11-02 11:47:15.798430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.548 qpair failed and we were unable to recover it. 
00:35:15.548 [2024-11-02 11:47:15.798546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.548 [2024-11-02 11:47:15.798573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.548 qpair failed and we were unable to recover it. 00:35:15.548 [2024-11-02 11:47:15.798724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.548 [2024-11-02 11:47:15.798750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.548 qpair failed and we were unable to recover it. 00:35:15.548 [2024-11-02 11:47:15.798892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.548 [2024-11-02 11:47:15.798917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.548 qpair failed and we were unable to recover it. 00:35:15.548 [2024-11-02 11:47:15.799103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.548 [2024-11-02 11:47:15.799128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.548 qpair failed and we were unable to recover it. 00:35:15.548 [2024-11-02 11:47:15.799242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.548 [2024-11-02 11:47:15.799275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.548 qpair failed and we were unable to recover it. 00:35:15.548 [2024-11-02 11:47:15.799424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.548 [2024-11-02 11:47:15.799449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.548 qpair failed and we were unable to recover it. 00:35:15.548 [2024-11-02 11:47:15.799595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.548 [2024-11-02 11:47:15.799624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.548 qpair failed and we were unable to recover it. 00:35:15.548 [2024-11-02 11:47:15.799740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.548 [2024-11-02 11:47:15.799768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.548 qpair failed and we were unable to recover it. 00:35:15.548 [2024-11-02 11:47:15.799918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.548 [2024-11-02 11:47:15.799946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.548 qpair failed and we were unable to recover it. 00:35:15.548 [2024-11-02 11:47:15.800122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.548 [2024-11-02 11:47:15.800149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.548 qpair failed and we were unable to recover it. 
00:35:15.548 [2024-11-02 11:47:15.800319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.548 [2024-11-02 11:47:15.800345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.548 qpair failed and we were unable to recover it. 00:35:15.548 [2024-11-02 11:47:15.800501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.548 [2024-11-02 11:47:15.800527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.548 qpair failed and we were unable to recover it. 00:35:15.548 [2024-11-02 11:47:15.800675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.548 [2024-11-02 11:47:15.800701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.548 qpair failed and we were unable to recover it. 00:35:15.548 [2024-11-02 11:47:15.800873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.548 [2024-11-02 11:47:15.800899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.548 qpair failed and we were unable to recover it. 00:35:15.548 [2024-11-02 11:47:15.801054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.548 [2024-11-02 11:47:15.801085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.548 qpair failed and we were unable to recover it. 00:35:15.548 [2024-11-02 11:47:15.801218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.548 [2024-11-02 11:47:15.801264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.548 qpair failed and we were unable to recover it. 00:35:15.548 [2024-11-02 11:47:15.801461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.548 [2024-11-02 11:47:15.801500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.548 qpair failed and we were unable to recover it. 00:35:15.548 [2024-11-02 11:47:15.801683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.548 [2024-11-02 11:47:15.801710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.548 qpair failed and we were unable to recover it. 00:35:15.548 [2024-11-02 11:47:15.801886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.548 [2024-11-02 11:47:15.801912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.548 qpair failed and we were unable to recover it. 00:35:15.548 [2024-11-02 11:47:15.802058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.548 [2024-11-02 11:47:15.802084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.548 qpair failed and we were unable to recover it. 
00:35:15.548 [2024-11-02 11:47:15.802235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.548 [2024-11-02 11:47:15.802266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.548 qpair failed and we were unable to recover it. 00:35:15.548 [2024-11-02 11:47:15.802414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.548 [2024-11-02 11:47:15.802440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.548 qpair failed and we were unable to recover it. 00:35:15.548 [2024-11-02 11:47:15.802573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.548 [2024-11-02 11:47:15.802599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.548 qpair failed and we were unable to recover it. 00:35:15.548 [2024-11-02 11:47:15.802740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.548 [2024-11-02 11:47:15.802766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.548 qpair failed and we were unable to recover it. 00:35:15.548 [2024-11-02 11:47:15.802915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.548 [2024-11-02 11:47:15.802941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.548 qpair failed and we were unable to recover it. 00:35:15.548 [2024-11-02 11:47:15.803166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.548 [2024-11-02 11:47:15.803191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.549 qpair failed and we were unable to recover it. 00:35:15.549 [2024-11-02 11:47:15.803316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.549 [2024-11-02 11:47:15.803343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.549 qpair failed and we were unable to recover it. 00:35:15.549 [2024-11-02 11:47:15.803485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.549 [2024-11-02 11:47:15.803511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.549 qpair failed and we were unable to recover it. 00:35:15.549 [2024-11-02 11:47:15.803638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.549 [2024-11-02 11:47:15.803663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.549 qpair failed and we were unable to recover it. 00:35:15.549 [2024-11-02 11:47:15.803789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.549 [2024-11-02 11:47:15.803814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.549 qpair failed and we were unable to recover it. 
00:35:15.549 [2024-11-02 11:47:15.804038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.549 [2024-11-02 11:47:15.804064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.549 qpair failed and we were unable to recover it. 00:35:15.549 [2024-11-02 11:47:15.804187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.549 [2024-11-02 11:47:15.804213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.549 qpair failed and we were unable to recover it. 00:35:15.549 [2024-11-02 11:47:15.804370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.549 [2024-11-02 11:47:15.804396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.549 qpair failed and we were unable to recover it. 00:35:15.549 [2024-11-02 11:47:15.804542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.549 [2024-11-02 11:47:15.804568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.549 qpair failed and we were unable to recover it. 00:35:15.549 [2024-11-02 11:47:15.804743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.549 [2024-11-02 11:47:15.804768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.549 qpair failed and we were unable to recover it. 00:35:15.549 [2024-11-02 11:47:15.804917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.549 [2024-11-02 11:47:15.804944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.549 qpair failed and we were unable to recover it. 00:35:15.549 [2024-11-02 11:47:15.805169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.549 [2024-11-02 11:47:15.805195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.549 qpair failed and we were unable to recover it. 00:35:15.549 [2024-11-02 11:47:15.805378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.549 [2024-11-02 11:47:15.805404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.549 qpair failed and we were unable to recover it. 00:35:15.549 [2024-11-02 11:47:15.805531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.549 [2024-11-02 11:47:15.805557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.549 qpair failed and we were unable to recover it. 00:35:15.549 [2024-11-02 11:47:15.805708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.549 [2024-11-02 11:47:15.805735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.549 qpair failed and we were unable to recover it. 
00:35:15.549 [2024-11-02 11:47:15.805883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.549 [2024-11-02 11:47:15.805909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.549 qpair failed and we were unable to recover it. 00:35:15.549 [2024-11-02 11:47:15.806141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.549 [2024-11-02 11:47:15.806168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.549 qpair failed and we were unable to recover it. 00:35:15.549 [2024-11-02 11:47:15.806341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.549 [2024-11-02 11:47:15.806368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.549 qpair failed and we were unable to recover it. 00:35:15.549 [2024-11-02 11:47:15.806523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.549 [2024-11-02 11:47:15.806548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.549 qpair failed and we were unable to recover it. 00:35:15.549 [2024-11-02 11:47:15.806696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.549 [2024-11-02 11:47:15.806723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.549 qpair failed and we were unable to recover it. 00:35:15.549 [2024-11-02 11:47:15.806834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.549 [2024-11-02 11:47:15.806861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.549 qpair failed and we were unable to recover it. 00:35:15.549 [2024-11-02 11:47:15.807035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.549 [2024-11-02 11:47:15.807060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.549 qpair failed and we were unable to recover it. 00:35:15.549 [2024-11-02 11:47:15.807213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.549 [2024-11-02 11:47:15.807239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.549 qpair failed and we were unable to recover it. 00:35:15.549 [2024-11-02 11:47:15.807370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.549 [2024-11-02 11:47:15.807396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.549 qpair failed and we were unable to recover it. 00:35:15.549 [2024-11-02 11:47:15.807551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.549 [2024-11-02 11:47:15.807582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.549 qpair failed and we were unable to recover it. 
00:35:15.549 [2024-11-02 11:47:15.807762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.549 [2024-11-02 11:47:15.807789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.549 qpair failed and we were unable to recover it. 00:35:15.549 [2024-11-02 11:47:15.807932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.549 [2024-11-02 11:47:15.807958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.549 qpair failed and we were unable to recover it. 00:35:15.549 [2024-11-02 11:47:15.808102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.549 [2024-11-02 11:47:15.808128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.549 qpair failed and we were unable to recover it. 00:35:15.549 [2024-11-02 11:47:15.808302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.549 [2024-11-02 11:47:15.808329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.549 qpair failed and we were unable to recover it. 00:35:15.549 [2024-11-02 11:47:15.808446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.549 [2024-11-02 11:47:15.808477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.549 qpair failed and we were unable to recover it. 00:35:15.549 [2024-11-02 11:47:15.808598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.549 [2024-11-02 11:47:15.808625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.549 qpair failed and we were unable to recover it. 00:35:15.549 [2024-11-02 11:47:15.808752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.549 [2024-11-02 11:47:15.808778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.549 qpair failed and we were unable to recover it. 00:35:15.549 [2024-11-02 11:47:15.808929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.549 [2024-11-02 11:47:15.808955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.549 qpair failed and we were unable to recover it. 00:35:15.549 [2024-11-02 11:47:15.809116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.549 [2024-11-02 11:47:15.809142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.549 qpair failed and we were unable to recover it. 00:35:15.549 [2024-11-02 11:47:15.809295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.549 [2024-11-02 11:47:15.809321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.549 qpair failed and we were unable to recover it. 
00:35:15.549 [2024-11-02 11:47:15.809461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.549 [2024-11-02 11:47:15.809487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.549 qpair failed and we were unable to recover it. 00:35:15.549 [2024-11-02 11:47:15.809632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.549 [2024-11-02 11:47:15.809658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.549 qpair failed and we were unable to recover it. 00:35:15.549 [2024-11-02 11:47:15.809809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.549 [2024-11-02 11:47:15.809835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.549 qpair failed and we were unable to recover it. 00:35:15.549 [2024-11-02 11:47:15.809967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.549 [2024-11-02 11:47:15.810007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.549 qpair failed and we were unable to recover it. 00:35:15.549 [2024-11-02 11:47:15.810162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.550 [2024-11-02 11:47:15.810189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.550 qpair failed and we were unable to recover it. 00:35:15.550 [2024-11-02 11:47:15.810421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.550 [2024-11-02 11:47:15.810447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.550 qpair failed and we were unable to recover it. 00:35:15.550 [2024-11-02 11:47:15.810565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.550 [2024-11-02 11:47:15.810591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.550 qpair failed and we were unable to recover it. 00:35:15.550 [2024-11-02 11:47:15.810748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.550 [2024-11-02 11:47:15.810774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.550 qpair failed and we were unable to recover it. 00:35:15.550 [2024-11-02 11:47:15.810938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.550 [2024-11-02 11:47:15.810964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.550 qpair failed and we were unable to recover it. 00:35:15.550 [2024-11-02 11:47:15.811113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.550 [2024-11-02 11:47:15.811140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.550 qpair failed and we were unable to recover it. 
00:35:15.555 [2024-11-02 11:47:15.847148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.555 [2024-11-02 11:47:15.847174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.555 qpair failed and we were unable to recover it. 00:35:15.555 [2024-11-02 11:47:15.847357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.555 [2024-11-02 11:47:15.847384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.555 qpair failed and we were unable to recover it. 00:35:15.555 [2024-11-02 11:47:15.847534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.555 [2024-11-02 11:47:15.847560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.555 qpair failed and we were unable to recover it. 00:35:15.555 [2024-11-02 11:47:15.847712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.555 [2024-11-02 11:47:15.847739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.555 qpair failed and we were unable to recover it. 00:35:15.555 [2024-11-02 11:47:15.847855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.555 [2024-11-02 11:47:15.847882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.555 qpair failed and we were unable to recover it. 00:35:15.555 [2024-11-02 11:47:15.848035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.555 [2024-11-02 11:47:15.848062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.555 qpair failed and we were unable to recover it. 00:35:15.555 [2024-11-02 11:47:15.848183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.555 [2024-11-02 11:47:15.848210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.555 qpair failed and we were unable to recover it. 00:35:15.555 [2024-11-02 11:47:15.848339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.555 [2024-11-02 11:47:15.848366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.555 qpair failed and we were unable to recover it. 00:35:15.555 [2024-11-02 11:47:15.848517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.555 [2024-11-02 11:47:15.848543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.555 qpair failed and we were unable to recover it. 00:35:15.555 [2024-11-02 11:47:15.848694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.555 [2024-11-02 11:47:15.848720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.555 qpair failed and we were unable to recover it. 
00:35:15.555 [2024-11-02 11:47:15.848871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.555 [2024-11-02 11:47:15.848898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.555 qpair failed and we were unable to recover it. 00:35:15.555 [2024-11-02 11:47:15.849049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.555 [2024-11-02 11:47:15.849075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.555 qpair failed and we were unable to recover it. 00:35:15.555 [2024-11-02 11:47:15.849193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.555 [2024-11-02 11:47:15.849219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.555 qpair failed and we were unable to recover it. 00:35:15.555 [2024-11-02 11:47:15.849379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.555 [2024-11-02 11:47:15.849406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.555 qpair failed and we were unable to recover it. 00:35:15.555 [2024-11-02 11:47:15.849570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.555 [2024-11-02 11:47:15.849596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.555 qpair failed and we were unable to recover it. 00:35:15.555 [2024-11-02 11:47:15.849720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.556 [2024-11-02 11:47:15.849746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.556 qpair failed and we were unable to recover it. 00:35:15.556 [2024-11-02 11:47:15.849864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.556 [2024-11-02 11:47:15.849890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.556 qpair failed and we were unable to recover it. 00:35:15.556 [2024-11-02 11:47:15.850065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.556 [2024-11-02 11:47:15.850091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.556 qpair failed and we were unable to recover it. 00:35:15.556 [2024-11-02 11:47:15.850265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.556 [2024-11-02 11:47:15.850305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.556 qpair failed and we were unable to recover it. 00:35:15.556 [2024-11-02 11:47:15.850458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.556 [2024-11-02 11:47:15.850485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.556 qpair failed and we were unable to recover it. 
00:35:15.556 [2024-11-02 11:47:15.850658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.556 [2024-11-02 11:47:15.850685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.556 qpair failed and we were unable to recover it. 00:35:15.556 [2024-11-02 11:47:15.850829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.556 [2024-11-02 11:47:15.850856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.556 qpair failed and we were unable to recover it. 00:35:15.556 [2024-11-02 11:47:15.851017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.556 [2024-11-02 11:47:15.851056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.556 qpair failed and we were unable to recover it. 00:35:15.556 [2024-11-02 11:47:15.851240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.556 [2024-11-02 11:47:15.851273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.556 qpair failed and we were unable to recover it. 00:35:15.556 [2024-11-02 11:47:15.851426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.556 [2024-11-02 11:47:15.851453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.556 qpair failed and we were unable to recover it. 00:35:15.556 [2024-11-02 11:47:15.851603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.556 [2024-11-02 11:47:15.851629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.556 qpair failed and we were unable to recover it. 00:35:15.556 [2024-11-02 11:47:15.851796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.556 [2024-11-02 11:47:15.851835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.556 qpair failed and we were unable to recover it. 00:35:15.556 [2024-11-02 11:47:15.852038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.556 [2024-11-02 11:47:15.852085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.556 qpair failed and we were unable to recover it. 00:35:15.556 [2024-11-02 11:47:15.852230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.556 [2024-11-02 11:47:15.852281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.556 qpair failed and we were unable to recover it. 00:35:15.556 [2024-11-02 11:47:15.852464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.556 [2024-11-02 11:47:15.852508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.556 qpair failed and we were unable to recover it. 
00:35:15.556 [2024-11-02 11:47:15.852738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.556 [2024-11-02 11:47:15.852783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.556 qpair failed and we were unable to recover it. 00:35:15.556 [2024-11-02 11:47:15.852964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.556 [2024-11-02 11:47:15.853016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.556 qpair failed and we were unable to recover it. 00:35:15.556 [2024-11-02 11:47:15.853153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.556 [2024-11-02 11:47:15.853194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.556 qpair failed and we were unable to recover it. 00:35:15.556 [2024-11-02 11:47:15.853343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.556 [2024-11-02 11:47:15.853389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.556 qpair failed and we were unable to recover it. 00:35:15.556 [2024-11-02 11:47:15.853561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.556 [2024-11-02 11:47:15.853591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.556 qpair failed and we were unable to recover it. 00:35:15.556 [2024-11-02 11:47:15.853767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.556 [2024-11-02 11:47:15.853793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.556 qpair failed and we were unable to recover it. 00:35:15.556 [2024-11-02 11:47:15.853943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.556 [2024-11-02 11:47:15.853969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.556 qpair failed and we were unable to recover it. 00:35:15.556 [2024-11-02 11:47:15.854082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.556 [2024-11-02 11:47:15.854109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.556 qpair failed and we were unable to recover it. 00:35:15.556 [2024-11-02 11:47:15.854268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.556 [2024-11-02 11:47:15.854307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.556 qpair failed and we were unable to recover it. 00:35:15.556 [2024-11-02 11:47:15.854449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.556 [2024-11-02 11:47:15.854484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.556 qpair failed and we were unable to recover it. 
00:35:15.556 [2024-11-02 11:47:15.854663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.556 [2024-11-02 11:47:15.854692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.556 qpair failed and we were unable to recover it. 00:35:15.556 [2024-11-02 11:47:15.854882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.556 [2024-11-02 11:47:15.854929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.556 qpair failed and we were unable to recover it. 00:35:15.556 [2024-11-02 11:47:15.855100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.556 [2024-11-02 11:47:15.855128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.556 qpair failed and we were unable to recover it. 00:35:15.556 [2024-11-02 11:47:15.855325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.556 [2024-11-02 11:47:15.855354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.556 qpair failed and we were unable to recover it. 00:35:15.556 [2024-11-02 11:47:15.855501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.556 [2024-11-02 11:47:15.855528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.556 qpair failed and we were unable to recover it. 00:35:15.556 [2024-11-02 11:47:15.855669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.556 [2024-11-02 11:47:15.855694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.556 qpair failed and we were unable to recover it. 00:35:15.556 [2024-11-02 11:47:15.855868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.556 [2024-11-02 11:47:15.855912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.556 qpair failed and we were unable to recover it. 00:35:15.556 [2024-11-02 11:47:15.856057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.556 [2024-11-02 11:47:15.856101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.556 qpair failed and we were unable to recover it. 00:35:15.556 [2024-11-02 11:47:15.856289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.556 [2024-11-02 11:47:15.856316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.556 qpair failed and we were unable to recover it. 00:35:15.556 [2024-11-02 11:47:15.856476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.556 [2024-11-02 11:47:15.856519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.556 qpair failed and we were unable to recover it. 
00:35:15.556 [2024-11-02 11:47:15.856786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.556 [2024-11-02 11:47:15.856832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.556 qpair failed and we were unable to recover it. 00:35:15.556 [2024-11-02 11:47:15.856966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.556 [2024-11-02 11:47:15.857009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.556 qpair failed and we were unable to recover it. 00:35:15.556 [2024-11-02 11:47:15.857173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.556 [2024-11-02 11:47:15.857199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.556 qpair failed and we were unable to recover it. 00:35:15.556 [2024-11-02 11:47:15.857371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.556 [2024-11-02 11:47:15.857417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.556 qpair failed and we were unable to recover it. 00:35:15.556 [2024-11-02 11:47:15.857623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.556 [2024-11-02 11:47:15.857667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.556 qpair failed and we were unable to recover it. 00:35:15.556 [2024-11-02 11:47:15.857899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.557 [2024-11-02 11:47:15.857950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.557 qpair failed and we were unable to recover it. 00:35:15.557 [2024-11-02 11:47:15.858093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.557 [2024-11-02 11:47:15.858121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.557 qpair failed and we were unable to recover it. 00:35:15.557 [2024-11-02 11:47:15.858280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.557 [2024-11-02 11:47:15.858308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.557 qpair failed and we were unable to recover it. 00:35:15.557 [2024-11-02 11:47:15.858512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.557 [2024-11-02 11:47:15.858557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.557 qpair failed and we were unable to recover it. 00:35:15.557 [2024-11-02 11:47:15.858780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.557 [2024-11-02 11:47:15.858824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.557 qpair failed and we were unable to recover it. 
00:35:15.557 [2024-11-02 11:47:15.859001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.557 [2024-11-02 11:47:15.859027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.557 qpair failed and we were unable to recover it. 00:35:15.557 [2024-11-02 11:47:15.859233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.557 [2024-11-02 11:47:15.859265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.557 qpair failed and we were unable to recover it. 00:35:15.557 [2024-11-02 11:47:15.859433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.557 [2024-11-02 11:47:15.859476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.557 qpair failed and we were unable to recover it. 00:35:15.557 [2024-11-02 11:47:15.859754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.557 [2024-11-02 11:47:15.859799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.557 qpair failed and we were unable to recover it. 00:35:15.557 [2024-11-02 11:47:15.860065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.557 [2024-11-02 11:47:15.860109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.557 qpair failed and we were unable to recover it. 00:35:15.557 [2024-11-02 11:47:15.860281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.557 [2024-11-02 11:47:15.860308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.557 qpair failed and we were unable to recover it. 00:35:15.557 [2024-11-02 11:47:15.860454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.557 [2024-11-02 11:47:15.860502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.557 qpair failed and we were unable to recover it. 00:35:15.557 [2024-11-02 11:47:15.860680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.557 [2024-11-02 11:47:15.860723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.557 qpair failed and we were unable to recover it. 00:35:15.557 [2024-11-02 11:47:15.860890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.557 [2024-11-02 11:47:15.860918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.557 qpair failed and we were unable to recover it. 00:35:15.557 [2024-11-02 11:47:15.861056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.557 [2024-11-02 11:47:15.861084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.557 qpair failed and we were unable to recover it. 
00:35:15.557 [2024-11-02 11:47:15.861228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.557 [2024-11-02 11:47:15.861267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.557 qpair failed and we were unable to recover it. 00:35:15.557 [2024-11-02 11:47:15.861474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.557 [2024-11-02 11:47:15.861521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.557 qpair failed and we were unable to recover it. 00:35:15.557 [2024-11-02 11:47:15.861721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.557 [2024-11-02 11:47:15.861769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.557 qpair failed and we were unable to recover it. 00:35:15.557 [2024-11-02 11:47:15.861939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.557 [2024-11-02 11:47:15.861984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.557 qpair failed and we were unable to recover it. 00:35:15.557 [2024-11-02 11:47:15.862103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.557 [2024-11-02 11:47:15.862129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.557 qpair failed and we were unable to recover it. 00:35:15.557 [2024-11-02 11:47:15.862253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.557 [2024-11-02 11:47:15.862287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.557 qpair failed and we were unable to recover it. 00:35:15.557 [2024-11-02 11:47:15.862412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.557 [2024-11-02 11:47:15.862438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.557 qpair failed and we were unable to recover it. 00:35:15.557 [2024-11-02 11:47:15.862556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.557 [2024-11-02 11:47:15.862582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.557 qpair failed and we were unable to recover it. 00:35:15.557 [2024-11-02 11:47:15.862754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.557 [2024-11-02 11:47:15.862780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.557 qpair failed and we were unable to recover it. 00:35:15.557 [2024-11-02 11:47:15.862952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.557 [2024-11-02 11:47:15.862980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.557 qpair failed and we were unable to recover it. 
00:35:15.557 [2024-11-02 11:47:15.863145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.557 [2024-11-02 11:47:15.863184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.557 qpair failed and we were unable to recover it. 00:35:15.557 [2024-11-02 11:47:15.863370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.557 [2024-11-02 11:47:15.863407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.557 qpair failed and we were unable to recover it. 00:35:15.557 [2024-11-02 11:47:15.863596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.557 [2024-11-02 11:47:15.863625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.557 qpair failed and we were unable to recover it. 00:35:15.557 [2024-11-02 11:47:15.863787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.557 [2024-11-02 11:47:15.863816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.557 qpair failed and we were unable to recover it. 00:35:15.557 [2024-11-02 11:47:15.863947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.557 [2024-11-02 11:47:15.863974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.557 qpair failed and we were unable to recover it. 00:35:15.557 [2024-11-02 11:47:15.864090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.557 [2024-11-02 11:47:15.864118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.557 qpair failed and we were unable to recover it. 00:35:15.557 [2024-11-02 11:47:15.864315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.557 [2024-11-02 11:47:15.864345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.557 qpair failed and we were unable to recover it. 00:35:15.557 [2024-11-02 11:47:15.864572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.557 [2024-11-02 11:47:15.864605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.557 qpair failed and we were unable to recover it. 00:35:15.557 [2024-11-02 11:47:15.864740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.557 [2024-11-02 11:47:15.864769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.557 qpair failed and we were unable to recover it. 00:35:15.557 [2024-11-02 11:47:15.864967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.557 [2024-11-02 11:47:15.864992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.557 qpair failed and we were unable to recover it. 
00:35:15.557 [2024-11-02 11:47:15.865141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.557 [2024-11-02 11:47:15.865167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.557 qpair failed and we were unable to recover it. 00:35:15.557 [2024-11-02 11:47:15.865358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.557 [2024-11-02 11:47:15.865388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.557 qpair failed and we were unable to recover it. 00:35:15.557 [2024-11-02 11:47:15.865564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.557 [2024-11-02 11:47:15.865592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.557 qpair failed and we were unable to recover it. 00:35:15.557 [2024-11-02 11:47:15.865759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.557 [2024-11-02 11:47:15.865788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.557 qpair failed and we were unable to recover it. 00:35:15.557 [2024-11-02 11:47:15.865967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.557 [2024-11-02 11:47:15.865993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.557 qpair failed and we were unable to recover it. 00:35:15.557 [2024-11-02 11:47:15.866115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.557 [2024-11-02 11:47:15.866141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.557 qpair failed and we were unable to recover it. 00:35:15.557 [2024-11-02 11:47:15.866289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.557 [2024-11-02 11:47:15.866338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.557 qpair failed and we were unable to recover it. 00:35:15.558 [2024-11-02 11:47:15.866525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.558 [2024-11-02 11:47:15.866555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.558 qpair failed and we were unable to recover it. 00:35:15.558 [2024-11-02 11:47:15.866776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.558 [2024-11-02 11:47:15.866825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.558 qpair failed and we were unable to recover it. 00:35:15.558 [2024-11-02 11:47:15.866993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.558 [2024-11-02 11:47:15.867020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.558 qpair failed and we were unable to recover it. 
00:35:15.558 [2024-11-02 11:47:15.867193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.558 [2024-11-02 11:47:15.867219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.558 qpair failed and we were unable to recover it. 00:35:15.558 [2024-11-02 11:47:15.867397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.558 [2024-11-02 11:47:15.867427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.558 qpair failed and we were unable to recover it. 00:35:15.558 [2024-11-02 11:47:15.867653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.558 [2024-11-02 11:47:15.867703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.558 qpair failed and we were unable to recover it. 00:35:15.558 [2024-11-02 11:47:15.867855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.558 [2024-11-02 11:47:15.867884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.558 qpair failed and we were unable to recover it. 00:35:15.558 [2024-11-02 11:47:15.868055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.558 [2024-11-02 11:47:15.868081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.558 qpair failed and we were unable to recover it. 00:35:15.558 [2024-11-02 11:47:15.868200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.558 [2024-11-02 11:47:15.868226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.558 qpair failed and we were unable to recover it. 00:35:15.558 [2024-11-02 11:47:15.868448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.558 [2024-11-02 11:47:15.868484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.558 qpair failed and we were unable to recover it. 00:35:15.558 [2024-11-02 11:47:15.868760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.558 [2024-11-02 11:47:15.868806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.558 qpair failed and we were unable to recover it. 00:35:15.558 [2024-11-02 11:47:15.868977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.558 [2024-11-02 11:47:15.869003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.558 qpair failed and we were unable to recover it. 00:35:15.558 [2024-11-02 11:47:15.869174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.558 [2024-11-02 11:47:15.869200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.558 qpair failed and we were unable to recover it. 
00:35:15.558 [2024-11-02 11:47:15.869380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.558 [2024-11-02 11:47:15.869410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.558 qpair failed and we were unable to recover it. 00:35:15.558 [2024-11-02 11:47:15.869543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.558 [2024-11-02 11:47:15.869586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.558 qpair failed and we were unable to recover it. 00:35:15.558 [2024-11-02 11:47:15.869795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.558 [2024-11-02 11:47:15.869821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.558 qpair failed and we were unable to recover it. 00:35:15.558 [2024-11-02 11:47:15.869972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.558 [2024-11-02 11:47:15.869999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.558 qpair failed and we were unable to recover it. 00:35:15.558 [2024-11-02 11:47:15.870121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.558 [2024-11-02 11:47:15.870147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.558 qpair failed and we were unable to recover it. 00:35:15.558 [2024-11-02 11:47:15.870293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.558 [2024-11-02 11:47:15.870321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.558 qpair failed and we were unable to recover it. 00:35:15.558 [2024-11-02 11:47:15.870444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.558 [2024-11-02 11:47:15.870470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.558 qpair failed and we were unable to recover it. 00:35:15.558 [2024-11-02 11:47:15.870623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.558 [2024-11-02 11:47:15.870649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.558 qpair failed and we were unable to recover it. 00:35:15.558 [2024-11-02 11:47:15.870831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.558 [2024-11-02 11:47:15.870857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.558 qpair failed and we were unable to recover it. 00:35:15.558 [2024-11-02 11:47:15.871034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.558 [2024-11-02 11:47:15.871060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.558 qpair failed and we were unable to recover it. 
00:35:15.558 [2024-11-02 11:47:15.871214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.558 [2024-11-02 11:47:15.871240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.558 qpair failed and we were unable to recover it. 00:35:15.558 [2024-11-02 11:47:15.871403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.558 [2024-11-02 11:47:15.871445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.558 qpair failed and we were unable to recover it. 00:35:15.558 [2024-11-02 11:47:15.871649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.558 [2024-11-02 11:47:15.871679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.558 qpair failed and we were unable to recover it. 00:35:15.558 [2024-11-02 11:47:15.871921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.558 [2024-11-02 11:47:15.871973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.558 qpair failed and we were unable to recover it. 00:35:15.558 [2024-11-02 11:47:15.872152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.558 [2024-11-02 11:47:15.872178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.558 qpair failed and we were unable to recover it. 00:35:15.558 [2024-11-02 11:47:15.872328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.558 [2024-11-02 11:47:15.872373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.558 qpair failed and we were unable to recover it. 00:35:15.558 [2024-11-02 11:47:15.872519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.558 [2024-11-02 11:47:15.872545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.558 qpair failed and we were unable to recover it. 00:35:15.558 [2024-11-02 11:47:15.872712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.558 [2024-11-02 11:47:15.872742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.558 qpair failed and we were unable to recover it. 00:35:15.558 [2024-11-02 11:47:15.872986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.558 [2024-11-02 11:47:15.873015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.558 qpair failed and we were unable to recover it. 00:35:15.558 [2024-11-02 11:47:15.873179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.558 [2024-11-02 11:47:15.873205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.558 qpair failed and we were unable to recover it. 
00:35:15.558 [2024-11-02 11:47:15.873380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.558 [2024-11-02 11:47:15.873410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.558 qpair failed and we were unable to recover it. 00:35:15.558 [2024-11-02 11:47:15.873618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.558 [2024-11-02 11:47:15.873647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.558 qpair failed and we were unable to recover it. 00:35:15.558 [2024-11-02 11:47:15.873869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.558 [2024-11-02 11:47:15.873899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.558 qpair failed and we were unable to recover it. 00:35:15.558 [2024-11-02 11:47:15.874093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.558 [2024-11-02 11:47:15.874119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.558 qpair failed and we were unable to recover it. 00:35:15.558 [2024-11-02 11:47:15.874245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.558 [2024-11-02 11:47:15.874277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.558 qpair failed and we were unable to recover it. 00:35:15.558 [2024-11-02 11:47:15.874445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.558 [2024-11-02 11:47:15.874475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.558 qpair failed and we were unable to recover it. 00:35:15.558 [2024-11-02 11:47:15.874651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.558 [2024-11-02 11:47:15.874680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.558 qpair failed and we were unable to recover it. 00:35:15.558 [2024-11-02 11:47:15.874848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.558 [2024-11-02 11:47:15.874874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.558 qpair failed and we were unable to recover it. 00:35:15.558 [2024-11-02 11:47:15.875026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.559 [2024-11-02 11:47:15.875053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.559 qpair failed and we were unable to recover it. 00:35:15.559 [2024-11-02 11:47:15.875200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.559 [2024-11-02 11:47:15.875226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.559 qpair failed and we were unable to recover it. 
00:35:15.559 [2024-11-02 11:47:15.875403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.559 [2024-11-02 11:47:15.875432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.559 qpair failed and we were unable to recover it. 00:35:15.559 [2024-11-02 11:47:15.875662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.559 [2024-11-02 11:47:15.875691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.559 qpair failed and we were unable to recover it. 00:35:15.559 [2024-11-02 11:47:15.875853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.559 [2024-11-02 11:47:15.875879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.559 qpair failed and we were unable to recover it. 00:35:15.559 [2024-11-02 11:47:15.876029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.559 [2024-11-02 11:47:15.876055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.559 qpair failed and we were unable to recover it. 00:35:15.559 [2024-11-02 11:47:15.876203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.559 [2024-11-02 11:47:15.876230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.559 qpair failed and we were unable to recover it. 00:35:15.559 [2024-11-02 11:47:15.876391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.559 [2024-11-02 11:47:15.876417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.559 qpair failed and we were unable to recover it. 00:35:15.559 [2024-11-02 11:47:15.876594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.559 [2024-11-02 11:47:15.876629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.559 qpair failed and we were unable to recover it. 00:35:15.559 [2024-11-02 11:47:15.876819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.559 [2024-11-02 11:47:15.876846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.559 qpair failed and we were unable to recover it. 00:35:15.559 [2024-11-02 11:47:15.876966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.559 [2024-11-02 11:47:15.876993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.559 qpair failed and we were unable to recover it. 00:35:15.559 [2024-11-02 11:47:15.877141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.559 [2024-11-02 11:47:15.877167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.559 qpair failed and we were unable to recover it. 
00:35:15.559 [2024-11-02 11:47:15.877329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.559 [2024-11-02 11:47:15.877360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.559 qpair failed and we were unable to recover it. 00:35:15.559 [2024-11-02 11:47:15.877523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.559 [2024-11-02 11:47:15.877549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.559 qpair failed and we were unable to recover it. 00:35:15.559 [2024-11-02 11:47:15.877678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.559 [2024-11-02 11:47:15.877705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.559 qpair failed and we were unable to recover it. 00:35:15.559 [2024-11-02 11:47:15.877880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.559 [2024-11-02 11:47:15.877907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.559 qpair failed and we were unable to recover it. 00:35:15.559 [2024-11-02 11:47:15.878048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.559 [2024-11-02 11:47:15.878074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.559 qpair failed and we were unable to recover it. 00:35:15.559 [2024-11-02 11:47:15.878222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.559 [2024-11-02 11:47:15.878250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.559 qpair failed and we were unable to recover it. 00:35:15.559 [2024-11-02 11:47:15.878445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.559 [2024-11-02 11:47:15.878474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.559 qpair failed and we were unable to recover it. 00:35:15.559 [2024-11-02 11:47:15.878708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.559 [2024-11-02 11:47:15.878737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.559 qpair failed and we were unable to recover it. 00:35:15.559 [2024-11-02 11:47:15.878905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.559 [2024-11-02 11:47:15.878932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.559 qpair failed and we were unable to recover it. 00:35:15.559 [2024-11-02 11:47:15.879080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.559 [2024-11-02 11:47:15.879106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.559 qpair failed and we were unable to recover it. 
00:35:15.559 [2024-11-02 11:47:15.879300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.559 [2024-11-02 11:47:15.879330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.559 qpair failed and we were unable to recover it. 00:35:15.559 [2024-11-02 11:47:15.879581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.559 [2024-11-02 11:47:15.879610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.559 qpair failed and we were unable to recover it. 00:35:15.559 [2024-11-02 11:47:15.879836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.559 [2024-11-02 11:47:15.879866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.559 qpair failed and we were unable to recover it. 00:35:15.559 [2024-11-02 11:47:15.880058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.559 [2024-11-02 11:47:15.880085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.559 qpair failed and we were unable to recover it. 00:35:15.559 [2024-11-02 11:47:15.880267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.559 [2024-11-02 11:47:15.880311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.559 qpair failed and we were unable to recover it. 00:35:15.559 [2024-11-02 11:47:15.880500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.559 [2024-11-02 11:47:15.880530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.559 qpair failed and we were unable to recover it. 00:35:15.559 [2024-11-02 11:47:15.880690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.559 [2024-11-02 11:47:15.880719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.559 qpair failed and we were unable to recover it. 00:35:15.560 [2024-11-02 11:47:15.880915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.560 [2024-11-02 11:47:15.880941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.560 qpair failed and we were unable to recover it. 00:35:15.560 [2024-11-02 11:47:15.881087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.560 [2024-11-02 11:47:15.881113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.560 qpair failed and we were unable to recover it. 00:35:15.560 [2024-11-02 11:47:15.881309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.560 [2024-11-02 11:47:15.881339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.560 qpair failed and we were unable to recover it. 
00:35:15.560 [2024-11-02 11:47:15.881500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.560 [2024-11-02 11:47:15.881530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.560 qpair failed and we were unable to recover it. 00:35:15.560 [2024-11-02 11:47:15.881786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.560 [2024-11-02 11:47:15.881815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.560 qpair failed and we were unable to recover it. 00:35:15.560 [2024-11-02 11:47:15.881980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.560 [2024-11-02 11:47:15.882006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.560 qpair failed and we were unable to recover it. 00:35:15.560 [2024-11-02 11:47:15.882162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.560 [2024-11-02 11:47:15.882189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.560 qpair failed and we were unable to recover it. 00:35:15.560 [2024-11-02 11:47:15.882365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.560 [2024-11-02 11:47:15.882392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.560 qpair failed and we were unable to recover it. 00:35:15.560 [2024-11-02 11:47:15.882531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.560 [2024-11-02 11:47:15.882557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.560 qpair failed and we were unable to recover it. 00:35:15.560 [2024-11-02 11:47:15.882681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.560 [2024-11-02 11:47:15.882709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.560 qpair failed and we were unable to recover it. 00:35:15.560 [2024-11-02 11:47:15.882862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.560 [2024-11-02 11:47:15.882888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.560 qpair failed and we were unable to recover it. 00:35:15.560 [2024-11-02 11:47:15.883012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.560 [2024-11-02 11:47:15.883038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.560 qpair failed and we were unable to recover it. 00:35:15.560 [2024-11-02 11:47:15.883184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.560 [2024-11-02 11:47:15.883210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.560 qpair failed and we were unable to recover it. 
00:35:15.560 [2024-11-02 11:47:15.883363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.560 [2024-11-02 11:47:15.883390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.560 qpair failed and we were unable to recover it. 00:35:15.560 [2024-11-02 11:47:15.883568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.560 [2024-11-02 11:47:15.883594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.560 qpair failed and we were unable to recover it. 00:35:15.560 [2024-11-02 11:47:15.883743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.560 [2024-11-02 11:47:15.883769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.560 qpair failed and we were unable to recover it. 00:35:15.560 [2024-11-02 11:47:15.883950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.560 [2024-11-02 11:47:15.883976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.560 qpair failed and we were unable to recover it. 00:35:15.560 [2024-11-02 11:47:15.884104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.560 [2024-11-02 11:47:15.884132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.560 qpair failed and we were unable to recover it. 00:35:15.560 [2024-11-02 11:47:15.884328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.560 [2024-11-02 11:47:15.884356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.560 qpair failed and we were unable to recover it. 00:35:15.560 [2024-11-02 11:47:15.884501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.560 [2024-11-02 11:47:15.884532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.560 qpair failed and we were unable to recover it. 00:35:15.845 [2024-11-02 11:47:15.884681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.845 [2024-11-02 11:47:15.884708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.845 qpair failed and we were unable to recover it. 00:35:15.845 [2024-11-02 11:47:15.884862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.845 [2024-11-02 11:47:15.884888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.845 qpair failed and we were unable to recover it. 00:35:15.845 [2024-11-02 11:47:15.884998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.845 [2024-11-02 11:47:15.885024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.845 qpair failed and we were unable to recover it. 
00:35:15.845 [2024-11-02 11:47:15.885140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.845 [2024-11-02 11:47:15.885166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.845 qpair failed and we were unable to recover it. 00:35:15.845 [2024-11-02 11:47:15.885315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.845 [2024-11-02 11:47:15.885343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.845 qpair failed and we were unable to recover it. 00:35:15.845 [2024-11-02 11:47:15.885516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.845 [2024-11-02 11:47:15.885542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.845 qpair failed and we were unable to recover it. 00:35:15.845 [2024-11-02 11:47:15.885714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.845 [2024-11-02 11:47:15.885740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.845 qpair failed and we were unable to recover it. 00:35:15.845 [2024-11-02 11:47:15.885887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.845 [2024-11-02 11:47:15.885915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.845 qpair failed and we were unable to recover it. 00:35:15.845 [2024-11-02 11:47:15.886073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.845 [2024-11-02 11:47:15.886112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.845 qpair failed and we were unable to recover it. 00:35:15.845 [2024-11-02 11:47:15.886296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.845 [2024-11-02 11:47:15.886325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.845 qpair failed and we were unable to recover it. 00:35:15.845 [2024-11-02 11:47:15.886479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.845 [2024-11-02 11:47:15.886506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.845 qpair failed and we were unable to recover it. 00:35:15.845 [2024-11-02 11:47:15.886653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.845 [2024-11-02 11:47:15.886680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.845 qpair failed and we were unable to recover it. 00:35:15.845 [2024-11-02 11:47:15.886797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.845 [2024-11-02 11:47:15.886823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.845 qpair failed and we were unable to recover it. 
00:35:15.845 [2024-11-02 11:47:15.887024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.845 [2024-11-02 11:47:15.887054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.845 qpair failed and we were unable to recover it. 00:35:15.845 [2024-11-02 11:47:15.887169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.846 [2024-11-02 11:47:15.887196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.846 qpair failed and we were unable to recover it. 00:35:15.846 [2024-11-02 11:47:15.887374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.846 [2024-11-02 11:47:15.887401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.846 qpair failed and we were unable to recover it. 00:35:15.846 [2024-11-02 11:47:15.887525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.846 [2024-11-02 11:47:15.887551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.846 qpair failed and we were unable to recover it. 00:35:15.846 [2024-11-02 11:47:15.887724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.846 [2024-11-02 11:47:15.887752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.846 qpair failed and we were unable to recover it. 00:35:15.846 [2024-11-02 11:47:15.887875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.846 [2024-11-02 11:47:15.887902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.846 qpair failed and we were unable to recover it. 00:35:15.846 [2024-11-02 11:47:15.888049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.846 [2024-11-02 11:47:15.888078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.846 qpair failed and we were unable to recover it. 00:35:15.846 [2024-11-02 11:47:15.888250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.846 [2024-11-02 11:47:15.888284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.846 qpair failed and we were unable to recover it. 00:35:15.846 [2024-11-02 11:47:15.888421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.846 [2024-11-02 11:47:15.888449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.846 qpair failed and we were unable to recover it. 00:35:15.846 [2024-11-02 11:47:15.888607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.846 [2024-11-02 11:47:15.888637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.846 qpair failed and we were unable to recover it. 
00:35:15.846 [2024-11-02 11:47:15.888809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.846 [2024-11-02 11:47:15.888838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.846 qpair failed and we were unable to recover it. 00:35:15.846 [2024-11-02 11:47:15.888952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.846 [2024-11-02 11:47:15.888977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.846 qpair failed and we were unable to recover it. 00:35:15.846 [2024-11-02 11:47:15.889129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.846 [2024-11-02 11:47:15.889156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.846 qpair failed and we were unable to recover it. 00:35:15.846 [2024-11-02 11:47:15.889285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.846 [2024-11-02 11:47:15.889312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.846 qpair failed and we were unable to recover it. 00:35:15.846 [2024-11-02 11:47:15.889462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.846 [2024-11-02 11:47:15.889488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.846 qpair failed and we were unable to recover it. 00:35:15.846 [2024-11-02 11:47:15.889638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.846 [2024-11-02 11:47:15.889665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.846 qpair failed and we were unable to recover it. 00:35:15.846 [2024-11-02 11:47:15.889811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.846 [2024-11-02 11:47:15.889838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.846 qpair failed and we were unable to recover it. 00:35:15.846 [2024-11-02 11:47:15.889983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.846 [2024-11-02 11:47:15.890010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.846 qpair failed and we were unable to recover it. 00:35:15.846 [2024-11-02 11:47:15.890154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.846 [2024-11-02 11:47:15.890180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.846 qpair failed and we were unable to recover it. 00:35:15.846 [2024-11-02 11:47:15.890299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.846 [2024-11-02 11:47:15.890326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.846 qpair failed and we were unable to recover it. 
00:35:15.846 [2024-11-02 11:47:15.890465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.846 [2024-11-02 11:47:15.890491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.846 qpair failed and we were unable to recover it. 00:35:15.846 [2024-11-02 11:47:15.890635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.846 [2024-11-02 11:47:15.890661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.846 qpair failed and we were unable to recover it. 00:35:15.846 [2024-11-02 11:47:15.890809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.846 [2024-11-02 11:47:15.890835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.846 qpair failed and we were unable to recover it. 00:35:15.846 [2024-11-02 11:47:15.890981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.846 [2024-11-02 11:47:15.891007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.846 qpair failed and we were unable to recover it. 00:35:15.846 [2024-11-02 11:47:15.891155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.846 [2024-11-02 11:47:15.891182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.846 qpair failed and we were unable to recover it. 00:35:15.846 [2024-11-02 11:47:15.891300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.846 [2024-11-02 11:47:15.891327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.846 qpair failed and we were unable to recover it. 00:35:15.846 [2024-11-02 11:47:15.891469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.846 [2024-11-02 11:47:15.891500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.846 qpair failed and we were unable to recover it. 00:35:15.846 [2024-11-02 11:47:15.891649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.846 [2024-11-02 11:47:15.891675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.846 qpair failed and we were unable to recover it. 00:35:15.846 [2024-11-02 11:47:15.891820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.846 [2024-11-02 11:47:15.891845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.846 qpair failed and we were unable to recover it. 00:35:15.846 [2024-11-02 11:47:15.891962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.846 [2024-11-02 11:47:15.891989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.846 qpair failed and we were unable to recover it. 
00:35:15.846 [2024-11-02 11:47:15.892165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.846 [2024-11-02 11:47:15.892191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.846 qpair failed and we were unable to recover it. 00:35:15.846 [2024-11-02 11:47:15.892340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.846 [2024-11-02 11:47:15.892366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.846 qpair failed and we were unable to recover it. 00:35:15.846 [2024-11-02 11:47:15.892481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.846 [2024-11-02 11:47:15.892507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.846 qpair failed and we were unable to recover it. 00:35:15.846 [2024-11-02 11:47:15.892637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.846 [2024-11-02 11:47:15.892663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.846 qpair failed and we were unable to recover it. 00:35:15.846 [2024-11-02 11:47:15.892835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.846 [2024-11-02 11:47:15.892860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.846 qpair failed and we were unable to recover it. 00:35:15.846 [2024-11-02 11:47:15.893042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.846 [2024-11-02 11:47:15.893067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.846 qpair failed and we were unable to recover it. 00:35:15.846 [2024-11-02 11:47:15.893215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.846 [2024-11-02 11:47:15.893240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.846 qpair failed and we were unable to recover it. 00:35:15.846 [2024-11-02 11:47:15.893393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.846 [2024-11-02 11:47:15.893421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.846 qpair failed and we were unable to recover it. 00:35:15.846 [2024-11-02 11:47:15.893562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.846 [2024-11-02 11:47:15.893588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.846 qpair failed and we were unable to recover it. 00:35:15.847 [2024-11-02 11:47:15.893760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.847 [2024-11-02 11:47:15.893786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.847 qpair failed and we were unable to recover it. 
00:35:15.847 [2024-11-02 11:47:15.893941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.847 [2024-11-02 11:47:15.893967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.847 qpair failed and we were unable to recover it. 00:35:15.847 [2024-11-02 11:47:15.894114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.847 [2024-11-02 11:47:15.894140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.847 qpair failed and we were unable to recover it. 00:35:15.847 [2024-11-02 11:47:15.894287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.847 [2024-11-02 11:47:15.894313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.847 qpair failed and we were unable to recover it. 00:35:15.847 [2024-11-02 11:47:15.894454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.847 [2024-11-02 11:47:15.894480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.847 qpair failed and we were unable to recover it. 00:35:15.847 [2024-11-02 11:47:15.894653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.847 [2024-11-02 11:47:15.894679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.847 qpair failed and we were unable to recover it. 00:35:15.847 [2024-11-02 11:47:15.894822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.847 [2024-11-02 11:47:15.894848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.847 qpair failed and we were unable to recover it. 00:35:15.847 [2024-11-02 11:47:15.894991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.847 [2024-11-02 11:47:15.895016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.847 qpair failed and we were unable to recover it. 00:35:15.847 [2024-11-02 11:47:15.895138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.847 [2024-11-02 11:47:15.895164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.847 qpair failed and we were unable to recover it. 00:35:15.847 [2024-11-02 11:47:15.895295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.847 [2024-11-02 11:47:15.895322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.847 qpair failed and we were unable to recover it. 00:35:15.847 [2024-11-02 11:47:15.895474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.847 [2024-11-02 11:47:15.895500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.847 qpair failed and we were unable to recover it. 
00:35:15.847 [2024-11-02 11:47:15.895614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.847 [2024-11-02 11:47:15.895640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.847 qpair failed and we were unable to recover it. 00:35:15.847 [2024-11-02 11:47:15.895791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.847 [2024-11-02 11:47:15.895817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.847 qpair failed and we were unable to recover it. 00:35:15.847 [2024-11-02 11:47:15.895969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.847 [2024-11-02 11:47:15.895995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.847 qpair failed and we were unable to recover it. 00:35:15.847 [2024-11-02 11:47:15.896174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.847 [2024-11-02 11:47:15.896200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.847 qpair failed and we were unable to recover it. 00:35:15.847 [2024-11-02 11:47:15.896376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.847 [2024-11-02 11:47:15.896402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.847 qpair failed and we were unable to recover it. 00:35:15.847 [2024-11-02 11:47:15.896551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.847 [2024-11-02 11:47:15.896578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.847 qpair failed and we were unable to recover it. 00:35:15.847 [2024-11-02 11:47:15.896732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.847 [2024-11-02 11:47:15.896758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.847 qpair failed and we were unable to recover it. 00:35:15.847 [2024-11-02 11:47:15.896907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.847 [2024-11-02 11:47:15.896933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.847 qpair failed and we were unable to recover it. 00:35:15.847 [2024-11-02 11:47:15.897082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.847 [2024-11-02 11:47:15.897108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.847 qpair failed and we were unable to recover it. 00:35:15.847 [2024-11-02 11:47:15.897226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.847 [2024-11-02 11:47:15.897253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.847 qpair failed and we were unable to recover it. 
00:35:15.847 [2024-11-02 11:47:15.897420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.847 [2024-11-02 11:47:15.897446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.847 qpair failed and we were unable to recover it. 00:35:15.847 [2024-11-02 11:47:15.897595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.847 [2024-11-02 11:47:15.897622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.847 qpair failed and we were unable to recover it. 00:35:15.847 [2024-11-02 11:47:15.897804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.847 [2024-11-02 11:47:15.897830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.847 qpair failed and we were unable to recover it. 00:35:15.847 [2024-11-02 11:47:15.897978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.847 [2024-11-02 11:47:15.898004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.847 qpair failed and we were unable to recover it. 00:35:15.847 [2024-11-02 11:47:15.898161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.847 [2024-11-02 11:47:15.898187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.847 qpair failed and we were unable to recover it. 00:35:15.847 [2024-11-02 11:47:15.898361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.847 [2024-11-02 11:47:15.898387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.847 qpair failed and we were unable to recover it. 00:35:15.847 [2024-11-02 11:47:15.898533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.847 [2024-11-02 11:47:15.898564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.847 qpair failed and we were unable to recover it. 00:35:15.847 [2024-11-02 11:47:15.898680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.847 [2024-11-02 11:47:15.898706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.847 qpair failed and we were unable to recover it. 00:35:15.847 [2024-11-02 11:47:15.898831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.847 [2024-11-02 11:47:15.898858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.847 qpair failed and we were unable to recover it. 00:35:15.847 [2024-11-02 11:47:15.899008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.847 [2024-11-02 11:47:15.899034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.847 qpair failed and we were unable to recover it. 
00:35:15.847 [2024-11-02 11:47:15.899155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.847 [2024-11-02 11:47:15.899181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.847 qpair failed and we were unable to recover it. 00:35:15.847 [2024-11-02 11:47:15.899353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.847 [2024-11-02 11:47:15.899379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.847 qpair failed and we were unable to recover it. 00:35:15.847 [2024-11-02 11:47:15.899526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.847 [2024-11-02 11:47:15.899553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.847 qpair failed and we were unable to recover it. 00:35:15.847 [2024-11-02 11:47:15.899696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.847 [2024-11-02 11:47:15.899722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.847 qpair failed and we were unable to recover it. 00:35:15.847 [2024-11-02 11:47:15.899840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.847 [2024-11-02 11:47:15.899866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.847 qpair failed and we were unable to recover it. 00:35:15.847 [2024-11-02 11:47:15.900008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.847 [2024-11-02 11:47:15.900034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.847 qpair failed and we were unable to recover it. 00:35:15.847 [2024-11-02 11:47:15.900152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.847 [2024-11-02 11:47:15.900177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.847 qpair failed and we were unable to recover it. 00:35:15.847 [2024-11-02 11:47:15.900349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.848 [2024-11-02 11:47:15.900376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.848 qpair failed and we were unable to recover it. 00:35:15.848 [2024-11-02 11:47:15.900491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.848 [2024-11-02 11:47:15.900516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.848 qpair failed and we were unable to recover it. 00:35:15.848 [2024-11-02 11:47:15.900660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.848 [2024-11-02 11:47:15.900687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.848 qpair failed and we were unable to recover it. 
00:35:15.848 [2024-11-02 11:47:15.900868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.848 [2024-11-02 11:47:15.900894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.848 qpair failed and we were unable to recover it. 00:35:15.848 [2024-11-02 11:47:15.901068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.848 [2024-11-02 11:47:15.901093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.848 qpair failed and we were unable to recover it. 00:35:15.848 [2024-11-02 11:47:15.901223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.848 [2024-11-02 11:47:15.901269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.848 qpair failed and we were unable to recover it. 00:35:15.848 [2024-11-02 11:47:15.901428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.848 [2024-11-02 11:47:15.901456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.848 qpair failed and we were unable to recover it. 00:35:15.848 [2024-11-02 11:47:15.901591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.848 [2024-11-02 11:47:15.901617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.848 qpair failed and we were unable to recover it. 00:35:15.848 [2024-11-02 11:47:15.901726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.848 [2024-11-02 11:47:15.901752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.848 qpair failed and we were unable to recover it. 00:35:15.848 [2024-11-02 11:47:15.901896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.848 [2024-11-02 11:47:15.901922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.848 qpair failed and we were unable to recover it. 00:35:15.848 [2024-11-02 11:47:15.902040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.848 [2024-11-02 11:47:15.902066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.848 qpair failed and we were unable to recover it. 00:35:15.848 [2024-11-02 11:47:15.902183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.848 [2024-11-02 11:47:15.902208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.848 qpair failed and we were unable to recover it. 00:35:15.848 [2024-11-02 11:47:15.902327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.848 [2024-11-02 11:47:15.902353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.848 qpair failed and we were unable to recover it. 
00:35:15.848 [2024-11-02 11:47:15.902505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.848 [2024-11-02 11:47:15.902532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.848 qpair failed and we were unable to recover it. 00:35:15.848 [2024-11-02 11:47:15.902687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.848 [2024-11-02 11:47:15.902713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.848 qpair failed and we were unable to recover it. 00:35:15.848 [2024-11-02 11:47:15.902890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.848 [2024-11-02 11:47:15.902916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.848 qpair failed and we were unable to recover it. 00:35:15.848 [2024-11-02 11:47:15.903083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.848 [2024-11-02 11:47:15.903109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.848 qpair failed and we were unable to recover it. 00:35:15.848 [2024-11-02 11:47:15.903254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.848 [2024-11-02 11:47:15.903286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.848 qpair failed and we were unable to recover it. 00:35:15.848 [2024-11-02 11:47:15.903459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.848 [2024-11-02 11:47:15.903485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.848 qpair failed and we were unable to recover it. 00:35:15.848 [2024-11-02 11:47:15.903604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.848 [2024-11-02 11:47:15.903630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.848 qpair failed and we were unable to recover it. 00:35:15.848 [2024-11-02 11:47:15.903751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.848 [2024-11-02 11:47:15.903778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.848 qpair failed and we were unable to recover it. 00:35:15.848 [2024-11-02 11:47:15.903952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.848 [2024-11-02 11:47:15.903979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.848 qpair failed and we were unable to recover it. 00:35:15.848 [2024-11-02 11:47:15.904127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.848 [2024-11-02 11:47:15.904154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.848 qpair failed and we were unable to recover it. 
00:35:15.848 [2024-11-02 11:47:15.904286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.848 [2024-11-02 11:47:15.904312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.848 qpair failed and we were unable to recover it. 00:35:15.848 [2024-11-02 11:47:15.904456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.848 [2024-11-02 11:47:15.904483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.848 qpair failed and we were unable to recover it. 00:35:15.848 [2024-11-02 11:47:15.904596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.848 [2024-11-02 11:47:15.904622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.848 qpair failed and we were unable to recover it. 00:35:15.848 [2024-11-02 11:47:15.904767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.848 [2024-11-02 11:47:15.904794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.848 qpair failed and we were unable to recover it. 00:35:15.848 [2024-11-02 11:47:15.904967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.848 [2024-11-02 11:47:15.904993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.848 qpair failed and we were unable to recover it. 00:35:15.848 [2024-11-02 11:47:15.905150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.848 [2024-11-02 11:47:15.905177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.848 qpair failed and we were unable to recover it. 00:35:15.848 [2024-11-02 11:47:15.905320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.848 [2024-11-02 11:47:15.905347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.848 qpair failed and we were unable to recover it. 00:35:15.848 [2024-11-02 11:47:15.905502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.848 [2024-11-02 11:47:15.905528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.848 qpair failed and we were unable to recover it. 00:35:15.848 [2024-11-02 11:47:15.905673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.848 [2024-11-02 11:47:15.905699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.848 qpair failed and we were unable to recover it. 00:35:15.848 [2024-11-02 11:47:15.905871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.848 [2024-11-02 11:47:15.905897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.848 qpair failed and we were unable to recover it. 
00:35:15.848 [2024-11-02 11:47:15.906019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.848 [2024-11-02 11:47:15.906048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.848 qpair failed and we were unable to recover it. 00:35:15.848 [2024-11-02 11:47:15.906195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.848 [2024-11-02 11:47:15.906220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.848 qpair failed and we were unable to recover it. 00:35:15.848 [2024-11-02 11:47:15.906343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.848 [2024-11-02 11:47:15.906369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.848 qpair failed and we were unable to recover it. 00:35:15.848 [2024-11-02 11:47:15.906487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.848 [2024-11-02 11:47:15.906514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.848 qpair failed and we were unable to recover it. 00:35:15.848 [2024-11-02 11:47:15.906673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.848 [2024-11-02 11:47:15.906699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.848 qpair failed and we were unable to recover it. 00:35:15.848 [2024-11-02 11:47:15.906843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.848 [2024-11-02 11:47:15.906869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.848 qpair failed and we were unable to recover it. 00:35:15.849 [2024-11-02 11:47:15.907015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.849 [2024-11-02 11:47:15.907041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.849 qpair failed and we were unable to recover it. 00:35:15.849 [2024-11-02 11:47:15.907160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.849 [2024-11-02 11:47:15.907186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.849 qpair failed and we were unable to recover it. 00:35:15.849 [2024-11-02 11:47:15.907352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.849 [2024-11-02 11:47:15.907380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.849 qpair failed and we were unable to recover it. 00:35:15.849 [2024-11-02 11:47:15.907554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.849 [2024-11-02 11:47:15.907580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.849 qpair failed and we were unable to recover it. 
00:35:15.849 [2024-11-02 11:47:15.907758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.849 [2024-11-02 11:47:15.907784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.849 qpair failed and we were unable to recover it. 00:35:15.849 [2024-11-02 11:47:15.907960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.849 [2024-11-02 11:47:15.907985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.849 qpair failed and we were unable to recover it. 00:35:15.849 [2024-11-02 11:47:15.908105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.849 [2024-11-02 11:47:15.908133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.849 qpair failed and we were unable to recover it. 00:35:15.849 [2024-11-02 11:47:15.908253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.849 [2024-11-02 11:47:15.908285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.849 qpair failed and we were unable to recover it. 00:35:15.849 [2024-11-02 11:47:15.908463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.849 [2024-11-02 11:47:15.908489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.849 qpair failed and we were unable to recover it. 00:35:15.849 [2024-11-02 11:47:15.908666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.849 [2024-11-02 11:47:15.908692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.849 qpair failed and we were unable to recover it. 00:35:15.849 [2024-11-02 11:47:15.908835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.849 [2024-11-02 11:47:15.908861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.849 qpair failed and we were unable to recover it. 00:35:15.849 [2024-11-02 11:47:15.909009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.849 [2024-11-02 11:47:15.909035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.849 qpair failed and we were unable to recover it. 00:35:15.849 [2024-11-02 11:47:15.909180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.849 [2024-11-02 11:47:15.909207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.849 qpair failed and we were unable to recover it. 00:35:15.849 [2024-11-02 11:47:15.909361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.849 [2024-11-02 11:47:15.909387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.849 qpair failed and we were unable to recover it. 
00:35:15.849 [2024-11-02 11:47:15.909534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.849 [2024-11-02 11:47:15.909560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.849 qpair failed and we were unable to recover it. 00:35:15.849 [2024-11-02 11:47:15.909730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.849 [2024-11-02 11:47:15.909756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.849 qpair failed and we were unable to recover it. 00:35:15.849 [2024-11-02 11:47:15.909907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.849 [2024-11-02 11:47:15.909932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.849 qpair failed and we were unable to recover it. 00:35:15.849 [2024-11-02 11:47:15.910082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.849 [2024-11-02 11:47:15.910112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.849 qpair failed and we were unable to recover it. 00:35:15.849 [2024-11-02 11:47:15.910265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.849 [2024-11-02 11:47:15.910294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.849 qpair failed and we were unable to recover it. 00:35:15.849 [2024-11-02 11:47:15.910471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.849 [2024-11-02 11:47:15.910496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.849 qpair failed and we were unable to recover it. 00:35:15.849 [2024-11-02 11:47:15.910625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.849 [2024-11-02 11:47:15.910652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.849 qpair failed and we were unable to recover it. 00:35:15.849 [2024-11-02 11:47:15.910801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.849 [2024-11-02 11:47:15.910829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.849 qpair failed and we were unable to recover it. 00:35:15.849 [2024-11-02 11:47:15.910976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.849 [2024-11-02 11:47:15.911003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.849 qpair failed and we were unable to recover it. 00:35:15.849 [2024-11-02 11:47:15.911152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.849 [2024-11-02 11:47:15.911178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.849 qpair failed and we were unable to recover it. 
00:35:15.849 [2024-11-02 11:47:15.911334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.849 [2024-11-02 11:47:15.911362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.849 qpair failed and we were unable to recover it. 00:35:15.849 [2024-11-02 11:47:15.911513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.849 [2024-11-02 11:47:15.911539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.849 qpair failed and we were unable to recover it. 00:35:15.849 [2024-11-02 11:47:15.911685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.849 [2024-11-02 11:47:15.911712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.849 qpair failed and we were unable to recover it. 00:35:15.849 [2024-11-02 11:47:15.911885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.849 [2024-11-02 11:47:15.911911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.849 qpair failed and we were unable to recover it. 00:35:15.849 [2024-11-02 11:47:15.912031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.849 [2024-11-02 11:47:15.912058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.849 qpair failed and we were unable to recover it. 00:35:15.849 [2024-11-02 11:47:15.912202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.849 [2024-11-02 11:47:15.912229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.849 qpair failed and we were unable to recover it. 00:35:15.849 [2024-11-02 11:47:15.912358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.849 [2024-11-02 11:47:15.912386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.849 qpair failed and we were unable to recover it. 00:35:15.849 [2024-11-02 11:47:15.912567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.849 [2024-11-02 11:47:15.912593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.849 qpair failed and we were unable to recover it. 00:35:15.849 [2024-11-02 11:47:15.912746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.849 [2024-11-02 11:47:15.912772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.849 qpair failed and we were unable to recover it. 00:35:15.849 [2024-11-02 11:47:15.912893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.849 [2024-11-02 11:47:15.912920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.849 qpair failed and we were unable to recover it. 
00:35:15.849 [2024-11-02 11:47:15.913068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.849 [2024-11-02 11:47:15.913094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.849 qpair failed and we were unable to recover it. 00:35:15.849 [2024-11-02 11:47:15.913213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.849 [2024-11-02 11:47:15.913240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.849 qpair failed and we were unable to recover it. 00:35:15.849 [2024-11-02 11:47:15.913408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.849 [2024-11-02 11:47:15.913442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.849 qpair failed and we were unable to recover it. 00:35:15.849 [2024-11-02 11:47:15.913603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.849 [2024-11-02 11:47:15.913631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.849 qpair failed and we were unable to recover it. 00:35:15.849 [2024-11-02 11:47:15.913832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.850 [2024-11-02 11:47:15.913876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.850 qpair failed and we were unable to recover it. 00:35:15.850 [2024-11-02 11:47:15.914000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.850 [2024-11-02 11:47:15.914026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.850 qpair failed and we were unable to recover it. 00:35:15.850 [2024-11-02 11:47:15.914200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.850 [2024-11-02 11:47:15.914227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.850 qpair failed and we were unable to recover it. 00:35:15.850 [2024-11-02 11:47:15.914403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.850 [2024-11-02 11:47:15.914447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.850 qpair failed and we were unable to recover it. 00:35:15.850 [2024-11-02 11:47:15.914613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.850 [2024-11-02 11:47:15.914657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.850 qpair failed and we were unable to recover it. 00:35:15.850 [2024-11-02 11:47:15.914801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.850 [2024-11-02 11:47:15.914828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.850 qpair failed and we were unable to recover it. 
00:35:15.850 [2024-11-02 11:47:15.914954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.850 [2024-11-02 11:47:15.914983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.850 qpair failed and we were unable to recover it. 00:35:15.850 [2024-11-02 11:47:15.915136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.850 [2024-11-02 11:47:15.915164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.850 qpair failed and we were unable to recover it. 00:35:15.850 [2024-11-02 11:47:15.915324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.850 [2024-11-02 11:47:15.915351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.850 qpair failed and we were unable to recover it. 00:35:15.850 [2024-11-02 11:47:15.915519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.850 [2024-11-02 11:47:15.915548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.850 qpair failed and we were unable to recover it. 00:35:15.850 [2024-11-02 11:47:15.915725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.850 [2024-11-02 11:47:15.915755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.850 qpair failed and we were unable to recover it. 00:35:15.850 [2024-11-02 11:47:15.915940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.850 [2024-11-02 11:47:15.915968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.850 qpair failed and we were unable to recover it. 00:35:15.850 [2024-11-02 11:47:15.916129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.850 [2024-11-02 11:47:15.916158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.850 qpair failed and we were unable to recover it. 00:35:15.850 [2024-11-02 11:47:15.916337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.850 [2024-11-02 11:47:15.916363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.850 qpair failed and we were unable to recover it. 00:35:15.850 [2024-11-02 11:47:15.916504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.850 [2024-11-02 11:47:15.916534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.850 qpair failed and we were unable to recover it. 00:35:15.850 [2024-11-02 11:47:15.916728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.850 [2024-11-02 11:47:15.916757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.850 qpair failed and we were unable to recover it. 
00:35:15.850 [2024-11-02 11:47:15.916892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.850 [2024-11-02 11:47:15.916923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.850 qpair failed and we were unable to recover it. 00:35:15.850 [2024-11-02 11:47:15.917078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.850 [2024-11-02 11:47:15.917108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.850 qpair failed and we were unable to recover it. 00:35:15.850 [2024-11-02 11:47:15.917284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.850 [2024-11-02 11:47:15.917313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.850 qpair failed and we were unable to recover it. 00:35:15.850 [2024-11-02 11:47:15.917459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.850 [2024-11-02 11:47:15.917489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.850 qpair failed and we were unable to recover it. 00:35:15.850 [2024-11-02 11:47:15.917638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.850 [2024-11-02 11:47:15.917682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.850 qpair failed and we were unable to recover it. 00:35:15.850 [2024-11-02 11:47:15.917858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.850 [2024-11-02 11:47:15.917907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.850 qpair failed and we were unable to recover it. 00:35:15.850 [2024-11-02 11:47:15.918076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.850 [2024-11-02 11:47:15.918122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.850 qpair failed and we were unable to recover it. 00:35:15.850 [2024-11-02 11:47:15.918268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.850 [2024-11-02 11:47:15.918295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.850 qpair failed and we were unable to recover it. 00:35:15.850 [2024-11-02 11:47:15.918439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.850 [2024-11-02 11:47:15.918483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.850 qpair failed and we were unable to recover it. 00:35:15.850 [2024-11-02 11:47:15.918662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.850 [2024-11-02 11:47:15.918708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.850 qpair failed and we were unable to recover it. 
00:35:15.850 [2024-11-02 11:47:15.918873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.850 [2024-11-02 11:47:15.918916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.850 qpair failed and we were unable to recover it. 00:35:15.850 [2024-11-02 11:47:15.919075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.850 [2024-11-02 11:47:15.919102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.850 qpair failed and we were unable to recover it. 00:35:15.850 [2024-11-02 11:47:15.919224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.850 [2024-11-02 11:47:15.919250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.850 qpair failed and we were unable to recover it. 00:35:15.850 [2024-11-02 11:47:15.919454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.850 [2024-11-02 11:47:15.919498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.850 qpair failed and we were unable to recover it. 00:35:15.850 [2024-11-02 11:47:15.919676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.850 [2024-11-02 11:47:15.919720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.850 qpair failed and we were unable to recover it. 00:35:15.850 [2024-11-02 11:47:15.919953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.850 [2024-11-02 11:47:15.919980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.850 qpair failed and we were unable to recover it. 00:35:15.850 [2024-11-02 11:47:15.920094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.850 [2024-11-02 11:47:15.920120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.851 qpair failed and we were unable to recover it. 00:35:15.851 [2024-11-02 11:47:15.920253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.851 [2024-11-02 11:47:15.920288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.851 qpair failed and we were unable to recover it. 00:35:15.851 [2024-11-02 11:47:15.920402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.851 [2024-11-02 11:47:15.920429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.851 qpair failed and we were unable to recover it. 00:35:15.851 [2024-11-02 11:47:15.920568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.851 [2024-11-02 11:47:15.920598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.851 qpair failed and we were unable to recover it. 
00:35:15.851 [2024-11-02 11:47:15.920784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.851 [2024-11-02 11:47:15.920827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.851 qpair failed and we were unable to recover it. 00:35:15.851 [2024-11-02 11:47:15.920975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.851 [2024-11-02 11:47:15.921001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.851 qpair failed and we were unable to recover it. 00:35:15.851 [2024-11-02 11:47:15.921148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.851 [2024-11-02 11:47:15.921175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.851 qpair failed and we were unable to recover it. 00:35:15.851 [2024-11-02 11:47:15.921374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.851 [2024-11-02 11:47:15.921418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.851 qpair failed and we were unable to recover it. 00:35:15.851 [2024-11-02 11:47:15.921588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.851 [2024-11-02 11:47:15.921635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.851 qpair failed and we were unable to recover it. 00:35:15.851 [2024-11-02 11:47:15.921943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.851 [2024-11-02 11:47:15.921999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.851 qpair failed and we were unable to recover it. 00:35:15.851 [2024-11-02 11:47:15.922174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.851 [2024-11-02 11:47:15.922200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.851 qpair failed and we were unable to recover it. 00:35:15.851 [2024-11-02 11:47:15.922401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.851 [2024-11-02 11:47:15.922446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.851 qpair failed and we were unable to recover it. 00:35:15.851 [2024-11-02 11:47:15.922588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.851 [2024-11-02 11:47:15.922632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.851 qpair failed and we were unable to recover it. 00:35:15.851 [2024-11-02 11:47:15.922809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.851 [2024-11-02 11:47:15.922853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.851 qpair failed and we were unable to recover it. 
00:35:15.851 [2024-11-02 11:47:15.923033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.851 [2024-11-02 11:47:15.923065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.851 qpair failed and we were unable to recover it. 00:35:15.851 [2024-11-02 11:47:15.923263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.851 [2024-11-02 11:47:15.923293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.851 qpair failed and we were unable to recover it. 00:35:15.851 [2024-11-02 11:47:15.923463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.851 [2024-11-02 11:47:15.923492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.851 qpair failed and we were unable to recover it. 00:35:15.851 [2024-11-02 11:47:15.923630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.851 [2024-11-02 11:47:15.923661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.851 qpair failed and we were unable to recover it. 00:35:15.851 [2024-11-02 11:47:15.923868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.851 [2024-11-02 11:47:15.923897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.851 qpair failed and we were unable to recover it. 00:35:15.851 [2024-11-02 11:47:15.924062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.851 [2024-11-02 11:47:15.924091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.851 qpair failed and we were unable to recover it. 00:35:15.851 [2024-11-02 11:47:15.924248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.851 [2024-11-02 11:47:15.924286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.851 qpair failed and we were unable to recover it. 00:35:15.851 [2024-11-02 11:47:15.924416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.851 [2024-11-02 11:47:15.924443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.851 qpair failed and we were unable to recover it. 00:35:15.851 [2024-11-02 11:47:15.924642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.851 [2024-11-02 11:47:15.924671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.851 qpair failed and we were unable to recover it. 00:35:15.851 [2024-11-02 11:47:15.924852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.851 [2024-11-02 11:47:15.924882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.851 qpair failed and we were unable to recover it. 
00:35:15.851 [2024-11-02 11:47:15.925043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.851 [2024-11-02 11:47:15.925072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.851 qpair failed and we were unable to recover it. 00:35:15.851 [2024-11-02 11:47:15.925206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.851 [2024-11-02 11:47:15.925235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.851 qpair failed and we were unable to recover it. 00:35:15.851 [2024-11-02 11:47:15.925413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.851 [2024-11-02 11:47:15.925440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.851 qpair failed and we were unable to recover it. 00:35:15.851 [2024-11-02 11:47:15.925577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.851 [2024-11-02 11:47:15.925606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.851 qpair failed and we were unable to recover it. 00:35:15.851 [2024-11-02 11:47:15.925775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.851 [2024-11-02 11:47:15.925805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.851 qpair failed and we were unable to recover it. 00:35:15.851 [2024-11-02 11:47:15.925961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.851 [2024-11-02 11:47:15.925991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.851 qpair failed and we were unable to recover it. 00:35:15.851 [2024-11-02 11:47:15.926127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.851 [2024-11-02 11:47:15.926156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.851 qpair failed and we were unable to recover it. 00:35:15.851 [2024-11-02 11:47:15.926309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.851 [2024-11-02 11:47:15.926337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.851 qpair failed and we were unable to recover it. 00:35:15.851 [2024-11-02 11:47:15.926502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.851 [2024-11-02 11:47:15.926532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.851 qpair failed and we were unable to recover it. 00:35:15.851 [2024-11-02 11:47:15.926746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.851 [2024-11-02 11:47:15.926791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.851 qpair failed and we were unable to recover it. 
00:35:15.851 [2024-11-02 11:47:15.926954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.851 [2024-11-02 11:47:15.926998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.851 qpair failed and we were unable to recover it. 00:35:15.851 [2024-11-02 11:47:15.927138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.851 [2024-11-02 11:47:15.927164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.851 qpair failed and we were unable to recover it. 00:35:15.851 [2024-11-02 11:47:15.927329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.851 [2024-11-02 11:47:15.927374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.851 qpair failed and we were unable to recover it. 00:35:15.851 [2024-11-02 11:47:15.927541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.851 [2024-11-02 11:47:15.927584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.851 qpair failed and we were unable to recover it. 00:35:15.851 [2024-11-02 11:47:15.927819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.851 [2024-11-02 11:47:15.927867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.851 qpair failed and we were unable to recover it. 00:35:15.851 [2024-11-02 11:47:15.928044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.852 [2024-11-02 11:47:15.928074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.852 qpair failed and we were unable to recover it. 00:35:15.852 [2024-11-02 11:47:15.928266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.852 [2024-11-02 11:47:15.928293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.852 qpair failed and we were unable to recover it. 00:35:15.852 [2024-11-02 11:47:15.928481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.852 [2024-11-02 11:47:15.928508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.852 qpair failed and we were unable to recover it. 00:35:15.852 [2024-11-02 11:47:15.928677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.852 [2024-11-02 11:47:15.928720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.852 qpair failed and we were unable to recover it. 00:35:15.852 [2024-11-02 11:47:15.928951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.852 [2024-11-02 11:47:15.928999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.852 qpair failed and we were unable to recover it. 
00:35:15.852 [2024-11-02 11:47:15.929171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.852 [2024-11-02 11:47:15.929198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.852 qpair failed and we were unable to recover it. 00:35:15.852 [2024-11-02 11:47:15.929350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.852 [2024-11-02 11:47:15.929378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.852 qpair failed and we were unable to recover it. 00:35:15.852 [2024-11-02 11:47:15.929545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.852 [2024-11-02 11:47:15.929589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.852 qpair failed and we were unable to recover it. 00:35:15.852 [2024-11-02 11:47:15.929787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.852 [2024-11-02 11:47:15.929831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.852 qpair failed and we were unable to recover it. 00:35:15.852 [2024-11-02 11:47:15.930076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.852 [2024-11-02 11:47:15.930126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.852 qpair failed and we were unable to recover it. 00:35:15.852 [2024-11-02 11:47:15.930275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.852 [2024-11-02 11:47:15.930302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.852 qpair failed and we were unable to recover it. 00:35:15.852 [2024-11-02 11:47:15.930432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.852 [2024-11-02 11:47:15.930476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.852 qpair failed and we were unable to recover it. 00:35:15.852 [2024-11-02 11:47:15.930621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.852 [2024-11-02 11:47:15.930664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.852 qpair failed and we were unable to recover it. 00:35:15.852 [2024-11-02 11:47:15.930829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.852 [2024-11-02 11:47:15.930873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.852 qpair failed and we were unable to recover it. 00:35:15.852 [2024-11-02 11:47:15.931016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.852 [2024-11-02 11:47:15.931043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.852 qpair failed and we were unable to recover it. 
00:35:15.852 [2024-11-02 11:47:15.931215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.852 [2024-11-02 11:47:15.931246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.852 qpair failed and we were unable to recover it. 00:35:15.852 [2024-11-02 11:47:15.931398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.852 [2024-11-02 11:47:15.931443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.852 qpair failed and we were unable to recover it. 00:35:15.852 [2024-11-02 11:47:15.931615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.852 [2024-11-02 11:47:15.931660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.852 qpair failed and we were unable to recover it. 00:35:15.852 [2024-11-02 11:47:15.931826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.852 [2024-11-02 11:47:15.931871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.852 qpair failed and we were unable to recover it. 00:35:15.852 [2024-11-02 11:47:15.932012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.852 [2024-11-02 11:47:15.932039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.852 qpair failed and we were unable to recover it. 00:35:15.852 [2024-11-02 11:47:15.932215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.852 [2024-11-02 11:47:15.932241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.852 qpair failed and we were unable to recover it. 00:35:15.852 [2024-11-02 11:47:15.932382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.852 [2024-11-02 11:47:15.932426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.852 qpair failed and we were unable to recover it. 00:35:15.852 [2024-11-02 11:47:15.932573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.852 [2024-11-02 11:47:15.932617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.852 qpair failed and we were unable to recover it. 00:35:15.852 [2024-11-02 11:47:15.932785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.852 [2024-11-02 11:47:15.932827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.852 qpair failed and we were unable to recover it. 00:35:15.852 [2024-11-02 11:47:15.932945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.852 [2024-11-02 11:47:15.932971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.852 qpair failed and we were unable to recover it. 
00:35:15.852 [2024-11-02 11:47:15.933119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.852 [2024-11-02 11:47:15.933146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.852 qpair failed and we were unable to recover it. 00:35:15.852 [2024-11-02 11:47:15.933292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.852 [2024-11-02 11:47:15.933319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.852 qpair failed and we were unable to recover it. 00:35:15.852 [2024-11-02 11:47:15.933463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.852 [2024-11-02 11:47:15.933489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.852 qpair failed and we were unable to recover it. 00:35:15.852 [2024-11-02 11:47:15.933613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.852 [2024-11-02 11:47:15.933640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.852 qpair failed and we were unable to recover it. 00:35:15.852 [2024-11-02 11:47:15.933794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.852 [2024-11-02 11:47:15.933822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.852 qpair failed and we were unable to recover it. 00:35:15.852 [2024-11-02 11:47:15.933992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.852 [2024-11-02 11:47:15.934018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.852 qpair failed and we were unable to recover it. 00:35:15.852 [2024-11-02 11:47:15.934128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.852 [2024-11-02 11:47:15.934155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.852 qpair failed and we were unable to recover it. 00:35:15.852 [2024-11-02 11:47:15.934352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.852 [2024-11-02 11:47:15.934383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.852 qpair failed and we were unable to recover it. 00:35:15.852 [2024-11-02 11:47:15.934540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.852 [2024-11-02 11:47:15.934584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.852 qpair failed and we were unable to recover it. 00:35:15.852 [2024-11-02 11:47:15.934740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.852 [2024-11-02 11:47:15.934784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.852 qpair failed and we were unable to recover it. 
00:35:15.852 [2024-11-02 11:47:15.934939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.852 [2024-11-02 11:47:15.934966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.852 qpair failed and we were unable to recover it. 00:35:15.852 [2024-11-02 11:47:15.935144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.852 [2024-11-02 11:47:15.935171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.852 qpair failed and we were unable to recover it. 00:35:15.852 [2024-11-02 11:47:15.935321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.852 [2024-11-02 11:47:15.935348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.852 qpair failed and we were unable to recover it. 00:35:15.852 [2024-11-02 11:47:15.935469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.853 [2024-11-02 11:47:15.935496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.853 qpair failed and we were unable to recover it. 00:35:15.853 [2024-11-02 11:47:15.935668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.853 [2024-11-02 11:47:15.935694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.853 qpair failed and we were unable to recover it. 00:35:15.853 [2024-11-02 11:47:15.935820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.853 [2024-11-02 11:47:15.935847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.853 qpair failed and we were unable to recover it. 00:35:15.853 [2024-11-02 11:47:15.935991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.853 [2024-11-02 11:47:15.936017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.853 qpair failed and we were unable to recover it. 00:35:15.853 [2024-11-02 11:47:15.936163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.853 [2024-11-02 11:47:15.936190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.853 qpair failed and we were unable to recover it. 00:35:15.853 [2024-11-02 11:47:15.936356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.853 [2024-11-02 11:47:15.936401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.853 qpair failed and we were unable to recover it. 00:35:15.853 [2024-11-02 11:47:15.936573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.853 [2024-11-02 11:47:15.936617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.853 qpair failed and we were unable to recover it. 
00:35:15.853 [2024-11-02 11:47:15.936810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.853 [2024-11-02 11:47:15.936854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.853 qpair failed and we were unable to recover it. 00:35:15.853 [2024-11-02 11:47:15.936997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.853 [2024-11-02 11:47:15.937024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.853 qpair failed and we were unable to recover it. 00:35:15.853 [2024-11-02 11:47:15.937172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.853 [2024-11-02 11:47:15.937199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.853 qpair failed and we were unable to recover it. 00:35:15.853 [2024-11-02 11:47:15.937326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.853 [2024-11-02 11:47:15.937354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.853 qpair failed and we were unable to recover it. 00:35:15.853 [2024-11-02 11:47:15.937551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.853 [2024-11-02 11:47:15.937595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.853 qpair failed and we were unable to recover it. 00:35:15.853 [2024-11-02 11:47:15.937724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.853 [2024-11-02 11:47:15.937768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.853 qpair failed and we were unable to recover it. 00:35:15.853 [2024-11-02 11:47:15.937918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.853 [2024-11-02 11:47:15.937944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.853 qpair failed and we were unable to recover it. 00:35:15.853 [2024-11-02 11:47:15.938097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.853 [2024-11-02 11:47:15.938124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.853 qpair failed and we were unable to recover it. 00:35:15.853 [2024-11-02 11:47:15.938247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.853 [2024-11-02 11:47:15.938280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.853 qpair failed and we were unable to recover it. 00:35:15.853 [2024-11-02 11:47:15.938445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.853 [2024-11-02 11:47:15.938489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.853 qpair failed and we were unable to recover it. 
00:35:15.853 [2024-11-02 11:47:15.938630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.853 [2024-11-02 11:47:15.938676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.853 qpair failed and we were unable to recover it. 00:35:15.853 [2024-11-02 11:47:15.938864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.853 [2024-11-02 11:47:15.938891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.853 qpair failed and we were unable to recover it. 00:35:15.853 [2024-11-02 11:47:15.939041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.853 [2024-11-02 11:47:15.939068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.853 qpair failed and we were unable to recover it. 00:35:15.853 [2024-11-02 11:47:15.939240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.853 [2024-11-02 11:47:15.939281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.853 qpair failed and we were unable to recover it. 00:35:15.853 [2024-11-02 11:47:15.939432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.853 [2024-11-02 11:47:15.939458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.853 qpair failed and we were unable to recover it. 00:35:15.853 [2024-11-02 11:47:15.939600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.853 [2024-11-02 11:47:15.939644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.853 qpair failed and we were unable to recover it. 00:35:15.853 [2024-11-02 11:47:15.939844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.853 [2024-11-02 11:47:15.939888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.853 qpair failed and we were unable to recover it. 00:35:15.853 [2024-11-02 11:47:15.940041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.853 [2024-11-02 11:47:15.940068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.853 qpair failed and we were unable to recover it. 00:35:15.853 [2024-11-02 11:47:15.940241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.853 [2024-11-02 11:47:15.940273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.853 qpair failed and we were unable to recover it. 00:35:15.853 [2024-11-02 11:47:15.940476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.853 [2024-11-02 11:47:15.940520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.853 qpair failed and we were unable to recover it. 
00:35:15.853 [2024-11-02 11:47:15.940721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.853 [2024-11-02 11:47:15.940766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.853 qpair failed and we were unable to recover it. 00:35:15.853 [2024-11-02 11:47:15.940997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.853 [2024-11-02 11:47:15.941049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.853 qpair failed and we were unable to recover it. 00:35:15.853 [2024-11-02 11:47:15.941195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.853 [2024-11-02 11:47:15.941221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.853 qpair failed and we were unable to recover it. 00:35:15.853 [2024-11-02 11:47:15.941406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.853 [2024-11-02 11:47:15.941450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.853 qpair failed and we were unable to recover it. 00:35:15.853 [2024-11-02 11:47:15.941627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.853 [2024-11-02 11:47:15.941674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.853 qpair failed and we were unable to recover it. 00:35:15.853 [2024-11-02 11:47:15.941868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.853 [2024-11-02 11:47:15.941912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.853 qpair failed and we were unable to recover it. 00:35:15.853 [2024-11-02 11:47:15.942063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.853 [2024-11-02 11:47:15.942088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.853 qpair failed and we were unable to recover it. 00:35:15.853 [2024-11-02 11:47:15.942267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.853 [2024-11-02 11:47:15.942295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.853 qpair failed and we were unable to recover it. 00:35:15.853 [2024-11-02 11:47:15.942460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.853 [2024-11-02 11:47:15.942505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.853 qpair failed and we were unable to recover it. 00:35:15.853 [2024-11-02 11:47:15.942654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.853 [2024-11-02 11:47:15.942698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.853 qpair failed and we were unable to recover it. 
00:35:15.853 [2024-11-02 11:47:15.942888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.853 [2024-11-02 11:47:15.942918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.853 qpair failed and we were unable to recover it. 00:35:15.853 [2024-11-02 11:47:15.943096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.853 [2024-11-02 11:47:15.943122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.854 qpair failed and we were unable to recover it. 00:35:15.854 [2024-11-02 11:47:15.943271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.854 [2024-11-02 11:47:15.943298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.854 qpair failed and we were unable to recover it. 00:35:15.854 [2024-11-02 11:47:15.943450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.854 [2024-11-02 11:47:15.943476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.854 qpair failed and we were unable to recover it. 00:35:15.854 [2024-11-02 11:47:15.943624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.854 [2024-11-02 11:47:15.943650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.854 qpair failed and we were unable to recover it. 00:35:15.854 [2024-11-02 11:47:15.943808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.854 [2024-11-02 11:47:15.943837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.854 qpair failed and we were unable to recover it. 00:35:15.854 [2024-11-02 11:47:15.944021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.854 [2024-11-02 11:47:15.944050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.854 qpair failed and we were unable to recover it. 00:35:15.854 [2024-11-02 11:47:15.944217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.854 [2024-11-02 11:47:15.944247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.854 qpair failed and we were unable to recover it. 00:35:15.854 [2024-11-02 11:47:15.944428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.854 [2024-11-02 11:47:15.944458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.854 qpair failed and we were unable to recover it. 00:35:15.854 [2024-11-02 11:47:15.944595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.854 [2024-11-02 11:47:15.944626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.854 qpair failed and we were unable to recover it. 
00:35:15.854 [2024-11-02 11:47:15.944820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.854 [2024-11-02 11:47:15.944849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.854 qpair failed and we were unable to recover it. 00:35:15.854 [2024-11-02 11:47:15.944984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.854 [2024-11-02 11:47:15.945013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.854 qpair failed and we were unable to recover it. 00:35:15.854 [2024-11-02 11:47:15.945201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.854 [2024-11-02 11:47:15.945229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.854 qpair failed and we were unable to recover it. 00:35:15.854 [2024-11-02 11:47:15.945385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.854 [2024-11-02 11:47:15.945413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.854 qpair failed and we were unable to recover it. 00:35:15.854 [2024-11-02 11:47:15.945562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.854 [2024-11-02 11:47:15.945606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.854 qpair failed and we were unable to recover it. 00:35:15.854 [2024-11-02 11:47:15.945801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.854 [2024-11-02 11:47:15.945844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.854 qpair failed and we were unable to recover it. 00:35:15.854 [2024-11-02 11:47:15.945985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.854 [2024-11-02 11:47:15.946028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.854 qpair failed and we were unable to recover it. 00:35:15.854 [2024-11-02 11:47:15.946182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.854 [2024-11-02 11:47:15.946209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.854 qpair failed and we were unable to recover it. 00:35:15.854 [2024-11-02 11:47:15.946371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.854 [2024-11-02 11:47:15.946398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.854 qpair failed and we were unable to recover it. 00:35:15.854 [2024-11-02 11:47:15.946568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.854 [2024-11-02 11:47:15.946597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.854 qpair failed and we were unable to recover it. 
00:35:15.854 [2024-11-02 11:47:15.946873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.854 [2024-11-02 11:47:15.946926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.854 qpair failed and we were unable to recover it. 00:35:15.854 [2024-11-02 11:47:15.947076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.854 [2024-11-02 11:47:15.947103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.854 qpair failed and we were unable to recover it. 00:35:15.854 [2024-11-02 11:47:15.947247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.854 [2024-11-02 11:47:15.947281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.854 qpair failed and we were unable to recover it. 00:35:15.854 [2024-11-02 11:47:15.947444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.854 [2024-11-02 11:47:15.947470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.854 qpair failed and we were unable to recover it. 00:35:15.854 [2024-11-02 11:47:15.947614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.854 [2024-11-02 11:47:15.947658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.854 qpair failed and we were unable to recover it. 00:35:15.854 [2024-11-02 11:47:15.947826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.854 [2024-11-02 11:47:15.947871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.854 qpair failed and we were unable to recover it. 00:35:15.854 [2024-11-02 11:47:15.948044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.854 [2024-11-02 11:47:15.948070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.854 qpair failed and we were unable to recover it. 00:35:15.854 [2024-11-02 11:47:15.948221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.854 [2024-11-02 11:47:15.948249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.854 qpair failed and we were unable to recover it. 00:35:15.854 [2024-11-02 11:47:15.948434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.854 [2024-11-02 11:47:15.948460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.854 qpair failed and we were unable to recover it. 00:35:15.854 [2024-11-02 11:47:15.948658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.854 [2024-11-02 11:47:15.948704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.854 qpair failed and we were unable to recover it. 
00:35:15.854 [2024-11-02 11:47:15.948878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.854 [2024-11-02 11:47:15.948922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.854 qpair failed and we were unable to recover it. 00:35:15.854 [2024-11-02 11:47:15.949071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.854 [2024-11-02 11:47:15.949098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:15.854 qpair failed and we were unable to recover it. 00:35:15.854 [2024-11-02 11:47:15.949267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.854 [2024-11-02 11:47:15.949312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.854 qpair failed and we were unable to recover it. 00:35:15.854 [2024-11-02 11:47:15.949481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.854 [2024-11-02 11:47:15.949510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.854 qpair failed and we were unable to recover it. 00:35:15.854 [2024-11-02 11:47:15.949704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.854 [2024-11-02 11:47:15.949733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.854 qpair failed and we were unable to recover it. 00:35:15.854 [2024-11-02 11:47:15.949916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.855 [2024-11-02 11:47:15.949945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.855 qpair failed and we were unable to recover it. 00:35:15.855 [2024-11-02 11:47:15.950104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.855 [2024-11-02 11:47:15.950133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.855 qpair failed and we were unable to recover it. 00:35:15.855 [2024-11-02 11:47:15.950324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.855 [2024-11-02 11:47:15.950352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.855 qpair failed and we were unable to recover it. 00:35:15.855 [2024-11-02 11:47:15.950544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.855 [2024-11-02 11:47:15.950574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.855 qpair failed and we were unable to recover it. 00:35:15.855 [2024-11-02 11:47:15.950757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.855 [2024-11-02 11:47:15.950786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.855 qpair failed and we were unable to recover it. 
00:35:15.855 [2024-11-02 11:47:15.951010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.855 [2024-11-02 11:47:15.951039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.855 qpair failed and we were unable to recover it. 00:35:15.855 [2024-11-02 11:47:15.951198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.855 [2024-11-02 11:47:15.951227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.855 qpair failed and we were unable to recover it. 00:35:15.855 [2024-11-02 11:47:15.951431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.855 [2024-11-02 11:47:15.951458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.855 qpair failed and we were unable to recover it. 00:35:15.855 [2024-11-02 11:47:15.951661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.855 [2024-11-02 11:47:15.951690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.855 qpair failed and we were unable to recover it. 00:35:15.855 [2024-11-02 11:47:15.951879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.855 [2024-11-02 11:47:15.951908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.855 qpair failed and we were unable to recover it. 00:35:15.855 [2024-11-02 11:47:15.952075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.855 [2024-11-02 11:47:15.952103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.855 qpair failed and we were unable to recover it. 00:35:15.855 [2024-11-02 11:47:15.952291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.855 [2024-11-02 11:47:15.952334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.855 qpair failed and we were unable to recover it. 00:35:15.855 [2024-11-02 11:47:15.952466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.855 [2024-11-02 11:47:15.952493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.855 qpair failed and we were unable to recover it. 00:35:15.855 [2024-11-02 11:47:15.952686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.855 [2024-11-02 11:47:15.952713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.855 qpair failed and we were unable to recover it. 00:35:15.855 [2024-11-02 11:47:15.952885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.855 [2024-11-02 11:47:15.952915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.855 qpair failed and we were unable to recover it. 
00:35:15.855 [2024-11-02 11:47:15.953044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.855 [2024-11-02 11:47:15.953074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.855 qpair failed and we were unable to recover it. 00:35:15.855 [2024-11-02 11:47:15.953268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.855 [2024-11-02 11:47:15.953312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.855 qpair failed and we were unable to recover it. 00:35:15.855 [2024-11-02 11:47:15.953461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.855 [2024-11-02 11:47:15.953487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.855 qpair failed and we were unable to recover it. 00:35:15.855 [2024-11-02 11:47:15.953657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.855 [2024-11-02 11:47:15.953686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.855 qpair failed and we were unable to recover it. 00:35:15.855 [2024-11-02 11:47:15.953869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.855 [2024-11-02 11:47:15.953897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.855 qpair failed and we were unable to recover it. 00:35:15.855 [2024-11-02 11:47:15.954060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.855 [2024-11-02 11:47:15.954089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.855 qpair failed and we were unable to recover it. 00:35:15.855 [2024-11-02 11:47:15.954222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.855 [2024-11-02 11:47:15.954247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.855 qpair failed and we were unable to recover it. 00:35:15.855 [2024-11-02 11:47:15.954405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.855 [2024-11-02 11:47:15.954431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.855 qpair failed and we were unable to recover it. 00:35:15.855 [2024-11-02 11:47:15.954592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.855 [2024-11-02 11:47:15.954622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.855 qpair failed and we were unable to recover it. 00:35:15.855 [2024-11-02 11:47:15.954812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.855 [2024-11-02 11:47:15.954842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.855 qpair failed and we were unable to recover it. 
00:35:15.855 [2024-11-02 11:47:15.954977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.855 [2024-11-02 11:47:15.955011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.855 qpair failed and we were unable to recover it. 00:35:15.855 [2024-11-02 11:47:15.955170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.855 [2024-11-02 11:47:15.955200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.855 qpair failed and we were unable to recover it. 00:35:15.855 [2024-11-02 11:47:15.955379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.855 [2024-11-02 11:47:15.955406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.855 qpair failed and we were unable to recover it. 00:35:15.855 [2024-11-02 11:47:15.955603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.855 [2024-11-02 11:47:15.955632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.855 qpair failed and we were unable to recover it. 00:35:15.855 [2024-11-02 11:47:15.955770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.855 [2024-11-02 11:47:15.955796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.855 qpair failed and we were unable to recover it. 00:35:15.855 [2024-11-02 11:47:15.955912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.855 [2024-11-02 11:47:15.955938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.855 qpair failed and we were unable to recover it. 00:35:15.855 [2024-11-02 11:47:15.956118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.855 [2024-11-02 11:47:15.956146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.855 qpair failed and we were unable to recover it. 00:35:15.855 [2024-11-02 11:47:15.956323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.855 [2024-11-02 11:47:15.956350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.855 qpair failed and we were unable to recover it. 00:35:15.855 [2024-11-02 11:47:15.956477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.855 [2024-11-02 11:47:15.956502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.855 qpair failed and we were unable to recover it. 00:35:15.855 [2024-11-02 11:47:15.956651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.855 [2024-11-02 11:47:15.956677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.855 qpair failed and we were unable to recover it. 
00:35:15.855 [2024-11-02 11:47:15.956881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.855 [2024-11-02 11:47:15.956910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.855 qpair failed and we were unable to recover it. 00:35:15.855 [2024-11-02 11:47:15.957069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.855 [2024-11-02 11:47:15.957097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.855 qpair failed and we were unable to recover it. 00:35:15.855 [2024-11-02 11:47:15.957231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.855 [2024-11-02 11:47:15.957262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.855 qpair failed and we were unable to recover it. 00:35:15.855 [2024-11-02 11:47:15.957447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.855 [2024-11-02 11:47:15.957472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.856 qpair failed and we were unable to recover it. 00:35:15.856 [2024-11-02 11:47:15.957617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.856 [2024-11-02 11:47:15.957646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.856 qpair failed and we were unable to recover it. 00:35:15.856 [2024-11-02 11:47:15.957770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.856 [2024-11-02 11:47:15.957799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.856 qpair failed and we were unable to recover it. 00:35:15.856 [2024-11-02 11:47:15.958026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.856 [2024-11-02 11:47:15.958055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.856 qpair failed and we were unable to recover it. 00:35:15.856 [2024-11-02 11:47:15.958223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.856 [2024-11-02 11:47:15.958251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.856 qpair failed and we were unable to recover it. 00:35:15.856 [2024-11-02 11:47:15.958424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.856 [2024-11-02 11:47:15.958451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.856 qpair failed and we were unable to recover it. 00:35:15.856 [2024-11-02 11:47:15.958649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.856 [2024-11-02 11:47:15.958678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.856 qpair failed and we were unable to recover it. 
00:35:15.856 [2024-11-02 11:47:15.958811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.856 [2024-11-02 11:47:15.958854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.856 qpair failed and we were unable to recover it. 00:35:15.856 [2024-11-02 11:47:15.959029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.856 [2024-11-02 11:47:15.959057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.856 qpair failed and we were unable to recover it. 00:35:15.856 [2024-11-02 11:47:15.959217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.856 [2024-11-02 11:47:15.959246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.856 qpair failed and we were unable to recover it. 00:35:15.856 [2024-11-02 11:47:15.959405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.856 [2024-11-02 11:47:15.959431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.856 qpair failed and we were unable to recover it. 00:35:15.856 [2024-11-02 11:47:15.959581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.856 [2024-11-02 11:47:15.959607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.856 qpair failed and we were unable to recover it. 00:35:15.856 [2024-11-02 11:47:15.959769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.856 [2024-11-02 11:47:15.959797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.856 qpair failed and we were unable to recover it. 00:35:15.856 [2024-11-02 11:47:15.959926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.856 [2024-11-02 11:47:15.959956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.856 qpair failed and we were unable to recover it. 00:35:15.856 [2024-11-02 11:47:15.960106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.856 [2024-11-02 11:47:15.960135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.856 qpair failed and we were unable to recover it. 00:35:15.856 [2024-11-02 11:47:15.960326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.856 [2024-11-02 11:47:15.960353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.856 qpair failed and we were unable to recover it. 00:35:15.856 [2024-11-02 11:47:15.960503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.856 [2024-11-02 11:47:15.960546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.856 qpair failed and we were unable to recover it. 
00:35:15.856 [2024-11-02 11:47:15.960708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.856 [2024-11-02 11:47:15.960734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.856 qpair failed and we were unable to recover it. 00:35:15.856 [2024-11-02 11:47:15.960879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.856 [2024-11-02 11:47:15.960904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.856 qpair failed and we were unable to recover it. 00:35:15.856 [2024-11-02 11:47:15.961083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.856 [2024-11-02 11:47:15.961113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.856 qpair failed and we were unable to recover it. 00:35:15.856 [2024-11-02 11:47:15.961267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.856 [2024-11-02 11:47:15.961294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.856 qpair failed and we were unable to recover it. 00:35:15.856 [2024-11-02 11:47:15.961440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.856 [2024-11-02 11:47:15.961466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.856 qpair failed and we were unable to recover it. 00:35:15.856 [2024-11-02 11:47:15.961609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.856 [2024-11-02 11:47:15.961653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.856 qpair failed and we were unable to recover it. 00:35:15.856 [2024-11-02 11:47:15.961841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.856 [2024-11-02 11:47:15.961869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.856 qpair failed and we were unable to recover it. 00:35:15.856 [2024-11-02 11:47:15.962037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.856 [2024-11-02 11:47:15.962065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.856 qpair failed and we were unable to recover it. 00:35:15.856 [2024-11-02 11:47:15.962224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.856 [2024-11-02 11:47:15.962253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.856 qpair failed and we were unable to recover it. 00:35:15.856 [2024-11-02 11:47:15.962445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.856 [2024-11-02 11:47:15.962471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.856 qpair failed and we were unable to recover it. 
00:35:15.856 [2024-11-02 11:47:15.962615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.856 [2024-11-02 11:47:15.962645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.856 qpair failed and we were unable to recover it. 00:35:15.856 [2024-11-02 11:47:15.962817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.856 [2024-11-02 11:47:15.962846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.856 qpair failed and we were unable to recover it. 00:35:15.856 [2024-11-02 11:47:15.963004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.856 [2024-11-02 11:47:15.963034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.856 qpair failed and we were unable to recover it. 00:35:15.856 [2024-11-02 11:47:15.963163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.856 [2024-11-02 11:47:15.963192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.856 qpair failed and we were unable to recover it. 00:35:15.856 [2024-11-02 11:47:15.963386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.856 [2024-11-02 11:47:15.963412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.856 qpair failed and we were unable to recover it. 00:35:15.856 [2024-11-02 11:47:15.963563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.856 [2024-11-02 11:47:15.963607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.856 qpair failed and we were unable to recover it. 00:35:15.856 [2024-11-02 11:47:15.963781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.856 [2024-11-02 11:47:15.963807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.856 qpair failed and we were unable to recover it. 00:35:15.856 [2024-11-02 11:47:15.964007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.856 [2024-11-02 11:47:15.964036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.856 qpair failed and we were unable to recover it. 00:35:15.856 [2024-11-02 11:47:15.964174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.856 [2024-11-02 11:47:15.964200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.856 qpair failed and we were unable to recover it. 00:35:15.856 [2024-11-02 11:47:15.964354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.856 [2024-11-02 11:47:15.964401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.856 qpair failed and we were unable to recover it. 
00:35:15.856 [2024-11-02 11:47:15.964536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.856 [2024-11-02 11:47:15.964565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.856 qpair failed and we were unable to recover it. 00:35:15.856 [2024-11-02 11:47:15.964737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.856 [2024-11-02 11:47:15.964763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.856 qpair failed and we were unable to recover it. 00:35:15.856 [2024-11-02 11:47:15.964910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.857 [2024-11-02 11:47:15.964936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.857 qpair failed and we were unable to recover it. 00:35:15.857 [2024-11-02 11:47:15.965126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.857 [2024-11-02 11:47:15.965155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.857 qpair failed and we were unable to recover it. 00:35:15.857 [2024-11-02 11:47:15.965344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.857 [2024-11-02 11:47:15.965374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.857 qpair failed and we were unable to recover it. 00:35:15.857 [2024-11-02 11:47:15.965541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.857 [2024-11-02 11:47:15.965570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.857 qpair failed and we were unable to recover it. 00:35:15.857 [2024-11-02 11:47:15.965760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.857 [2024-11-02 11:47:15.965786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.857 qpair failed and we were unable to recover it. 00:35:15.857 [2024-11-02 11:47:15.965902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.857 [2024-11-02 11:47:15.965929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.857 qpair failed and we were unable to recover it. 00:35:15.857 [2024-11-02 11:47:15.966104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.857 [2024-11-02 11:47:15.966132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.857 qpair failed and we were unable to recover it. 00:35:15.857 [2024-11-02 11:47:15.966331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.857 [2024-11-02 11:47:15.966357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.857 qpair failed and we were unable to recover it. 
00:35:15.857 [2024-11-02 11:47:15.966485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.857 [2024-11-02 11:47:15.966510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.857 qpair failed and we were unable to recover it. 00:35:15.857 [2024-11-02 11:47:15.966685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.857 [2024-11-02 11:47:15.966711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.857 qpair failed and we were unable to recover it. 00:35:15.857 [2024-11-02 11:47:15.966862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.857 [2024-11-02 11:47:15.966888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.857 qpair failed and we were unable to recover it. 00:35:15.857 [2024-11-02 11:47:15.967037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.857 [2024-11-02 11:47:15.967062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.857 qpair failed and we were unable to recover it. 00:35:15.857 [2024-11-02 11:47:15.967205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.857 [2024-11-02 11:47:15.967230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.857 qpair failed and we were unable to recover it. 00:35:15.857 [2024-11-02 11:47:15.967356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.857 [2024-11-02 11:47:15.967383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.857 qpair failed and we were unable to recover it. 00:35:15.857 [2024-11-02 11:47:15.967529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.857 [2024-11-02 11:47:15.967555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.857 qpair failed and we were unable to recover it. 00:35:15.857 [2024-11-02 11:47:15.967726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.857 [2024-11-02 11:47:15.967755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.857 qpair failed and we were unable to recover it. 00:35:15.857 [2024-11-02 11:47:15.967952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.857 [2024-11-02 11:47:15.967977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.857 qpair failed and we were unable to recover it. 00:35:15.857 [2024-11-02 11:47:15.968174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.857 [2024-11-02 11:47:15.968203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.857 qpair failed and we were unable to recover it. 
00:35:15.857 [2024-11-02 11:47:15.968336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.857 [2024-11-02 11:47:15.968367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.857 qpair failed and we were unable to recover it. 00:35:15.857 [2024-11-02 11:47:15.968544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.857 [2024-11-02 11:47:15.968573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.857 qpair failed and we were unable to recover it. 00:35:15.857 [2024-11-02 11:47:15.968763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.857 [2024-11-02 11:47:15.968789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.857 qpair failed and we were unable to recover it. 00:35:15.857 [2024-11-02 11:47:15.968911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.857 [2024-11-02 11:47:15.968936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.857 qpair failed and we were unable to recover it. 00:35:15.857 [2024-11-02 11:47:15.969111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.857 [2024-11-02 11:47:15.969137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.857 qpair failed and we were unable to recover it. 00:35:15.857 [2024-11-02 11:47:15.969298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.857 [2024-11-02 11:47:15.969325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.857 qpair failed and we were unable to recover it. 00:35:15.857 [2024-11-02 11:47:15.969440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.857 [2024-11-02 11:47:15.969466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.857 qpair failed and we were unable to recover it. 00:35:15.857 [2024-11-02 11:47:15.969643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.857 [2024-11-02 11:47:15.969668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.857 qpair failed and we were unable to recover it. 00:35:15.857 [2024-11-02 11:47:15.969808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.857 [2024-11-02 11:47:15.969837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.857 qpair failed and we were unable to recover it. 00:35:15.857 [2024-11-02 11:47:15.969985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.857 [2024-11-02 11:47:15.970011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.857 qpair failed and we were unable to recover it. 
00:35:15.857 [2024-11-02 11:47:15.970125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.857 [2024-11-02 11:47:15.970155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.857 qpair failed and we were unable to recover it. 00:35:15.857 [2024-11-02 11:47:15.970335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.857 [2024-11-02 11:47:15.970366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.857 qpair failed and we were unable to recover it. 00:35:15.857 [2024-11-02 11:47:15.970553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.857 [2024-11-02 11:47:15.970581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.857 qpair failed and we were unable to recover it. 00:35:15.857 [2024-11-02 11:47:15.970746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.857 [2024-11-02 11:47:15.970777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.857 qpair failed and we were unable to recover it. 00:35:15.857 [2024-11-02 11:47:15.970944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.857 [2024-11-02 11:47:15.970970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.857 qpair failed and we were unable to recover it. 00:35:15.857 [2024-11-02 11:47:15.971118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.857 [2024-11-02 11:47:15.971143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.857 qpair failed and we were unable to recover it. 00:35:15.857 [2024-11-02 11:47:15.971333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.857 [2024-11-02 11:47:15.971359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.857 qpair failed and we were unable to recover it. 00:35:15.857 [2024-11-02 11:47:15.971478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.857 [2024-11-02 11:47:15.971505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.857 qpair failed and we were unable to recover it. 00:35:15.857 [2024-11-02 11:47:15.971650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.857 [2024-11-02 11:47:15.971676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.857 qpair failed and we were unable to recover it. 00:35:15.857 [2024-11-02 11:47:15.971829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.857 [2024-11-02 11:47:15.971856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.857 qpair failed and we were unable to recover it. 
00:35:15.857 [2024-11-02 11:47:15.972021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.858 [2024-11-02 11:47:15.972050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.858 qpair failed and we were unable to recover it. 00:35:15.858 [2024-11-02 11:47:15.972209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.858 [2024-11-02 11:47:15.972238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.858 qpair failed and we were unable to recover it. 00:35:15.858 [2024-11-02 11:47:15.972411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.858 [2024-11-02 11:47:15.972437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.858 qpair failed and we were unable to recover it. 00:35:15.858 [2024-11-02 11:47:15.972611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.858 [2024-11-02 11:47:15.972637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.858 qpair failed and we were unable to recover it. 00:35:15.858 [2024-11-02 11:47:15.972750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.858 [2024-11-02 11:47:15.972776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.858 qpair failed and we were unable to recover it. 00:35:15.858 [2024-11-02 11:47:15.972925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.858 [2024-11-02 11:47:15.972954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.858 qpair failed and we were unable to recover it. 00:35:15.858 [2024-11-02 11:47:15.973100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.858 [2024-11-02 11:47:15.973127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.858 qpair failed and we were unable to recover it. 00:35:15.858 [2024-11-02 11:47:15.973302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.858 [2024-11-02 11:47:15.973349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.858 qpair failed and we were unable to recover it. 00:35:15.858 [2024-11-02 11:47:15.973512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.858 [2024-11-02 11:47:15.973541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.858 qpair failed and we were unable to recover it. 00:35:15.858 [2024-11-02 11:47:15.973730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.858 [2024-11-02 11:47:15.973758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.858 qpair failed and we were unable to recover it. 
00:35:15.858 [2024-11-02 11:47:15.973928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.858 [2024-11-02 11:47:15.973953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.858 qpair failed and we were unable to recover it. 00:35:15.858 [2024-11-02 11:47:15.974128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.858 [2024-11-02 11:47:15.974153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.858 qpair failed and we were unable to recover it. 00:35:15.858 [2024-11-02 11:47:15.974281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.858 [2024-11-02 11:47:15.974307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.858 qpair failed and we were unable to recover it. 00:35:15.858 [2024-11-02 11:47:15.974479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.858 [2024-11-02 11:47:15.974507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.858 qpair failed and we were unable to recover it. 00:35:15.858 [2024-11-02 11:47:15.974673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.858 [2024-11-02 11:47:15.974699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.858 qpair failed and we were unable to recover it. 00:35:15.858 [2024-11-02 11:47:15.974867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.858 [2024-11-02 11:47:15.974896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.858 qpair failed and we were unable to recover it. 00:35:15.858 [2024-11-02 11:47:15.975080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.858 [2024-11-02 11:47:15.975119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.858 qpair failed and we were unable to recover it. 00:35:15.858 [2024-11-02 11:47:15.975317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.858 [2024-11-02 11:47:15.975350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.858 qpair failed and we were unable to recover it. 00:35:15.858 [2024-11-02 11:47:15.975525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.858 [2024-11-02 11:47:15.975551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.858 qpair failed and we were unable to recover it. 00:35:15.858 [2024-11-02 11:47:15.975717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.858 [2024-11-02 11:47:15.975746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.858 qpair failed and we were unable to recover it. 
00:35:15.858 [2024-11-02 11:47:15.975908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.858 [2024-11-02 11:47:15.975937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.858 qpair failed and we were unable to recover it. 00:35:15.858 [2024-11-02 11:47:15.976091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.858 [2024-11-02 11:47:15.976119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.858 qpair failed and we were unable to recover it. 00:35:15.858 [2024-11-02 11:47:15.976290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.858 [2024-11-02 11:47:15.976317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.858 qpair failed and we were unable to recover it. 00:35:15.858 [2024-11-02 11:47:15.976437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.858 [2024-11-02 11:47:15.976479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.858 qpair failed and we were unable to recover it. 00:35:15.858 [2024-11-02 11:47:15.976638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.858 [2024-11-02 11:47:15.976666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.858 qpair failed and we were unable to recover it. 00:35:15.858 [2024-11-02 11:47:15.976827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.858 [2024-11-02 11:47:15.976855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.858 qpair failed and we were unable to recover it. 00:35:15.858 [2024-11-02 11:47:15.977052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.858 [2024-11-02 11:47:15.977077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.858 qpair failed and we were unable to recover it. 00:35:15.858 [2024-11-02 11:47:15.977239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.858 [2024-11-02 11:47:15.977274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.858 qpair failed and we were unable to recover it. 00:35:15.858 [2024-11-02 11:47:15.977437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.858 [2024-11-02 11:47:15.977463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.858 qpair failed and we were unable to recover it. 00:35:15.858 [2024-11-02 11:47:15.977634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.858 [2024-11-02 11:47:15.977662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.858 qpair failed and we were unable to recover it. 
00:35:15.858 [2024-11-02 11:47:15.977832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.858 [2024-11-02 11:47:15.977857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.858 qpair failed and we were unable to recover it. 00:35:15.858 [2024-11-02 11:47:15.978056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.858 [2024-11-02 11:47:15.978085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.858 qpair failed and we were unable to recover it. 00:35:15.858 [2024-11-02 11:47:15.978216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.858 [2024-11-02 11:47:15.978244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.858 qpair failed and we were unable to recover it. 00:35:15.858 [2024-11-02 11:47:15.978420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.858 [2024-11-02 11:47:15.978445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.858 qpair failed and we were unable to recover it. 00:35:15.858 [2024-11-02 11:47:15.978593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.858 [2024-11-02 11:47:15.978619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.858 qpair failed and we were unable to recover it. 00:35:15.858 [2024-11-02 11:47:15.978807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.858 [2024-11-02 11:47:15.978836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.858 qpair failed and we were unable to recover it. 00:35:15.858 [2024-11-02 11:47:15.979000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.858 [2024-11-02 11:47:15.979025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.858 qpair failed and we were unable to recover it. 00:35:15.858 [2024-11-02 11:47:15.979195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.858 [2024-11-02 11:47:15.979237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.858 qpair failed and we were unable to recover it. 00:35:15.858 [2024-11-02 11:47:15.979394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.858 [2024-11-02 11:47:15.979421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.858 qpair failed and we were unable to recover it. 00:35:15.859 [2024-11-02 11:47:15.979543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.859 [2024-11-02 11:47:15.979568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.859 qpair failed and we were unable to recover it. 
00:35:15.859 [2024-11-02 11:47:15.979748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.859 [2024-11-02 11:47:15.979773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.859 qpair failed and we were unable to recover it. 00:35:15.859 [2024-11-02 11:47:15.979894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.859 [2024-11-02 11:47:15.979920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.859 qpair failed and we were unable to recover it. 00:35:15.859 [2024-11-02 11:47:15.980042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.859 [2024-11-02 11:47:15.980069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.859 qpair failed and we were unable to recover it. 00:35:15.859 [2024-11-02 11:47:15.980216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.859 [2024-11-02 11:47:15.980242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.859 qpair failed and we were unable to recover it. 00:35:15.859 [2024-11-02 11:47:15.980408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.859 [2024-11-02 11:47:15.980447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.859 qpair failed and we were unable to recover it. 00:35:15.859 [2024-11-02 11:47:15.980654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.859 [2024-11-02 11:47:15.980684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.859 qpair failed and we were unable to recover it. 00:35:15.859 [2024-11-02 11:47:15.980853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.859 [2024-11-02 11:47:15.980879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.859 qpair failed and we were unable to recover it. 00:35:15.859 [2024-11-02 11:47:15.981004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.859 [2024-11-02 11:47:15.981048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.859 qpair failed and we were unable to recover it. 00:35:15.859 [2024-11-02 11:47:15.981235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.859 [2024-11-02 11:47:15.981273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.859 qpair failed and we were unable to recover it. 00:35:15.859 [2024-11-02 11:47:15.981423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.859 [2024-11-02 11:47:15.981449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.859 qpair failed and we were unable to recover it. 
00:35:15.859 [2024-11-02 11:47:15.981628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.859 [2024-11-02 11:47:15.981654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.859 qpair failed and we were unable to recover it. 00:35:15.859 [2024-11-02 11:47:15.981824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.859 [2024-11-02 11:47:15.981852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.859 qpair failed and we were unable to recover it. 00:35:15.859 [2024-11-02 11:47:15.981997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.859 [2024-11-02 11:47:15.982022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.859 qpair failed and we were unable to recover it. 00:35:15.859 [2024-11-02 11:47:15.982194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.859 [2024-11-02 11:47:15.982220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.859 qpair failed and we were unable to recover it. 00:35:15.859 [2024-11-02 11:47:15.982413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.859 [2024-11-02 11:47:15.982439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.859 qpair failed and we were unable to recover it. 00:35:15.859 [2024-11-02 11:47:15.982557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.859 [2024-11-02 11:47:15.982583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.859 qpair failed and we were unable to recover it. 00:35:15.859 [2024-11-02 11:47:15.982799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.859 [2024-11-02 11:47:15.982825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.859 qpair failed and we were unable to recover it. 00:35:15.859 [2024-11-02 11:47:15.982994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.859 [2024-11-02 11:47:15.983024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.859 qpair failed and we were unable to recover it. 00:35:15.859 [2024-11-02 11:47:15.983199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.859 [2024-11-02 11:47:15.983224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.859 qpair failed and we were unable to recover it. 00:35:15.859 [2024-11-02 11:47:15.983388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.859 [2024-11-02 11:47:15.983414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.859 qpair failed and we were unable to recover it. 
00:35:15.859 [2024-11-02 11:47:15.983564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.859 [2024-11-02 11:47:15.983591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.859 qpair failed and we were unable to recover it. 00:35:15.859 [2024-11-02 11:47:15.983766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.859 [2024-11-02 11:47:15.983792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.859 qpair failed and we were unable to recover it. 00:35:15.859 [2024-11-02 11:47:15.983908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.859 [2024-11-02 11:47:15.983933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.859 qpair failed and we were unable to recover it. 00:35:15.859 [2024-11-02 11:47:15.984051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.859 [2024-11-02 11:47:15.984076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.859 qpair failed and we were unable to recover it. 00:35:15.859 [2024-11-02 11:47:15.984245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.859 [2024-11-02 11:47:15.984281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.859 qpair failed and we were unable to recover it. 00:35:15.859 [2024-11-02 11:47:15.984439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.859 [2024-11-02 11:47:15.984465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.859 qpair failed and we were unable to recover it. 00:35:15.859 [2024-11-02 11:47:15.984612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.859 [2024-11-02 11:47:15.984638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.859 qpair failed and we were unable to recover it. 00:35:15.859 [2024-11-02 11:47:15.984792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.859 [2024-11-02 11:47:15.984818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.859 qpair failed and we were unable to recover it. 00:35:15.859 [2024-11-02 11:47:15.984968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.859 [2024-11-02 11:47:15.985013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.859 qpair failed and we were unable to recover it. 00:35:15.859 [2024-11-02 11:47:15.985198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.859 [2024-11-02 11:47:15.985227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.859 qpair failed and we were unable to recover it. 
00:35:15.859 [2024-11-02 11:47:15.985398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.859 [2024-11-02 11:47:15.985425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.859 qpair failed and we were unable to recover it. 00:35:15.859 [2024-11-02 11:47:15.985561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.859 [2024-11-02 11:47:15.985587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.859 qpair failed and we were unable to recover it. 00:35:15.859 [2024-11-02 11:47:15.985730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.859 [2024-11-02 11:47:15.985756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.859 qpair failed and we were unable to recover it. 00:35:15.859 [2024-11-02 11:47:15.985951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.860 [2024-11-02 11:47:15.985980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.860 qpair failed and we were unable to recover it. 00:35:15.860 [2024-11-02 11:47:15.986169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.860 [2024-11-02 11:47:15.986194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.860 qpair failed and we were unable to recover it. 00:35:15.860 [2024-11-02 11:47:15.986339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.860 [2024-11-02 11:47:15.986366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.860 qpair failed and we were unable to recover it. 00:35:15.860 [2024-11-02 11:47:15.986491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.860 [2024-11-02 11:47:15.986518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.860 qpair failed and we were unable to recover it. 00:35:15.860 [2024-11-02 11:47:15.986691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.860 [2024-11-02 11:47:15.986719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.860 qpair failed and we were unable to recover it. 00:35:15.860 [2024-11-02 11:47:15.986863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.860 [2024-11-02 11:47:15.986888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.860 qpair failed and we were unable to recover it. 00:35:15.860 [2024-11-02 11:47:15.987010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.860 [2024-11-02 11:47:15.987036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.860 qpair failed and we were unable to recover it. 
00:35:15.860 [2024-11-02 11:47:15.987207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.860 [2024-11-02 11:47:15.987235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.860 qpair failed and we were unable to recover it. 00:35:15.860 [2024-11-02 11:47:15.987414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.860 [2024-11-02 11:47:15.987439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.860 qpair failed and we were unable to recover it. 00:35:15.860 [2024-11-02 11:47:15.987586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.860 [2024-11-02 11:47:15.987612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.860 qpair failed and we were unable to recover it. 00:35:15.860 [2024-11-02 11:47:15.987731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.860 [2024-11-02 11:47:15.987774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.860 qpair failed and we were unable to recover it. 00:35:15.860 [2024-11-02 11:47:15.987931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.860 [2024-11-02 11:47:15.987959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.860 qpair failed and we were unable to recover it. 00:35:15.860 [2024-11-02 11:47:15.988125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.860 [2024-11-02 11:47:15.988154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.860 qpair failed and we were unable to recover it. 00:35:15.860 [2024-11-02 11:47:15.988348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.860 [2024-11-02 11:47:15.988374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.860 qpair failed and we were unable to recover it. 00:35:15.860 [2024-11-02 11:47:15.988515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.860 [2024-11-02 11:47:15.988540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.860 qpair failed and we were unable to recover it. 00:35:15.860 [2024-11-02 11:47:15.988683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.860 [2024-11-02 11:47:15.988725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.860 qpair failed and we were unable to recover it. 00:35:15.860 [2024-11-02 11:47:15.988887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.860 [2024-11-02 11:47:15.988915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.860 qpair failed and we were unable to recover it. 
00:35:15.860 [2024-11-02 11:47:15.989081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.860 [2024-11-02 11:47:15.989106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.860 qpair failed and we were unable to recover it. 00:35:15.860 [2024-11-02 11:47:15.989301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.860 [2024-11-02 11:47:15.989362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.860 qpair failed and we were unable to recover it. 00:35:15.860 [2024-11-02 11:47:15.989551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.860 [2024-11-02 11:47:15.989577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.860 qpair failed and we were unable to recover it. 00:35:15.860 [2024-11-02 11:47:15.989725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.860 [2024-11-02 11:47:15.989753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.860 qpair failed and we were unable to recover it. 00:35:15.860 [2024-11-02 11:47:15.989920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.860 [2024-11-02 11:47:15.989945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.860 qpair failed and we were unable to recover it. 00:35:15.860 [2024-11-02 11:47:15.990065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.860 [2024-11-02 11:47:15.990106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.860 qpair failed and we were unable to recover it. 00:35:15.860 [2024-11-02 11:47:15.990279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.860 [2024-11-02 11:47:15.990323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.860 qpair failed and we were unable to recover it. 00:35:15.860 [2024-11-02 11:47:15.990447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.860 [2024-11-02 11:47:15.990472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.860 qpair failed and we were unable to recover it. 00:35:15.860 [2024-11-02 11:47:15.990606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.860 [2024-11-02 11:47:15.990632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.860 qpair failed and we were unable to recover it. 00:35:15.860 [2024-11-02 11:47:15.990804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.860 [2024-11-02 11:47:15.990847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.860 qpair failed and we were unable to recover it. 
00:35:15.860 [2024-11-02 11:47:15.991029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.860 [2024-11-02 11:47:15.991057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.860 qpair failed and we were unable to recover it. 00:35:15.860 [2024-11-02 11:47:15.991203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.860 [2024-11-02 11:47:15.991229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.860 qpair failed and we were unable to recover it. 00:35:15.860 [2024-11-02 11:47:15.991362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.860 [2024-11-02 11:47:15.991388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.860 qpair failed and we were unable to recover it. 00:35:15.860 [2024-11-02 11:47:15.991535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.860 [2024-11-02 11:47:15.991578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.860 qpair failed and we were unable to recover it. 00:35:15.860 [2024-11-02 11:47:15.991732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.860 [2024-11-02 11:47:15.991760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.860 qpair failed and we were unable to recover it. 00:35:15.860 [2024-11-02 11:47:15.991927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.860 [2024-11-02 11:47:15.991955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.860 qpair failed and we were unable to recover it. 00:35:15.860 [2024-11-02 11:47:15.992126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.860 [2024-11-02 11:47:15.992151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.860 qpair failed and we were unable to recover it. 00:35:15.860 [2024-11-02 11:47:15.992325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.860 [2024-11-02 11:47:15.992350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.860 qpair failed and we were unable to recover it. 00:35:15.860 [2024-11-02 11:47:15.992494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.860 [2024-11-02 11:47:15.992520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.860 qpair failed and we were unable to recover it. 00:35:15.860 [2024-11-02 11:47:15.992720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.860 [2024-11-02 11:47:15.992749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.860 qpair failed and we were unable to recover it. 
00:35:15.861 [2024-11-02 11:47:15.992923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.861 [2024-11-02 11:47:15.992948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.861 qpair failed and we were unable to recover it. 00:35:15.861 [2024-11-02 11:47:15.993097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.861 [2024-11-02 11:47:15.993122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.861 qpair failed and we were unable to recover it. 00:35:15.861 [2024-11-02 11:47:15.993307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.861 [2024-11-02 11:47:15.993363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.861 qpair failed and we were unable to recover it. 00:35:15.861 [2024-11-02 11:47:15.993520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.861 [2024-11-02 11:47:15.993547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.861 qpair failed and we were unable to recover it. 00:35:15.861 [2024-11-02 11:47:15.993718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.861 [2024-11-02 11:47:15.993744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.861 qpair failed and we were unable to recover it. 00:35:15.861 [2024-11-02 11:47:15.993942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.861 [2024-11-02 11:47:15.993971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.861 qpair failed and we were unable to recover it. 00:35:15.861 [2024-11-02 11:47:15.994134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.861 [2024-11-02 11:47:15.994162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.861 qpair failed and we were unable to recover it. 00:35:15.861 [2024-11-02 11:47:15.994368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.861 [2024-11-02 11:47:15.994395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.861 qpair failed and we were unable to recover it. 00:35:15.861 [2024-11-02 11:47:15.994570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.861 [2024-11-02 11:47:15.994596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.861 qpair failed and we were unable to recover it. 00:35:15.861 [2024-11-02 11:47:15.994796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.861 [2024-11-02 11:47:15.994822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.861 qpair failed and we were unable to recover it. 
00:35:15.861 [2024-11-02 11:47:15.994992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.861 [2024-11-02 11:47:15.995035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.861 qpair failed and we were unable to recover it. 00:35:15.861 [2024-11-02 11:47:15.995197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.861 [2024-11-02 11:47:15.995228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.861 qpair failed and we were unable to recover it. 00:35:15.861 [2024-11-02 11:47:15.995410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.861 [2024-11-02 11:47:15.995437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.861 qpair failed and we were unable to recover it. 00:35:15.861 [2024-11-02 11:47:15.995590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.861 [2024-11-02 11:47:15.995615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.861 qpair failed and we were unable to recover it. 00:35:15.861 [2024-11-02 11:47:15.995819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.861 [2024-11-02 11:47:15.995847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.861 qpair failed and we were unable to recover it. 00:35:15.861 [2024-11-02 11:47:15.995985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.861 [2024-11-02 11:47:15.996015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.861 qpair failed and we were unable to recover it. 00:35:15.861 [2024-11-02 11:47:15.996189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.861 [2024-11-02 11:47:15.996214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.861 qpair failed and we were unable to recover it. 00:35:15.861 [2024-11-02 11:47:15.996371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.861 [2024-11-02 11:47:15.996398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.861 qpair failed and we were unable to recover it. 00:35:15.861 [2024-11-02 11:47:15.996523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.861 [2024-11-02 11:47:15.996550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.861 qpair failed and we were unable to recover it. 00:35:15.861 [2024-11-02 11:47:15.996707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.861 [2024-11-02 11:47:15.996749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.861 qpair failed and we were unable to recover it. 
00:35:15.861 [2024-11-02 11:47:15.996950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.861 [2024-11-02 11:47:15.996976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.861 qpair failed and we were unable to recover it. 00:35:15.861 [2024-11-02 11:47:15.997105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.861 [2024-11-02 11:47:15.997131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.861 qpair failed and we were unable to recover it. 00:35:15.861 [2024-11-02 11:47:15.997279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.861 [2024-11-02 11:47:15.997307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.861 qpair failed and we were unable to recover it. 00:35:15.861 [2024-11-02 11:47:15.997459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.861 [2024-11-02 11:47:15.997485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.861 qpair failed and we were unable to recover it. 00:35:15.861 [2024-11-02 11:47:15.997634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.861 [2024-11-02 11:47:15.997660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.861 qpair failed and we were unable to recover it. 00:35:15.861 [2024-11-02 11:47:15.997856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.861 [2024-11-02 11:47:15.997885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.861 qpair failed and we were unable to recover it. 00:35:15.861 [2024-11-02 11:47:15.998017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.861 [2024-11-02 11:47:15.998045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.861 qpair failed and we were unable to recover it. 00:35:15.861 [2024-11-02 11:47:15.998181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.861 [2024-11-02 11:47:15.998209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.861 qpair failed and we were unable to recover it. 00:35:15.861 [2024-11-02 11:47:15.998387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.861 [2024-11-02 11:47:15.998417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.861 qpair failed and we were unable to recover it. 00:35:15.861 [2024-11-02 11:47:15.998576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.861 [2024-11-02 11:47:15.998618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.861 qpair failed and we were unable to recover it. 
00:35:15.861 [2024-11-02 11:47:15.998774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.861 [2024-11-02 11:47:15.998802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.861 qpair failed and we were unable to recover it. 00:35:15.861 [2024-11-02 11:47:15.998937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.861 [2024-11-02 11:47:15.998965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.861 qpair failed and we were unable to recover it. 00:35:15.861 [2024-11-02 11:47:15.999106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.861 [2024-11-02 11:47:15.999131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.861 qpair failed and we were unable to recover it. 00:35:15.861 [2024-11-02 11:47:15.999275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.861 [2024-11-02 11:47:15.999301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.861 qpair failed and we were unable to recover it. 00:35:15.861 [2024-11-02 11:47:15.999486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.861 [2024-11-02 11:47:15.999512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.861 qpair failed and we were unable to recover it. 00:35:15.861 [2024-11-02 11:47:15.999701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.861 [2024-11-02 11:47:15.999728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.861 qpair failed and we were unable to recover it. 00:35:15.861 [2024-11-02 11:47:15.999898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.861 [2024-11-02 11:47:15.999924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.861 qpair failed and we were unable to recover it. 00:35:15.861 [2024-11-02 11:47:16.000074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.861 [2024-11-02 11:47:16.000115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.861 qpair failed and we were unable to recover it. 00:35:15.862 [2024-11-02 11:47:16.000285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.862 [2024-11-02 11:47:16.000311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.862 qpair failed and we were unable to recover it. 00:35:15.862 [2024-11-02 11:47:16.000436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.862 [2024-11-02 11:47:16.000463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.862 qpair failed and we were unable to recover it. 
00:35:15.862 [2024-11-02 11:47:16.000616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.862 [2024-11-02 11:47:16.000641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.862 qpair failed and we were unable to recover it. 00:35:15.862 [2024-11-02 11:47:16.000815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.862 [2024-11-02 11:47:16.000841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.862 qpair failed and we were unable to recover it. 00:35:15.862 [2024-11-02 11:47:16.000987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.862 [2024-11-02 11:47:16.001029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.862 qpair failed and we were unable to recover it. 00:35:15.862 [2024-11-02 11:47:16.001165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.862 [2024-11-02 11:47:16.001194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.862 qpair failed and we were unable to recover it. 00:35:15.862 [2024-11-02 11:47:16.001360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.862 [2024-11-02 11:47:16.001386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.862 qpair failed and we were unable to recover it. 00:35:15.862 [2024-11-02 11:47:16.001540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.862 [2024-11-02 11:47:16.001566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.862 qpair failed and we were unable to recover it. 00:35:15.862 [2024-11-02 11:47:16.001710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.862 [2024-11-02 11:47:16.001736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.862 qpair failed and we were unable to recover it. 00:35:15.862 [2024-11-02 11:47:16.001860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.862 [2024-11-02 11:47:16.001885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.862 qpair failed and we were unable to recover it. 00:35:15.862 [2024-11-02 11:47:16.002055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.862 [2024-11-02 11:47:16.002081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.862 qpair failed and we were unable to recover it. 00:35:15.862 [2024-11-02 11:47:16.002229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.862 [2024-11-02 11:47:16.002277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.862 qpair failed and we were unable to recover it. 
00:35:15.862 [2024-11-02 11:47:16.002442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.862 [2024-11-02 11:47:16.002467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.862 qpair failed and we were unable to recover it. 00:35:15.862 [2024-11-02 11:47:16.002616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.862 [2024-11-02 11:47:16.002641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.862 qpair failed and we were unable to recover it. 00:35:15.862 [2024-11-02 11:47:16.002793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.862 [2024-11-02 11:47:16.002819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.862 qpair failed and we were unable to recover it. 00:35:15.862 [2024-11-02 11:47:16.003009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.862 [2024-11-02 11:47:16.003037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.862 qpair failed and we were unable to recover it. 00:35:15.862 [2024-11-02 11:47:16.003205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.862 [2024-11-02 11:47:16.003231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.862 qpair failed and we were unable to recover it. 00:35:15.862 [2024-11-02 11:47:16.003420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.862 [2024-11-02 11:47:16.003446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.862 qpair failed and we were unable to recover it. 00:35:15.862 [2024-11-02 11:47:16.003567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.862 [2024-11-02 11:47:16.003594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.862 qpair failed and we were unable to recover it. 00:35:15.862 [2024-11-02 11:47:16.003737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.862 [2024-11-02 11:47:16.003762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.862 qpair failed and we were unable to recover it. 00:35:15.862 [2024-11-02 11:47:16.003959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.862 [2024-11-02 11:47:16.003988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.862 qpair failed and we were unable to recover it. 00:35:15.862 [2024-11-02 11:47:16.004173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.862 [2024-11-02 11:47:16.004201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.862 qpair failed and we were unable to recover it. 
00:35:15.862 [2024-11-02 11:47:16.004362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.862 [2024-11-02 11:47:16.004388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.862 qpair failed and we were unable to recover it. 00:35:15.862 [2024-11-02 11:47:16.004508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.862 [2024-11-02 11:47:16.004534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.862 qpair failed and we were unable to recover it. 00:35:15.862 [2024-11-02 11:47:16.004701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.862 [2024-11-02 11:47:16.004729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.862 qpair failed and we were unable to recover it. 00:35:15.862 [2024-11-02 11:47:16.004937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.862 [2024-11-02 11:47:16.004963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.862 qpair failed and we were unable to recover it. 00:35:15.862 [2024-11-02 11:47:16.005088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.862 [2024-11-02 11:47:16.005114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.862 qpair failed and we were unable to recover it. 00:35:15.862 [2024-11-02 11:47:16.005293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.862 [2024-11-02 11:47:16.005319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.862 qpair failed and we were unable to recover it. 00:35:15.862 [2024-11-02 11:47:16.005461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.862 [2024-11-02 11:47:16.005487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.862 qpair failed and we were unable to recover it. 00:35:15.862 [2024-11-02 11:47:16.005635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.862 [2024-11-02 11:47:16.005661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.862 qpair failed and we were unable to recover it. 00:35:15.862 [2024-11-02 11:47:16.005833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.862 [2024-11-02 11:47:16.005859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.862 qpair failed and we were unable to recover it. 00:35:15.862 [2024-11-02 11:47:16.005994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.862 [2024-11-02 11:47:16.006033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.862 qpair failed and we were unable to recover it. 
00:35:15.862 [2024-11-02 11:47:16.006196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.862 [2024-11-02 11:47:16.006235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.862 qpair failed and we were unable to recover it. 00:35:15.862 [2024-11-02 11:47:16.006401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.862 [2024-11-02 11:47:16.006430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.862 qpair failed and we were unable to recover it. 00:35:15.862 [2024-11-02 11:47:16.006580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.862 [2024-11-02 11:47:16.006607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.862 qpair failed and we were unable to recover it. 00:35:15.862 [2024-11-02 11:47:16.006748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.862 [2024-11-02 11:47:16.006774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.862 qpair failed and we were unable to recover it. 00:35:15.862 [2024-11-02 11:47:16.006944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.862 [2024-11-02 11:47:16.006970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.862 qpair failed and we were unable to recover it. 00:35:15.862 [2024-11-02 11:47:16.007098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.862 [2024-11-02 11:47:16.007123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.862 qpair failed and we were unable to recover it. 00:35:15.862 [2024-11-02 11:47:16.007243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.863 [2024-11-02 11:47:16.007280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.863 qpair failed and we were unable to recover it. 00:35:15.863 [2024-11-02 11:47:16.007453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.863 [2024-11-02 11:47:16.007479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.863 qpair failed and we were unable to recover it. 00:35:15.863 [2024-11-02 11:47:16.007628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.863 [2024-11-02 11:47:16.007654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.863 qpair failed and we were unable to recover it. 00:35:15.863 [2024-11-02 11:47:16.007765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.863 [2024-11-02 11:47:16.007791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.863 qpair failed and we were unable to recover it. 
00:35:15.863 [2024-11-02 11:47:16.007965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.863 [2024-11-02 11:47:16.007990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.863 qpair failed and we were unable to recover it. 00:35:15.863 [2024-11-02 11:47:16.008111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.863 [2024-11-02 11:47:16.008139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.863 qpair failed and we were unable to recover it. 00:35:15.863 [2024-11-02 11:47:16.008263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.863 [2024-11-02 11:47:16.008290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.863 qpair failed and we were unable to recover it. 00:35:15.863 [2024-11-02 11:47:16.008469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.863 [2024-11-02 11:47:16.008495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.863 qpair failed and we were unable to recover it. 00:35:15.863 [2024-11-02 11:47:16.008674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.863 [2024-11-02 11:47:16.008700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.863 qpair failed and we were unable to recover it. 00:35:15.863 [2024-11-02 11:47:16.008846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.863 [2024-11-02 11:47:16.008871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.863 qpair failed and we were unable to recover it. 00:35:15.863 [2024-11-02 11:47:16.009015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.863 [2024-11-02 11:47:16.009041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.863 qpair failed and we were unable to recover it. 00:35:15.863 [2024-11-02 11:47:16.009164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.863 [2024-11-02 11:47:16.009190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.863 qpair failed and we were unable to recover it. 00:35:15.863 [2024-11-02 11:47:16.009366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.863 [2024-11-02 11:47:16.009392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.863 qpair failed and we were unable to recover it. 00:35:15.863 [2024-11-02 11:47:16.009544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.863 [2024-11-02 11:47:16.009572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.863 qpair failed and we were unable to recover it. 
00:35:15.863 [2024-11-02 11:47:16.009752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.863 [2024-11-02 11:47:16.009778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.863 qpair failed and we were unable to recover it. 00:35:15.863 [2024-11-02 11:47:16.009904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.863 [2024-11-02 11:47:16.009930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.863 qpair failed and we were unable to recover it. 00:35:15.863 [2024-11-02 11:47:16.010056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.863 [2024-11-02 11:47:16.010083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.863 qpair failed and we were unable to recover it. 00:35:15.863 [2024-11-02 11:47:16.010202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.863 [2024-11-02 11:47:16.010230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.863 qpair failed and we were unable to recover it. 00:35:15.863 [2024-11-02 11:47:16.010385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.863 [2024-11-02 11:47:16.010412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.863 qpair failed and we were unable to recover it. 00:35:15.863 [2024-11-02 11:47:16.010543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.863 [2024-11-02 11:47:16.010571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.863 qpair failed and we were unable to recover it. 00:35:15.863 [2024-11-02 11:47:16.010751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.863 [2024-11-02 11:47:16.010777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.863 qpair failed and we were unable to recover it. 00:35:15.863 [2024-11-02 11:47:16.010889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.863 [2024-11-02 11:47:16.010915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.863 qpair failed and we were unable to recover it. 00:35:15.863 [2024-11-02 11:47:16.011061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.863 [2024-11-02 11:47:16.011087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.863 qpair failed and we were unable to recover it. 00:35:15.863 [2024-11-02 11:47:16.011228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.863 [2024-11-02 11:47:16.011264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.863 qpair failed and we were unable to recover it. 
00:35:15.863 [2024-11-02 11:47:16.011413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.863 [2024-11-02 11:47:16.011440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.863 qpair failed and we were unable to recover it. 00:35:15.863 [2024-11-02 11:47:16.011561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.863 [2024-11-02 11:47:16.011587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.863 qpair failed and we were unable to recover it. 00:35:15.863 [2024-11-02 11:47:16.011716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.863 [2024-11-02 11:47:16.011743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.863 qpair failed and we were unable to recover it. 00:35:15.863 [2024-11-02 11:47:16.011922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.863 [2024-11-02 11:47:16.011948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.863 qpair failed and we were unable to recover it. 00:35:15.863 [2024-11-02 11:47:16.012103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.863 [2024-11-02 11:47:16.012130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.863 qpair failed and we were unable to recover it. 00:35:15.863 [2024-11-02 11:47:16.012279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.863 [2024-11-02 11:47:16.012307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.863 qpair failed and we were unable to recover it. 00:35:15.863 [2024-11-02 11:47:16.012459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.863 [2024-11-02 11:47:16.012485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.863 qpair failed and we were unable to recover it. 00:35:15.863 [2024-11-02 11:47:16.012630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.863 [2024-11-02 11:47:16.012656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.863 qpair failed and we were unable to recover it. 00:35:15.863 [2024-11-02 11:47:16.012786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.863 [2024-11-02 11:47:16.012814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.863 qpair failed and we were unable to recover it. 00:35:15.863 [2024-11-02 11:47:16.012958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.863 [2024-11-02 11:47:16.012988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.863 qpair failed and we were unable to recover it. 
00:35:15.864 [2024-11-02 11:47:16.013137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.864 [2024-11-02 11:47:16.013162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.864 qpair failed and we were unable to recover it. 00:35:15.864 [2024-11-02 11:47:16.013318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.864 [2024-11-02 11:47:16.013354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.864 qpair failed and we were unable to recover it. 00:35:15.864 [2024-11-02 11:47:16.013536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.864 [2024-11-02 11:47:16.013562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.864 qpair failed and we were unable to recover it. 00:35:15.864 [2024-11-02 11:47:16.013724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.864 [2024-11-02 11:47:16.013752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.864 qpair failed and we were unable to recover it. 00:35:15.864 [2024-11-02 11:47:16.013922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.864 [2024-11-02 11:47:16.013948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.864 qpair failed and we were unable to recover it. 00:35:15.864 [2024-11-02 11:47:16.014099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.864 [2024-11-02 11:47:16.014124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.864 qpair failed and we were unable to recover it. 00:35:15.864 [2024-11-02 11:47:16.014241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.864 [2024-11-02 11:47:16.014274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.864 qpair failed and we were unable to recover it. 00:35:15.864 [2024-11-02 11:47:16.014393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.864 [2024-11-02 11:47:16.014419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.864 qpair failed and we were unable to recover it. 00:35:15.864 [2024-11-02 11:47:16.014534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.864 [2024-11-02 11:47:16.014559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.864 qpair failed and we were unable to recover it. 00:35:15.864 [2024-11-02 11:47:16.014704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.864 [2024-11-02 11:47:16.014731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.864 qpair failed and we were unable to recover it. 
00:35:15.864 [2024-11-02 11:47:16.014878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.864 [2024-11-02 11:47:16.014903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.864 qpair failed and we were unable to recover it. 00:35:15.864 [2024-11-02 11:47:16.015054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.864 [2024-11-02 11:47:16.015079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.864 qpair failed and we were unable to recover it. 00:35:15.864 [2024-11-02 11:47:16.015230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.864 [2024-11-02 11:47:16.015265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.864 qpair failed and we were unable to recover it. 00:35:15.864 [2024-11-02 11:47:16.015395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.864 [2024-11-02 11:47:16.015421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.864 qpair failed and we were unable to recover it. 00:35:15.864 [2024-11-02 11:47:16.015545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.864 [2024-11-02 11:47:16.015570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.864 qpair failed and we were unable to recover it. 00:35:15.864 [2024-11-02 11:47:16.015694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.864 [2024-11-02 11:47:16.015719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.864 qpair failed and we were unable to recover it. 00:35:15.864 [2024-11-02 11:47:16.015840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.864 [2024-11-02 11:47:16.015865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.864 qpair failed and we were unable to recover it. 00:35:15.864 [2024-11-02 11:47:16.016014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.864 [2024-11-02 11:47:16.016039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.864 qpair failed and we were unable to recover it. 00:35:15.864 [2024-11-02 11:47:16.016158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.864 [2024-11-02 11:47:16.016184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.864 qpair failed and we were unable to recover it. 00:35:15.864 [2024-11-02 11:47:16.016336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.864 [2024-11-02 11:47:16.016362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.864 qpair failed and we were unable to recover it. 
00:35:15.864 [2024-11-02 11:47:16.016510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.864 [2024-11-02 11:47:16.016536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.864 qpair failed and we were unable to recover it. 00:35:15.864 [2024-11-02 11:47:16.016692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.864 [2024-11-02 11:47:16.016718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.864 qpair failed and we were unable to recover it. 00:35:15.864 [2024-11-02 11:47:16.016893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.864 [2024-11-02 11:47:16.016919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.864 qpair failed and we were unable to recover it. 00:35:15.864 [2024-11-02 11:47:16.017088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.864 [2024-11-02 11:47:16.017114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.864 qpair failed and we were unable to recover it. 00:35:15.864 [2024-11-02 11:47:16.017285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.864 [2024-11-02 11:47:16.017311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.864 qpair failed and we were unable to recover it. 00:35:15.864 [2024-11-02 11:47:16.017458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.864 [2024-11-02 11:47:16.017483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.864 qpair failed and we were unable to recover it. 00:35:15.864 [2024-11-02 11:47:16.017624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.864 [2024-11-02 11:47:16.017653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.864 qpair failed and we were unable to recover it. 00:35:15.864 [2024-11-02 11:47:16.017798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.864 [2024-11-02 11:47:16.017824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.864 qpair failed and we were unable to recover it. 00:35:15.864 [2024-11-02 11:47:16.017981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.864 [2024-11-02 11:47:16.018006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.864 qpair failed and we were unable to recover it. 00:35:15.864 [2024-11-02 11:47:16.018125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.864 [2024-11-02 11:47:16.018150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.864 qpair failed and we were unable to recover it. 
00:35:15.864 [2024-11-02 11:47:16.018291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.864 [2024-11-02 11:47:16.018317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.864 qpair failed and we were unable to recover it. 00:35:15.864 [2024-11-02 11:47:16.018470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.864 [2024-11-02 11:47:16.018495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.864 qpair failed and we were unable to recover it. 00:35:15.864 [2024-11-02 11:47:16.018647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.864 [2024-11-02 11:47:16.018673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.864 qpair failed and we were unable to recover it. 00:35:15.864 [2024-11-02 11:47:16.018829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.864 [2024-11-02 11:47:16.018855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.864 qpair failed and we were unable to recover it. 00:35:15.864 [2024-11-02 11:47:16.018978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.864 [2024-11-02 11:47:16.019003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.864 qpair failed and we were unable to recover it. 00:35:15.864 [2024-11-02 11:47:16.019162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.864 [2024-11-02 11:47:16.019187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.864 qpair failed and we were unable to recover it. 00:35:15.864 [2024-11-02 11:47:16.019364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.864 [2024-11-02 11:47:16.019390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.864 qpair failed and we were unable to recover it. 00:35:15.864 [2024-11-02 11:47:16.019514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.864 [2024-11-02 11:47:16.019539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.864 qpair failed and we were unable to recover it. 00:35:15.864 [2024-11-02 11:47:16.019681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.864 [2024-11-02 11:47:16.019707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.865 qpair failed and we were unable to recover it. 00:35:15.865 [2024-11-02 11:47:16.019850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.865 [2024-11-02 11:47:16.019875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.865 qpair failed and we were unable to recover it. 
00:35:15.865 [2024-11-02 11:47:16.020004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.865 [2024-11-02 11:47:16.020030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.865 qpair failed and we were unable to recover it. 00:35:15.865 [2024-11-02 11:47:16.020151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.865 [2024-11-02 11:47:16.020177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.865 qpair failed and we were unable to recover it. 00:35:15.865 [2024-11-02 11:47:16.020335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.865 [2024-11-02 11:47:16.020361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.865 qpair failed and we were unable to recover it. 00:35:15.865 [2024-11-02 11:47:16.020542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.865 [2024-11-02 11:47:16.020567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.865 qpair failed and we were unable to recover it. 00:35:15.865 [2024-11-02 11:47:16.020715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.865 [2024-11-02 11:47:16.020740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.865 qpair failed and we were unable to recover it. 00:35:15.865 [2024-11-02 11:47:16.020884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.865 [2024-11-02 11:47:16.020908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.865 qpair failed and we were unable to recover it. 00:35:15.865 [2024-11-02 11:47:16.021053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.865 [2024-11-02 11:47:16.021078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.865 qpair failed and we were unable to recover it. 00:35:15.865 [2024-11-02 11:47:16.021228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.865 [2024-11-02 11:47:16.021253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.865 qpair failed and we were unable to recover it. 00:35:15.865 [2024-11-02 11:47:16.021417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.865 [2024-11-02 11:47:16.021442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.865 qpair failed and we were unable to recover it. 00:35:15.865 [2024-11-02 11:47:16.021557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.865 [2024-11-02 11:47:16.021582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.865 qpair failed and we were unable to recover it. 
00:35:15.865 [2024-11-02 11:47:16.021731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.865 [2024-11-02 11:47:16.021757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.865 qpair failed and we were unable to recover it. 00:35:15.865 [2024-11-02 11:47:16.021905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.865 [2024-11-02 11:47:16.021930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.865 qpair failed and we were unable to recover it. 00:35:15.865 [2024-11-02 11:47:16.022101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.865 [2024-11-02 11:47:16.022126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.865 qpair failed and we were unable to recover it. 00:35:15.865 [2024-11-02 11:47:16.022259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.865 [2024-11-02 11:47:16.022284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.865 qpair failed and we were unable to recover it. 00:35:15.865 [2024-11-02 11:47:16.022438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.865 [2024-11-02 11:47:16.022464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.865 qpair failed and we were unable to recover it. 00:35:15.865 [2024-11-02 11:47:16.022613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.865 [2024-11-02 11:47:16.022639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.865 qpair failed and we were unable to recover it. 00:35:15.865 [2024-11-02 11:47:16.022787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.865 [2024-11-02 11:47:16.022812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.865 qpair failed and we were unable to recover it. 00:35:15.865 [2024-11-02 11:47:16.022961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.865 [2024-11-02 11:47:16.022986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.865 qpair failed and we were unable to recover it. 00:35:15.865 [2024-11-02 11:47:16.023105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.865 [2024-11-02 11:47:16.023132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.865 qpair failed and we were unable to recover it. 00:35:15.865 [2024-11-02 11:47:16.023306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.865 [2024-11-02 11:47:16.023332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.865 qpair failed and we were unable to recover it. 
00:35:15.865 [2024-11-02 11:47:16.023484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.865 [2024-11-02 11:47:16.023509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.865 qpair failed and we were unable to recover it. 00:35:15.865 [2024-11-02 11:47:16.023632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.865 [2024-11-02 11:47:16.023657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.865 qpair failed and we were unable to recover it. 00:35:15.865 [2024-11-02 11:47:16.023783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.865 [2024-11-02 11:47:16.023809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.865 qpair failed and we were unable to recover it. 00:35:15.865 [2024-11-02 11:47:16.023928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.865 [2024-11-02 11:47:16.023954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.865 qpair failed and we were unable to recover it. 00:35:15.865 [2024-11-02 11:47:16.024095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.865 [2024-11-02 11:47:16.024121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.865 qpair failed and we were unable to recover it. 00:35:15.865 [2024-11-02 11:47:16.024235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.865 [2024-11-02 11:47:16.024270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.865 qpair failed and we were unable to recover it. 00:35:15.865 [2024-11-02 11:47:16.024423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.865 [2024-11-02 11:47:16.024449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.865 qpair failed and we were unable to recover it. 00:35:15.865 [2024-11-02 11:47:16.024608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.865 [2024-11-02 11:47:16.024647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.865 qpair failed and we were unable to recover it. 00:35:15.865 [2024-11-02 11:47:16.024770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.865 [2024-11-02 11:47:16.024798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.865 qpair failed and we were unable to recover it. 00:35:15.865 [2024-11-02 11:47:16.024926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.865 [2024-11-02 11:47:16.024954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.865 qpair failed and we were unable to recover it. 
00:35:15.865 [2024-11-02 11:47:16.025107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.865 [2024-11-02 11:47:16.025134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.865 qpair failed and we were unable to recover it. 00:35:15.865 [2024-11-02 11:47:16.025281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.865 [2024-11-02 11:47:16.025309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.865 qpair failed and we were unable to recover it. 00:35:15.865 [2024-11-02 11:47:16.025460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.865 [2024-11-02 11:47:16.025486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.865 qpair failed and we were unable to recover it. 00:35:15.865 [2024-11-02 11:47:16.025599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.865 [2024-11-02 11:47:16.025625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.865 qpair failed and we were unable to recover it. 00:35:15.865 [2024-11-02 11:47:16.025774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.865 [2024-11-02 11:47:16.025800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:15.865 qpair failed and we were unable to recover it. 00:35:15.865 [2024-11-02 11:47:16.025974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.865 [2024-11-02 11:47:16.026001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.865 qpair failed and we were unable to recover it. 00:35:15.865 [2024-11-02 11:47:16.026149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.865 [2024-11-02 11:47:16.026174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.865 qpair failed and we were unable to recover it. 00:35:15.865 [2024-11-02 11:47:16.026322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.866 [2024-11-02 11:47:16.026349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.866 qpair failed and we were unable to recover it. 00:35:15.866 [2024-11-02 11:47:16.026491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.866 [2024-11-02 11:47:16.026517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.866 qpair failed and we were unable to recover it. 00:35:15.866 [2024-11-02 11:47:16.026686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.866 [2024-11-02 11:47:16.026712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.866 qpair failed and we were unable to recover it. 
00:35:15.866 [2024-11-02 11:47:16.026880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.866 [2024-11-02 11:47:16.026905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.866 qpair failed and we were unable to recover it. 00:35:15.866 [2024-11-02 11:47:16.027035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.866 [2024-11-02 11:47:16.027061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.866 qpair failed and we were unable to recover it. 00:35:15.866 [2024-11-02 11:47:16.027188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.866 [2024-11-02 11:47:16.027213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.866 qpair failed and we were unable to recover it. 00:35:15.866 [2024-11-02 11:47:16.027388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.866 [2024-11-02 11:47:16.027415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.866 qpair failed and we were unable to recover it. 00:35:15.866 [2024-11-02 11:47:16.027585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.866 [2024-11-02 11:47:16.027611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.866 qpair failed and we were unable to recover it. 00:35:15.866 [2024-11-02 11:47:16.027752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.866 [2024-11-02 11:47:16.027777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.866 qpair failed and we were unable to recover it. 00:35:15.866 [2024-11-02 11:47:16.027897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.866 [2024-11-02 11:47:16.027922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.866 qpair failed and we were unable to recover it. 00:35:15.866 [2024-11-02 11:47:16.028060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.866 [2024-11-02 11:47:16.028085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.866 qpair failed and we were unable to recover it. 00:35:15.866 [2024-11-02 11:47:16.028200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.866 [2024-11-02 11:47:16.028226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.866 qpair failed and we were unable to recover it. 00:35:15.866 [2024-11-02 11:47:16.028380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.866 [2024-11-02 11:47:16.028407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.866 qpair failed and we were unable to recover it. 
00:35:15.866 [2024-11-02 11:47:16.028558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.866 [2024-11-02 11:47:16.028585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.866 qpair failed and we were unable to recover it. 00:35:15.866 [2024-11-02 11:47:16.028813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.866 [2024-11-02 11:47:16.028838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.866 qpair failed and we were unable to recover it. 00:35:15.866 [2024-11-02 11:47:16.028952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.866 [2024-11-02 11:47:16.028978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.866 qpair failed and we were unable to recover it. 00:35:15.866 [2024-11-02 11:47:16.029127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.866 [2024-11-02 11:47:16.029153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.866 qpair failed and we were unable to recover it. 00:35:15.866 [2024-11-02 11:47:16.029323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.866 [2024-11-02 11:47:16.029352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.866 qpair failed and we were unable to recover it. 00:35:15.866 [2024-11-02 11:47:16.029482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.866 [2024-11-02 11:47:16.029508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.866 qpair failed and we were unable to recover it. 00:35:15.866 [2024-11-02 11:47:16.029630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.866 [2024-11-02 11:47:16.029656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.866 qpair failed and we were unable to recover it. 00:35:15.866 [2024-11-02 11:47:16.029879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.866 [2024-11-02 11:47:16.029905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.866 qpair failed and we were unable to recover it. 00:35:15.866 [2024-11-02 11:47:16.030092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.866 [2024-11-02 11:47:16.030117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.866 qpair failed and we were unable to recover it. 00:35:15.866 [2024-11-02 11:47:16.030269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.866 [2024-11-02 11:47:16.030297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.866 qpair failed and we were unable to recover it. 
00:35:15.866 [2024-11-02 11:47:16.030420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.866 [2024-11-02 11:47:16.030445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.866 qpair failed and we were unable to recover it. 00:35:15.866 [2024-11-02 11:47:16.030571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.866 [2024-11-02 11:47:16.030597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.866 qpair failed and we were unable to recover it. 00:35:15.866 [2024-11-02 11:47:16.030746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.866 [2024-11-02 11:47:16.030772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.866 qpair failed and we were unable to recover it. 00:35:15.866 [2024-11-02 11:47:16.030916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.866 [2024-11-02 11:47:16.030942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.866 qpair failed and we were unable to recover it. 00:35:15.866 [2024-11-02 11:47:16.031066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.866 [2024-11-02 11:47:16.031092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.866 qpair failed and we were unable to recover it. 00:35:15.866 [2024-11-02 11:47:16.031237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.866 [2024-11-02 11:47:16.031280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.866 qpair failed and we were unable to recover it. 00:35:15.866 [2024-11-02 11:47:16.031432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.866 [2024-11-02 11:47:16.031457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.866 qpair failed and we were unable to recover it. 00:35:15.866 [2024-11-02 11:47:16.031604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.866 [2024-11-02 11:47:16.031629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.866 qpair failed and we were unable to recover it. 00:35:15.866 [2024-11-02 11:47:16.031811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.866 [2024-11-02 11:47:16.031836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.866 qpair failed and we were unable to recover it. 00:35:15.866 [2024-11-02 11:47:16.031947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.866 [2024-11-02 11:47:16.031972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.866 qpair failed and we were unable to recover it. 
00:35:15.866 [2024-11-02 11:47:16.032120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.866 [2024-11-02 11:47:16.032145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420
00:35:15.866 qpair failed and we were unable to recover it.
The same three-record failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats back-to-back, with only the timestamps advancing, from [2024-11-02 11:47:16.032120] through [2024-11-02 11:47:16.068322].
00:35:15.872 [2024-11-02 11:47:16.068296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.872 [2024-11-02 11:47:16.068322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420
00:35:15.872 qpair failed and we were unable to recover it.
00:35:15.872 [2024-11-02 11:47:16.068444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.872 [2024-11-02 11:47:16.068469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.872 qpair failed and we were unable to recover it. 00:35:15.872 [2024-11-02 11:47:16.068652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.872 [2024-11-02 11:47:16.068677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.872 qpair failed and we were unable to recover it. 00:35:15.872 [2024-11-02 11:47:16.068828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.872 [2024-11-02 11:47:16.068854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.872 qpair failed and we were unable to recover it. 00:35:15.872 [2024-11-02 11:47:16.069025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.872 [2024-11-02 11:47:16.069051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.872 qpair failed and we were unable to recover it. 00:35:15.872 [2024-11-02 11:47:16.069169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.872 [2024-11-02 11:47:16.069194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.872 qpair failed and we were unable to recover it. 00:35:15.872 [2024-11-02 11:47:16.069344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.872 [2024-11-02 11:47:16.069371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.872 qpair failed and we were unable to recover it. 00:35:15.872 [2024-11-02 11:47:16.069534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.872 [2024-11-02 11:47:16.069560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.872 qpair failed and we were unable to recover it. 00:35:15.872 [2024-11-02 11:47:16.069709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.872 [2024-11-02 11:47:16.069734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.872 qpair failed and we were unable to recover it. 00:35:15.872 [2024-11-02 11:47:16.069882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.872 [2024-11-02 11:47:16.069908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.872 qpair failed and we were unable to recover it. 00:35:15.872 [2024-11-02 11:47:16.070056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.872 [2024-11-02 11:47:16.070081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.872 qpair failed and we were unable to recover it. 
00:35:15.872 [2024-11-02 11:47:16.070254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.872 [2024-11-02 11:47:16.070285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.872 qpair failed and we were unable to recover it. 00:35:15.872 [2024-11-02 11:47:16.070413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.872 [2024-11-02 11:47:16.070438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.872 qpair failed and we were unable to recover it. 00:35:15.872 [2024-11-02 11:47:16.070586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.872 [2024-11-02 11:47:16.070611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.872 qpair failed and we were unable to recover it. 00:35:15.872 [2024-11-02 11:47:16.070732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.872 [2024-11-02 11:47:16.070758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.872 qpair failed and we were unable to recover it. 00:35:15.872 [2024-11-02 11:47:16.070906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.872 [2024-11-02 11:47:16.070931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.872 qpair failed and we were unable to recover it. 00:35:15.872 [2024-11-02 11:47:16.071088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.872 [2024-11-02 11:47:16.071113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.872 qpair failed and we were unable to recover it. 00:35:15.873 [2024-11-02 11:47:16.071291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.873 [2024-11-02 11:47:16.071317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.873 qpair failed and we were unable to recover it. 00:35:15.873 [2024-11-02 11:47:16.071467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.873 [2024-11-02 11:47:16.071493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.873 qpair failed and we were unable to recover it. 00:35:15.873 [2024-11-02 11:47:16.071651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.873 [2024-11-02 11:47:16.071681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.873 qpair failed and we were unable to recover it. 00:35:15.873 [2024-11-02 11:47:16.071835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.873 [2024-11-02 11:47:16.071860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.873 qpair failed and we were unable to recover it. 
00:35:15.873 [2024-11-02 11:47:16.072013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.873 [2024-11-02 11:47:16.072038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.873 qpair failed and we were unable to recover it. 00:35:15.873 [2024-11-02 11:47:16.072156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.873 [2024-11-02 11:47:16.072182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.873 qpair failed and we were unable to recover it. 00:35:15.873 [2024-11-02 11:47:16.072327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.873 [2024-11-02 11:47:16.072353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.873 qpair failed and we were unable to recover it. 00:35:15.873 [2024-11-02 11:47:16.072522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.873 [2024-11-02 11:47:16.072547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.873 qpair failed and we were unable to recover it. 00:35:15.873 [2024-11-02 11:47:16.072692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.873 [2024-11-02 11:47:16.072719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.873 qpair failed and we were unable to recover it. 00:35:15.873 [2024-11-02 11:47:16.072828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.873 [2024-11-02 11:47:16.072853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.873 qpair failed and we were unable to recover it. 00:35:15.873 [2024-11-02 11:47:16.073001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.873 [2024-11-02 11:47:16.073027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.873 qpair failed and we were unable to recover it. 00:35:15.873 [2024-11-02 11:47:16.073200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.873 [2024-11-02 11:47:16.073225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.873 qpair failed and we were unable to recover it. 00:35:15.873 [2024-11-02 11:47:16.073381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.873 [2024-11-02 11:47:16.073407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.873 qpair failed and we were unable to recover it. 00:35:15.873 [2024-11-02 11:47:16.073526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.873 [2024-11-02 11:47:16.073551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.873 qpair failed and we were unable to recover it. 
00:35:15.873 [2024-11-02 11:47:16.073713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.873 [2024-11-02 11:47:16.073739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.873 qpair failed and we were unable to recover it. 00:35:15.873 [2024-11-02 11:47:16.073883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.873 [2024-11-02 11:47:16.073909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.873 qpair failed and we were unable to recover it. 00:35:15.873 [2024-11-02 11:47:16.074055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.873 [2024-11-02 11:47:16.074080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.873 qpair failed and we were unable to recover it. 00:35:15.873 [2024-11-02 11:47:16.074225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.873 [2024-11-02 11:47:16.074251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.873 qpair failed and we were unable to recover it. 00:35:15.873 [2024-11-02 11:47:16.074375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.873 [2024-11-02 11:47:16.074400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.873 qpair failed and we were unable to recover it. 00:35:15.873 [2024-11-02 11:47:16.074573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.873 [2024-11-02 11:47:16.074599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.873 qpair failed and we were unable to recover it. 00:35:15.873 [2024-11-02 11:47:16.074756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.873 [2024-11-02 11:47:16.074782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.873 qpair failed and we were unable to recover it. 00:35:15.873 [2024-11-02 11:47:16.074901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.873 [2024-11-02 11:47:16.074926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.873 qpair failed and we were unable to recover it. 00:35:15.873 [2024-11-02 11:47:16.075103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.873 [2024-11-02 11:47:16.075128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.873 qpair failed and we were unable to recover it. 00:35:15.873 [2024-11-02 11:47:16.075279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.873 [2024-11-02 11:47:16.075305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.873 qpair failed and we were unable to recover it. 
00:35:15.873 [2024-11-02 11:47:16.075453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.873 [2024-11-02 11:47:16.075478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.873 qpair failed and we were unable to recover it. 00:35:15.873 [2024-11-02 11:47:16.075605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.873 [2024-11-02 11:47:16.075630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.873 qpair failed and we were unable to recover it. 00:35:15.873 [2024-11-02 11:47:16.075771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.873 [2024-11-02 11:47:16.075797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.873 qpair failed and we were unable to recover it. 00:35:15.873 [2024-11-02 11:47:16.075912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.873 [2024-11-02 11:47:16.075937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.873 qpair failed and we were unable to recover it. 00:35:15.873 [2024-11-02 11:47:16.076058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.873 [2024-11-02 11:47:16.076083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.873 qpair failed and we were unable to recover it. 00:35:15.873 [2024-11-02 11:47:16.076232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.873 [2024-11-02 11:47:16.076281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.873 qpair failed and we were unable to recover it. 00:35:15.873 [2024-11-02 11:47:16.076447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.873 [2024-11-02 11:47:16.076473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.873 qpair failed and we were unable to recover it. 00:35:15.873 [2024-11-02 11:47:16.076644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.873 [2024-11-02 11:47:16.076670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.873 qpair failed and we were unable to recover it. 00:35:15.873 [2024-11-02 11:47:16.076818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.873 [2024-11-02 11:47:16.076843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.873 qpair failed and we were unable to recover it. 00:35:15.873 [2024-11-02 11:47:16.076960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.873 [2024-11-02 11:47:16.076986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.873 qpair failed and we were unable to recover it. 
00:35:15.873 [2024-11-02 11:47:16.077135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.873 [2024-11-02 11:47:16.077161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.873 qpair failed and we were unable to recover it. 00:35:15.873 [2024-11-02 11:47:16.077303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.873 [2024-11-02 11:47:16.077329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.873 qpair failed and we were unable to recover it. 00:35:15.873 [2024-11-02 11:47:16.077502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.873 [2024-11-02 11:47:16.077527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.873 qpair failed and we were unable to recover it. 00:35:15.873 [2024-11-02 11:47:16.077684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.874 [2024-11-02 11:47:16.077710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.874 qpair failed and we were unable to recover it. 00:35:15.874 [2024-11-02 11:47:16.077854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.874 [2024-11-02 11:47:16.077879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.874 qpair failed and we were unable to recover it. 00:35:15.874 [2024-11-02 11:47:16.078022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.874 [2024-11-02 11:47:16.078047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.874 qpair failed and we were unable to recover it. 00:35:15.874 [2024-11-02 11:47:16.078205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.874 [2024-11-02 11:47:16.078231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.874 qpair failed and we were unable to recover it. 00:35:15.874 [2024-11-02 11:47:16.078351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.874 [2024-11-02 11:47:16.078377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.874 qpair failed and we were unable to recover it. 00:35:15.874 [2024-11-02 11:47:16.078525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.874 [2024-11-02 11:47:16.078550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.874 qpair failed and we were unable to recover it. 00:35:15.874 [2024-11-02 11:47:16.078675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.874 [2024-11-02 11:47:16.078700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.874 qpair failed and we were unable to recover it. 
00:35:15.874 [2024-11-02 11:47:16.078848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.874 [2024-11-02 11:47:16.078874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.874 qpair failed and we were unable to recover it. 00:35:15.874 [2024-11-02 11:47:16.079017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.874 [2024-11-02 11:47:16.079042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.874 qpair failed and we were unable to recover it. 00:35:15.874 [2024-11-02 11:47:16.079191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.874 [2024-11-02 11:47:16.079217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.874 qpair failed and we were unable to recover it. 00:35:15.874 [2024-11-02 11:47:16.079398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.874 [2024-11-02 11:47:16.079424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.874 qpair failed and we were unable to recover it. 00:35:15.874 [2024-11-02 11:47:16.079566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.874 [2024-11-02 11:47:16.079592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.874 qpair failed and we were unable to recover it. 00:35:15.874 [2024-11-02 11:47:16.079741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.874 [2024-11-02 11:47:16.079766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.874 qpair failed and we were unable to recover it. 00:35:15.874 [2024-11-02 11:47:16.079892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.874 [2024-11-02 11:47:16.079917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.874 qpair failed and we were unable to recover it. 00:35:15.874 [2024-11-02 11:47:16.080064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.874 [2024-11-02 11:47:16.080090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.874 qpair failed and we were unable to recover it. 00:35:15.874 [2024-11-02 11:47:16.080237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.874 [2024-11-02 11:47:16.080276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.874 qpair failed and we were unable to recover it. 00:35:15.874 [2024-11-02 11:47:16.080418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.874 [2024-11-02 11:47:16.080443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.874 qpair failed and we were unable to recover it. 
00:35:15.874 [2024-11-02 11:47:16.080619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.874 [2024-11-02 11:47:16.080645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.874 qpair failed and we were unable to recover it. 00:35:15.874 [2024-11-02 11:47:16.080820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.874 [2024-11-02 11:47:16.080845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.874 qpair failed and we were unable to recover it. 00:35:15.874 [2024-11-02 11:47:16.080961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.874 [2024-11-02 11:47:16.080991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.874 qpair failed and we were unable to recover it. 00:35:15.874 [2024-11-02 11:47:16.081134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.874 [2024-11-02 11:47:16.081159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.874 qpair failed and we were unable to recover it. 00:35:15.874 [2024-11-02 11:47:16.081279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.874 [2024-11-02 11:47:16.081305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.874 qpair failed and we were unable to recover it. 00:35:15.874 [2024-11-02 11:47:16.081457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.874 [2024-11-02 11:47:16.081483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.874 qpair failed and we were unable to recover it. 00:35:15.874 [2024-11-02 11:47:16.081654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.874 [2024-11-02 11:47:16.081679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.874 qpair failed and we were unable to recover it. 00:35:15.874 [2024-11-02 11:47:16.081823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.874 [2024-11-02 11:47:16.081849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.874 qpair failed and we were unable to recover it. 00:35:15.874 [2024-11-02 11:47:16.082009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.874 [2024-11-02 11:47:16.082034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.874 qpair failed and we were unable to recover it. 00:35:15.874 [2024-11-02 11:47:16.082181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.874 [2024-11-02 11:47:16.082206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.874 qpair failed and we were unable to recover it. 
00:35:15.874 [2024-11-02 11:47:16.082362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.874 [2024-11-02 11:47:16.082388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.874 qpair failed and we were unable to recover it. 00:35:15.874 [2024-11-02 11:47:16.082510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.874 [2024-11-02 11:47:16.082535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.874 qpair failed and we were unable to recover it. 00:35:15.874 [2024-11-02 11:47:16.082711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.874 [2024-11-02 11:47:16.082736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.874 qpair failed and we were unable to recover it. 00:35:15.874 [2024-11-02 11:47:16.082879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.874 [2024-11-02 11:47:16.082904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.874 qpair failed and we were unable to recover it. 00:35:15.874 [2024-11-02 11:47:16.083054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.874 [2024-11-02 11:47:16.083080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.874 qpair failed and we were unable to recover it. 00:35:15.874 [2024-11-02 11:47:16.083190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.874 [2024-11-02 11:47:16.083216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.874 qpair failed and we were unable to recover it. 00:35:15.874 [2024-11-02 11:47:16.083353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.874 [2024-11-02 11:47:16.083380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.874 qpair failed and we were unable to recover it. 00:35:15.874 [2024-11-02 11:47:16.083499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.874 [2024-11-02 11:47:16.083524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.874 qpair failed and we were unable to recover it. 00:35:15.874 [2024-11-02 11:47:16.083667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.874 [2024-11-02 11:47:16.083692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.874 qpair failed and we were unable to recover it. 00:35:15.874 [2024-11-02 11:47:16.083838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.874 [2024-11-02 11:47:16.083864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.874 qpair failed and we were unable to recover it. 
00:35:15.874 [2024-11-02 11:47:16.083974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.874 [2024-11-02 11:47:16.084000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.874 qpair failed and we were unable to recover it. 00:35:15.874 [2024-11-02 11:47:16.084115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.874 [2024-11-02 11:47:16.084140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.874 qpair failed and we were unable to recover it. 00:35:15.874 [2024-11-02 11:47:16.084282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.875 [2024-11-02 11:47:16.084308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.875 qpair failed and we were unable to recover it. 00:35:15.875 [2024-11-02 11:47:16.084455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.875 [2024-11-02 11:47:16.084480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.875 qpair failed and we were unable to recover it. 00:35:15.875 [2024-11-02 11:47:16.084630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.875 [2024-11-02 11:47:16.084655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.875 qpair failed and we were unable to recover it. 00:35:15.875 [2024-11-02 11:47:16.084801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.875 [2024-11-02 11:47:16.084826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.875 qpair failed and we were unable to recover it. 00:35:15.875 [2024-11-02 11:47:16.084938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.875 [2024-11-02 11:47:16.084964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.875 qpair failed and we were unable to recover it. 00:35:15.875 [2024-11-02 11:47:16.085084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.875 [2024-11-02 11:47:16.085110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.875 qpair failed and we were unable to recover it. 00:35:15.875 [2024-11-02 11:47:16.085223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.875 [2024-11-02 11:47:16.085248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.875 qpair failed and we were unable to recover it. 00:35:15.875 [2024-11-02 11:47:16.085379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.875 [2024-11-02 11:47:16.085404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.875 qpair failed and we were unable to recover it. 
00:35:15.875 [2024-11-02 11:47:16.085533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.875 [2024-11-02 11:47:16.085558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.875 qpair failed and we were unable to recover it. 00:35:15.875 [2024-11-02 11:47:16.085712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.875 [2024-11-02 11:47:16.085737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.875 qpair failed and we were unable to recover it. 00:35:15.875 [2024-11-02 11:47:16.085890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.875 [2024-11-02 11:47:16.085915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.875 qpair failed and we were unable to recover it. 00:35:15.875 [2024-11-02 11:47:16.086072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.875 [2024-11-02 11:47:16.086097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.875 qpair failed and we were unable to recover it. 00:35:15.875 [2024-11-02 11:47:16.086261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.875 [2024-11-02 11:47:16.086287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.875 qpair failed and we were unable to recover it. 00:35:15.875 [2024-11-02 11:47:16.086434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.875 [2024-11-02 11:47:16.086460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.875 qpair failed and we were unable to recover it. 00:35:15.875 [2024-11-02 11:47:16.086614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.875 [2024-11-02 11:47:16.086639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.875 qpair failed and we were unable to recover it. 00:35:15.875 [2024-11-02 11:47:16.086786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.875 [2024-11-02 11:47:16.086811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.875 qpair failed and we were unable to recover it. 00:35:15.875 [2024-11-02 11:47:16.086959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.875 [2024-11-02 11:47:16.086985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.875 qpair failed and we were unable to recover it. 00:35:15.875 [2024-11-02 11:47:16.087131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.875 [2024-11-02 11:47:16.087156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.875 qpair failed and we were unable to recover it. 
00:35:15.875 [2024-11-02 11:47:16.087316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.875 [2024-11-02 11:47:16.087342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.875 qpair failed and we were unable to recover it. 00:35:15.875 [2024-11-02 11:47:16.087476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.875 [2024-11-02 11:47:16.087502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.875 qpair failed and we were unable to recover it. 00:35:15.875 [2024-11-02 11:47:16.087620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.875 [2024-11-02 11:47:16.087647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.875 qpair failed and we were unable to recover it. 00:35:15.875 [2024-11-02 11:47:16.087766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.875 [2024-11-02 11:47:16.087792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.875 qpair failed and we were unable to recover it. 00:35:15.875 [2024-11-02 11:47:16.087966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.875 [2024-11-02 11:47:16.087992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.875 qpair failed and we were unable to recover it. 00:35:15.875 [2024-11-02 11:47:16.088141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.875 [2024-11-02 11:47:16.088167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.875 qpair failed and we were unable to recover it. 00:35:15.875 [2024-11-02 11:47:16.088318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.875 [2024-11-02 11:47:16.088344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.875 qpair failed and we were unable to recover it. 00:35:15.875 [2024-11-02 11:47:16.088496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.875 [2024-11-02 11:47:16.088522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.875 qpair failed and we were unable to recover it. 00:35:15.875 [2024-11-02 11:47:16.088664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.875 [2024-11-02 11:47:16.088689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.875 qpair failed and we were unable to recover it. 00:35:15.875 [2024-11-02 11:47:16.088853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.875 [2024-11-02 11:47:16.088879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.875 qpair failed and we were unable to recover it. 
00:35:15.875 [2024-11-02 11:47:16.088998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.875 [2024-11-02 11:47:16.089023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.875 qpair failed and we were unable to recover it. 00:35:15.875 [2024-11-02 11:47:16.089168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.875 [2024-11-02 11:47:16.089193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.875 qpair failed and we were unable to recover it. 00:35:15.875 [2024-11-02 11:47:16.089350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.875 [2024-11-02 11:47:16.089376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.875 qpair failed and we were unable to recover it. 00:35:15.875 [2024-11-02 11:47:16.089527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.875 [2024-11-02 11:47:16.089552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.875 qpair failed and we were unable to recover it. 00:35:15.875 [2024-11-02 11:47:16.089698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.875 [2024-11-02 11:47:16.089723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.875 qpair failed and we were unable to recover it. 00:35:15.875 [2024-11-02 11:47:16.089864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.875 [2024-11-02 11:47:16.089889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.875 qpair failed and we were unable to recover it. 00:35:15.875 [2024-11-02 11:47:16.090038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.875 [2024-11-02 11:47:16.090063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.875 qpair failed and we were unable to recover it. 00:35:15.875 [2024-11-02 11:47:16.090219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.875 [2024-11-02 11:47:16.090245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.875 qpair failed and we were unable to recover it. 00:35:15.875 [2024-11-02 11:47:16.090402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.875 [2024-11-02 11:47:16.090428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.875 qpair failed and we were unable to recover it. 00:35:15.875 [2024-11-02 11:47:16.090574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.875 [2024-11-02 11:47:16.090600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.875 qpair failed and we were unable to recover it. 
00:35:15.875 [2024-11-02 11:47:16.090727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.875 [2024-11-02 11:47:16.090753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420
00:35:15.875 qpair failed and we were unable to recover it.
[the same three-line error repeats back-to-back for every reconnection attempt from 11:47:16.090923 through 11:47:16.129355, each time with errno = 111 against tqpair=0x1ddc690, addr=10.0.0.2, port=4420]
00:35:15.881 [2024-11-02 11:47:16.129511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.881 [2024-11-02 11:47:16.129552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420
00:35:15.881 qpair failed and we were unable to recover it.
00:35:15.881 [2024-11-02 11:47:16.129743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.881 [2024-11-02 11:47:16.129771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.881 qpair failed and we were unable to recover it. 00:35:15.881 [2024-11-02 11:47:16.129929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.881 [2024-11-02 11:47:16.129957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.881 qpair failed and we were unable to recover it. 00:35:15.881 [2024-11-02 11:47:16.130146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.881 [2024-11-02 11:47:16.130171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.881 qpair failed and we were unable to recover it. 00:35:15.881 [2024-11-02 11:47:16.130338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.881 [2024-11-02 11:47:16.130367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.881 qpair failed and we were unable to recover it. 00:35:15.881 [2024-11-02 11:47:16.130542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.881 [2024-11-02 11:47:16.130570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.881 qpair failed and we were unable to recover it. 00:35:15.881 [2024-11-02 11:47:16.130718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.881 [2024-11-02 11:47:16.130746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.881 qpair failed and we were unable to recover it. 00:35:15.881 [2024-11-02 11:47:16.130941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.881 [2024-11-02 11:47:16.130967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.881 qpair failed and we were unable to recover it. 00:35:15.881 [2024-11-02 11:47:16.131134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.881 [2024-11-02 11:47:16.131162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.881 qpair failed and we were unable to recover it. 00:35:15.881 [2024-11-02 11:47:16.131326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.881 [2024-11-02 11:47:16.131354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.881 qpair failed and we were unable to recover it. 00:35:15.881 [2024-11-02 11:47:16.131516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.881 [2024-11-02 11:47:16.131545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.881 qpair failed and we were unable to recover it. 
00:35:15.881 [2024-11-02 11:47:16.131711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.881 [2024-11-02 11:47:16.131736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.881 qpair failed and we were unable to recover it. 00:35:15.881 [2024-11-02 11:47:16.131887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.881 [2024-11-02 11:47:16.131912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.881 qpair failed and we were unable to recover it. 00:35:15.881 [2024-11-02 11:47:16.132058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.881 [2024-11-02 11:47:16.132086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.881 qpair failed and we were unable to recover it. 00:35:15.881 [2024-11-02 11:47:16.132284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.881 [2024-11-02 11:47:16.132310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.881 qpair failed and we were unable to recover it. 00:35:15.881 [2024-11-02 11:47:16.132465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.881 [2024-11-02 11:47:16.132491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.881 qpair failed and we were unable to recover it. 00:35:15.881 [2024-11-02 11:47:16.132638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.881 [2024-11-02 11:47:16.132663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.881 qpair failed and we were unable to recover it. 00:35:15.881 [2024-11-02 11:47:16.132813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.881 [2024-11-02 11:47:16.132838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.881 qpair failed and we were unable to recover it. 00:35:15.881 [2024-11-02 11:47:16.132979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.881 [2024-11-02 11:47:16.133012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.881 qpair failed and we were unable to recover it. 00:35:15.882 [2024-11-02 11:47:16.133211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.882 [2024-11-02 11:47:16.133237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.882 qpair failed and we were unable to recover it. 00:35:15.882 [2024-11-02 11:47:16.133392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.882 [2024-11-02 11:47:16.133418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.882 qpair failed and we were unable to recover it. 
00:35:15.882 [2024-11-02 11:47:16.133570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.882 [2024-11-02 11:47:16.133612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.882 qpair failed and we were unable to recover it. 00:35:15.882 [2024-11-02 11:47:16.133779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.882 [2024-11-02 11:47:16.133807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.882 qpair failed and we were unable to recover it. 00:35:15.882 [2024-11-02 11:47:16.133967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.882 [2024-11-02 11:47:16.133992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.882 qpair failed and we were unable to recover it. 00:35:15.882 [2024-11-02 11:47:16.134131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.882 [2024-11-02 11:47:16.134173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.882 qpair failed and we were unable to recover it. 00:35:15.882 [2024-11-02 11:47:16.134365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.882 [2024-11-02 11:47:16.134394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.882 qpair failed and we were unable to recover it. 00:35:15.882 [2024-11-02 11:47:16.134561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.882 [2024-11-02 11:47:16.134590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.882 qpair failed and we were unable to recover it. 00:35:15.882 [2024-11-02 11:47:16.134742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.882 [2024-11-02 11:47:16.134768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.882 qpair failed and we were unable to recover it. 00:35:15.882 [2024-11-02 11:47:16.134911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.882 [2024-11-02 11:47:16.134936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.882 qpair failed and we were unable to recover it. 00:35:15.882 [2024-11-02 11:47:16.135106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.882 [2024-11-02 11:47:16.135148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.882 qpair failed and we were unable to recover it. 00:35:15.882 [2024-11-02 11:47:16.135336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.882 [2024-11-02 11:47:16.135365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.882 qpair failed and we were unable to recover it. 
00:35:15.882 [2024-11-02 11:47:16.135559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.882 [2024-11-02 11:47:16.135584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.882 qpair failed and we were unable to recover it. 00:35:15.882 [2024-11-02 11:47:16.135749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.882 [2024-11-02 11:47:16.135777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.882 qpair failed and we were unable to recover it. 00:35:15.882 [2024-11-02 11:47:16.135950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.882 [2024-11-02 11:47:16.135976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.882 qpair failed and we were unable to recover it. 00:35:15.882 [2024-11-02 11:47:16.136128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.882 [2024-11-02 11:47:16.136153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.882 qpair failed and we were unable to recover it. 00:35:15.882 [2024-11-02 11:47:16.136326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.882 [2024-11-02 11:47:16.136352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.882 qpair failed and we were unable to recover it. 00:35:15.882 [2024-11-02 11:47:16.136525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.882 [2024-11-02 11:47:16.136554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.882 qpair failed and we were unable to recover it. 00:35:15.882 [2024-11-02 11:47:16.136754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.882 [2024-11-02 11:47:16.136783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.882 qpair failed and we were unable to recover it. 00:35:15.882 [2024-11-02 11:47:16.137097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.882 [2024-11-02 11:47:16.137157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.882 qpair failed and we were unable to recover it. 00:35:15.882 [2024-11-02 11:47:16.137330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.882 [2024-11-02 11:47:16.137356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.882 qpair failed and we were unable to recover it. 00:35:15.882 [2024-11-02 11:47:16.137480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.882 [2024-11-02 11:47:16.137523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.882 qpair failed and we were unable to recover it. 
00:35:15.882 [2024-11-02 11:47:16.137717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.882 [2024-11-02 11:47:16.137745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.882 qpair failed and we were unable to recover it. 00:35:15.882 [2024-11-02 11:47:16.137932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.882 [2024-11-02 11:47:16.137960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.882 qpair failed and we were unable to recover it. 00:35:15.882 [2024-11-02 11:47:16.138129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.882 [2024-11-02 11:47:16.138154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.882 qpair failed and we were unable to recover it. 00:35:15.882 [2024-11-02 11:47:16.138314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.882 [2024-11-02 11:47:16.138356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.882 qpair failed and we were unable to recover it. 00:35:15.882 [2024-11-02 11:47:16.138527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.882 [2024-11-02 11:47:16.138560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.882 qpair failed and we were unable to recover it. 00:35:15.882 [2024-11-02 11:47:16.138723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.882 [2024-11-02 11:47:16.138751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.882 qpair failed and we were unable to recover it. 00:35:15.882 [2024-11-02 11:47:16.138891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.882 [2024-11-02 11:47:16.138917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.882 qpair failed and we were unable to recover it. 00:35:15.882 [2024-11-02 11:47:16.139066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.882 [2024-11-02 11:47:16.139091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.882 qpair failed and we were unable to recover it. 00:35:15.882 [2024-11-02 11:47:16.139286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.882 [2024-11-02 11:47:16.139315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.882 qpair failed and we were unable to recover it. 00:35:15.882 [2024-11-02 11:47:16.139477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.882 [2024-11-02 11:47:16.139505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.882 qpair failed and we were unable to recover it. 
00:35:15.882 [2024-11-02 11:47:16.139674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.882 [2024-11-02 11:47:16.139699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.882 qpair failed and we were unable to recover it. 00:35:15.882 [2024-11-02 11:47:16.139861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.882 [2024-11-02 11:47:16.139889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.882 qpair failed and we were unable to recover it. 00:35:15.882 [2024-11-02 11:47:16.140078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.882 [2024-11-02 11:47:16.140106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.882 qpair failed and we were unable to recover it. 00:35:15.882 [2024-11-02 11:47:16.140326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.883 [2024-11-02 11:47:16.140355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.883 qpair failed and we were unable to recover it. 00:35:15.883 [2024-11-02 11:47:16.140494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.883 [2024-11-02 11:47:16.140520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.883 qpair failed and we were unable to recover it. 00:35:15.883 [2024-11-02 11:47:16.140679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.883 [2024-11-02 11:47:16.140723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.883 qpair failed and we were unable to recover it. 00:35:15.883 [2024-11-02 11:47:16.140889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.883 [2024-11-02 11:47:16.140917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.883 qpair failed and we were unable to recover it. 00:35:15.883 [2024-11-02 11:47:16.141104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.883 [2024-11-02 11:47:16.141132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.883 qpair failed and we were unable to recover it. 00:35:15.883 [2024-11-02 11:47:16.141303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.883 [2024-11-02 11:47:16.141329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.883 qpair failed and we were unable to recover it. 00:35:15.883 [2024-11-02 11:47:16.141495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.883 [2024-11-02 11:47:16.141523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.883 qpair failed and we were unable to recover it. 
00:35:15.883 [2024-11-02 11:47:16.141649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.883 [2024-11-02 11:47:16.141677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.883 qpair failed and we were unable to recover it. 00:35:15.883 [2024-11-02 11:47:16.141859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.883 [2024-11-02 11:47:16.141888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.883 qpair failed and we were unable to recover it. 00:35:15.883 [2024-11-02 11:47:16.142056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.883 [2024-11-02 11:47:16.142081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.883 qpair failed and we were unable to recover it. 00:35:15.883 [2024-11-02 11:47:16.142242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.883 [2024-11-02 11:47:16.142276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.883 qpair failed and we were unable to recover it. 00:35:15.883 [2024-11-02 11:47:16.142468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.883 [2024-11-02 11:47:16.142493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.883 qpair failed and we were unable to recover it. 00:35:15.883 [2024-11-02 11:47:16.142638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.883 [2024-11-02 11:47:16.142663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.883 qpair failed and we were unable to recover it. 00:35:15.883 [2024-11-02 11:47:16.142817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.883 [2024-11-02 11:47:16.142842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.883 qpair failed and we were unable to recover it. 00:35:15.883 [2024-11-02 11:47:16.142960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.883 [2024-11-02 11:47:16.143004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.883 qpair failed and we were unable to recover it. 00:35:15.883 [2024-11-02 11:47:16.143201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.883 [2024-11-02 11:47:16.143227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.883 qpair failed and we were unable to recover it. 00:35:15.883 [2024-11-02 11:47:16.143386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.883 [2024-11-02 11:47:16.143412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.883 qpair failed and we were unable to recover it. 
00:35:15.883 [2024-11-02 11:47:16.143529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.883 [2024-11-02 11:47:16.143555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.883 qpair failed and we were unable to recover it. 00:35:15.883 [2024-11-02 11:47:16.143681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.883 [2024-11-02 11:47:16.143707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.883 qpair failed and we were unable to recover it. 00:35:15.883 [2024-11-02 11:47:16.143900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.883 [2024-11-02 11:47:16.143926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.883 qpair failed and we were unable to recover it. 00:35:15.883 [2024-11-02 11:47:16.144078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.883 [2024-11-02 11:47:16.144104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.883 qpair failed and we were unable to recover it. 00:35:15.883 [2024-11-02 11:47:16.144248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.883 [2024-11-02 11:47:16.144286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.883 qpair failed and we were unable to recover it. 00:35:15.883 [2024-11-02 11:47:16.144429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.883 [2024-11-02 11:47:16.144458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.883 qpair failed and we were unable to recover it. 00:35:15.883 [2024-11-02 11:47:16.144597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.883 [2024-11-02 11:47:16.144625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.883 qpair failed and we were unable to recover it. 00:35:15.883 [2024-11-02 11:47:16.144810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.883 [2024-11-02 11:47:16.144838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.883 qpair failed and we were unable to recover it. 00:35:15.883 [2024-11-02 11:47:16.144979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.883 [2024-11-02 11:47:16.145004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.883 qpair failed and we were unable to recover it. 00:35:15.883 [2024-11-02 11:47:16.145178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.883 [2024-11-02 11:47:16.145202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.883 qpair failed and we were unable to recover it. 
00:35:15.883 [2024-11-02 11:47:16.145363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.883 [2024-11-02 11:47:16.145392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.883 qpair failed and we were unable to recover it. 00:35:15.883 [2024-11-02 11:47:16.145566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.883 [2024-11-02 11:47:16.145592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.883 qpair failed and we were unable to recover it. 00:35:15.883 [2024-11-02 11:47:16.145708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.883 [2024-11-02 11:47:16.145733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.883 qpair failed and we were unable to recover it. 00:35:15.883 [2024-11-02 11:47:16.145854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.883 [2024-11-02 11:47:16.145879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.883 qpair failed and we were unable to recover it. 00:35:15.883 [2024-11-02 11:47:16.146038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.883 [2024-11-02 11:47:16.146066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.883 qpair failed and we were unable to recover it. 00:35:15.883 [2024-11-02 11:47:16.146234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.883 [2024-11-02 11:47:16.146266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.883 qpair failed and we were unable to recover it. 00:35:15.883 [2024-11-02 11:47:16.146414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.883 [2024-11-02 11:47:16.146440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.883 qpair failed and we were unable to recover it. 00:35:15.883 [2024-11-02 11:47:16.146560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.883 [2024-11-02 11:47:16.146604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.883 qpair failed and we were unable to recover it. 00:35:15.883 [2024-11-02 11:47:16.146747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.883 [2024-11-02 11:47:16.146776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.883 qpair failed and we were unable to recover it. 00:35:15.883 [2024-11-02 11:47:16.146910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.883 [2024-11-02 11:47:16.146938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.883 qpair failed and we were unable to recover it. 
00:35:15.883 [2024-11-02 11:47:16.147073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.883 [2024-11-02 11:47:16.147099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.883 qpair failed and we were unable to recover it. 00:35:15.883 [2024-11-02 11:47:16.147212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.883 [2024-11-02 11:47:16.147237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.883 qpair failed and we were unable to recover it. 00:35:15.883 [2024-11-02 11:47:16.147420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.884 [2024-11-02 11:47:16.147449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.884 qpair failed and we were unable to recover it. 00:35:15.884 [2024-11-02 11:47:16.147639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.884 [2024-11-02 11:47:16.147668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.884 qpair failed and we were unable to recover it. 00:35:15.884 [2024-11-02 11:47:16.147834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.884 [2024-11-02 11:47:16.147859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.884 qpair failed and we were unable to recover it. 00:35:15.884 [2024-11-02 11:47:16.147997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.884 [2024-11-02 11:47:16.148040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.884 qpair failed and we were unable to recover it. 00:35:15.884 [2024-11-02 11:47:16.148212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.884 [2024-11-02 11:47:16.148240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.884 qpair failed and we were unable to recover it. 00:35:15.884 [2024-11-02 11:47:16.148447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.884 [2024-11-02 11:47:16.148475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.884 qpair failed and we were unable to recover it. 00:35:15.884 [2024-11-02 11:47:16.148641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.884 [2024-11-02 11:47:16.148667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.884 qpair failed and we were unable to recover it. 00:35:15.884 [2024-11-02 11:47:16.148844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.884 [2024-11-02 11:47:16.148885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.884 qpair failed and we were unable to recover it. 
00:35:15.884 [2024-11-02 11:47:16.149044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.884 [2024-11-02 11:47:16.149072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.884 qpair failed and we were unable to recover it. 00:35:15.884 [2024-11-02 11:47:16.149233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.884 [2024-11-02 11:47:16.149271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.884 qpair failed and we were unable to recover it. 00:35:15.884 [2024-11-02 11:47:16.149409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.884 [2024-11-02 11:47:16.149435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.884 qpair failed and we were unable to recover it. 00:35:15.884 [2024-11-02 11:47:16.149576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.884 [2024-11-02 11:47:16.149601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.884 qpair failed and we were unable to recover it. 00:35:15.884 [2024-11-02 11:47:16.149738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.884 [2024-11-02 11:47:16.149766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.884 qpair failed and we were unable to recover it. 00:35:15.884 [2024-11-02 11:47:16.149920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.884 [2024-11-02 11:47:16.149948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.884 qpair failed and we were unable to recover it. 00:35:15.884 [2024-11-02 11:47:16.150084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.884 [2024-11-02 11:47:16.150110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.884 qpair failed and we were unable to recover it. 00:35:15.884 [2024-11-02 11:47:16.150268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.884 [2024-11-02 11:47:16.150295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.884 qpair failed and we were unable to recover it. 00:35:15.884 [2024-11-02 11:47:16.150422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.884 [2024-11-02 11:47:16.150449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.884 qpair failed and we were unable to recover it. 00:35:15.884 [2024-11-02 11:47:16.150656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.884 [2024-11-02 11:47:16.150682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.884 qpair failed and we were unable to recover it. 
00:35:15.884 [2024-11-02 11:47:16.150796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.884 [2024-11-02 11:47:16.150822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.884 qpair failed and we were unable to recover it. 00:35:15.884 [2024-11-02 11:47:16.151012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.884 [2024-11-02 11:47:16.151041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.884 qpair failed and we were unable to recover it. 00:35:15.884 [2024-11-02 11:47:16.151178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.884 [2024-11-02 11:47:16.151212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.884 qpair failed and we were unable to recover it. 00:35:15.884 [2024-11-02 11:47:16.151406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.884 [2024-11-02 11:47:16.151435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.884 qpair failed and we were unable to recover it. 00:35:15.884 [2024-11-02 11:47:16.151619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.884 [2024-11-02 11:47:16.151644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.884 qpair failed and we were unable to recover it. 00:35:15.884 [2024-11-02 11:47:16.151765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.884 [2024-11-02 11:47:16.151791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.884 qpair failed and we were unable to recover it. 00:35:15.884 [2024-11-02 11:47:16.151942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.884 [2024-11-02 11:47:16.151967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.884 qpair failed and we were unable to recover it. 00:35:15.884 [2024-11-02 11:47:16.152140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.884 [2024-11-02 11:47:16.152169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.884 qpair failed and we were unable to recover it. 00:35:15.884 [2024-11-02 11:47:16.152304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.884 [2024-11-02 11:47:16.152330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.884 qpair failed and we were unable to recover it. 00:35:15.884 [2024-11-02 11:47:16.152481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.884 [2024-11-02 11:47:16.152507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.884 qpair failed and we were unable to recover it. 
00:35:15.884 [2024-11-02 11:47:16.152624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.884 [2024-11-02 11:47:16.152649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.884 qpair failed and we were unable to recover it. 00:35:15.884 [2024-11-02 11:47:16.152793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.884 [2024-11-02 11:47:16.152821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.884 qpair failed and we were unable to recover it. 00:35:15.884 [2024-11-02 11:47:16.152980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.884 [2024-11-02 11:47:16.153006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.884 qpair failed and we were unable to recover it. 00:35:15.884 [2024-11-02 11:47:16.153152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.884 [2024-11-02 11:47:16.153193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.884 qpair failed and we were unable to recover it. 00:35:15.884 [2024-11-02 11:47:16.153385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.884 [2024-11-02 11:47:16.153414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.884 qpair failed and we were unable to recover it. 00:35:15.884 [2024-11-02 11:47:16.153551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.884 [2024-11-02 11:47:16.153580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.884 qpair failed and we were unable to recover it. 00:35:15.884 [2024-11-02 11:47:16.153751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.884 [2024-11-02 11:47:16.153777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.884 qpair failed and we were unable to recover it. 00:35:15.884 [2024-11-02 11:47:16.153969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.884 [2024-11-02 11:47:16.153997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.884 qpair failed and we were unable to recover it. 00:35:15.884 [2024-11-02 11:47:16.154144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.884 [2024-11-02 11:47:16.154170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.884 qpair failed and we were unable to recover it. 00:35:15.884 [2024-11-02 11:47:16.154311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.884 [2024-11-02 11:47:16.154337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.884 qpair failed and we were unable to recover it. 
00:35:15.884 [2024-11-02 11:47:16.154513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.884 [2024-11-02 11:47:16.154538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420
00:35:15.884 qpair failed and we were unable to recover it.
00:35:15.884-00:35:15.890 [2024-11-02 11:47:16.154 - 11:47:16.193] the same posix_sock_create / nvme_tcp_qpair_connect_sock error triple repeats continuously for tqpair=0x1ddc690 (addr=10.0.0.2, port=4420, errno = 111); every connection attempt in this window failed and the qpair could not be recovered. The remaining repetitions of this triple are identical apart from their timestamps.
00:35:15.890 [2024-11-02 11:47:16.194065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.890 [2024-11-02 11:47:16.194093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.890 qpair failed and we were unable to recover it. 00:35:15.890 [2024-11-02 11:47:16.194254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.890 [2024-11-02 11:47:16.194287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.890 qpair failed and we were unable to recover it. 00:35:15.890 [2024-11-02 11:47:16.194458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.890 [2024-11-02 11:47:16.194484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.890 qpair failed and we were unable to recover it. 00:35:15.890 [2024-11-02 11:47:16.194656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.890 [2024-11-02 11:47:16.194685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.890 qpair failed and we were unable to recover it. 00:35:15.890 [2024-11-02 11:47:16.194852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.890 [2024-11-02 11:47:16.194880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.890 qpair failed and we were unable to recover it. 00:35:15.890 [2024-11-02 11:47:16.195111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.890 [2024-11-02 11:47:16.195139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.890 qpair failed and we were unable to recover it. 00:35:15.890 [2024-11-02 11:47:16.195334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.890 [2024-11-02 11:47:16.195360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.890 qpair failed and we were unable to recover it. 00:35:15.890 [2024-11-02 11:47:16.195476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.890 [2024-11-02 11:47:16.195518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.890 qpair failed and we were unable to recover it. 00:35:15.890 [2024-11-02 11:47:16.195721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.890 [2024-11-02 11:47:16.195746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.890 qpair failed and we were unable to recover it. 00:35:15.890 [2024-11-02 11:47:16.195902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.890 [2024-11-02 11:47:16.195945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.890 qpair failed and we were unable to recover it. 
00:35:15.890 [2024-11-02 11:47:16.196137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.890 [2024-11-02 11:47:16.196162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.890 qpair failed and we were unable to recover it. 00:35:15.890 [2024-11-02 11:47:16.196325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.890 [2024-11-02 11:47:16.196354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.890 qpair failed and we were unable to recover it. 00:35:15.890 [2024-11-02 11:47:16.196527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.890 [2024-11-02 11:47:16.196552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.890 qpair failed and we were unable to recover it. 00:35:15.890 [2024-11-02 11:47:16.196696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.890 [2024-11-02 11:47:16.196738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.890 qpair failed and we were unable to recover it. 00:35:15.890 [2024-11-02 11:47:16.196928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.890 [2024-11-02 11:47:16.196954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.890 qpair failed and we were unable to recover it. 00:35:15.890 [2024-11-02 11:47:16.197141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.890 [2024-11-02 11:47:16.197169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.890 qpair failed and we were unable to recover it. 00:35:15.890 [2024-11-02 11:47:16.197356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.890 [2024-11-02 11:47:16.197389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.890 qpair failed and we were unable to recover it. 00:35:15.890 [2024-11-02 11:47:16.197550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.890 [2024-11-02 11:47:16.197580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.890 qpair failed and we were unable to recover it. 00:35:15.890 [2024-11-02 11:47:16.197749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.890 [2024-11-02 11:47:16.197774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.890 qpair failed and we were unable to recover it. 00:35:15.890 [2024-11-02 11:47:16.197938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.890 [2024-11-02 11:47:16.197966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.890 qpair failed and we were unable to recover it. 
00:35:15.890 [2024-11-02 11:47:16.198139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.890 [2024-11-02 11:47:16.198165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.890 qpair failed and we were unable to recover it. 00:35:15.890 [2024-11-02 11:47:16.198335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.890 [2024-11-02 11:47:16.198361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.890 qpair failed and we were unable to recover it. 00:35:15.890 [2024-11-02 11:47:16.198507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.891 [2024-11-02 11:47:16.198532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.891 qpair failed and we were unable to recover it. 00:35:15.891 [2024-11-02 11:47:16.198653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.891 [2024-11-02 11:47:16.198695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.891 qpair failed and we were unable to recover it. 00:35:15.891 [2024-11-02 11:47:16.198884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.891 [2024-11-02 11:47:16.198912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.891 qpair failed and we were unable to recover it. 00:35:15.891 [2024-11-02 11:47:16.199077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.891 [2024-11-02 11:47:16.199105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.891 qpair failed and we were unable to recover it. 00:35:15.891 [2024-11-02 11:47:16.199276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.891 [2024-11-02 11:47:16.199302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.891 qpair failed and we were unable to recover it. 00:35:15.891 [2024-11-02 11:47:16.199448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.891 [2024-11-02 11:47:16.199473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.891 qpair failed and we were unable to recover it. 00:35:15.891 [2024-11-02 11:47:16.199639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.891 [2024-11-02 11:47:16.199667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.891 qpair failed and we were unable to recover it. 00:35:15.891 [2024-11-02 11:47:16.199856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.891 [2024-11-02 11:47:16.199884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.891 qpair failed and we were unable to recover it. 
00:35:15.891 [2024-11-02 11:47:16.200060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.891 [2024-11-02 11:47:16.200086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.891 qpair failed and we were unable to recover it. 00:35:15.891 [2024-11-02 11:47:16.200279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.891 [2024-11-02 11:47:16.200309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.891 qpair failed and we were unable to recover it. 00:35:15.891 [2024-11-02 11:47:16.200469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.891 [2024-11-02 11:47:16.200498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.891 qpair failed and we were unable to recover it. 00:35:15.891 [2024-11-02 11:47:16.200683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.891 [2024-11-02 11:47:16.200711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.891 qpair failed and we were unable to recover it. 00:35:15.891 [2024-11-02 11:47:16.200861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.891 [2024-11-02 11:47:16.200887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.891 qpair failed and we were unable to recover it. 00:35:15.891 [2024-11-02 11:47:16.201038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.891 [2024-11-02 11:47:16.201064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.891 qpair failed and we were unable to recover it. 00:35:15.891 [2024-11-02 11:47:16.201218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.891 [2024-11-02 11:47:16.201243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.891 qpair failed and we were unable to recover it. 00:35:15.891 [2024-11-02 11:47:16.201421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.891 [2024-11-02 11:47:16.201450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.891 qpair failed and we were unable to recover it. 00:35:15.891 [2024-11-02 11:47:16.201587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.891 [2024-11-02 11:47:16.201612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.891 qpair failed and we were unable to recover it. 00:35:15.891 [2024-11-02 11:47:16.201741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.891 [2024-11-02 11:47:16.201767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.891 qpair failed and we were unable to recover it. 
00:35:15.891 [2024-11-02 11:47:16.201912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.891 [2024-11-02 11:47:16.201938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.891 qpair failed and we were unable to recover it. 00:35:15.891 [2024-11-02 11:47:16.202119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.891 [2024-11-02 11:47:16.202144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.891 qpair failed and we were unable to recover it. 00:35:15.891 [2024-11-02 11:47:16.202302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.891 [2024-11-02 11:47:16.202328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.891 qpair failed and we were unable to recover it. 00:35:15.891 [2024-11-02 11:47:16.202487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.891 [2024-11-02 11:47:16.202520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.891 qpair failed and we were unable to recover it. 00:35:15.891 [2024-11-02 11:47:16.202675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.891 [2024-11-02 11:47:16.202704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.891 qpair failed and we were unable to recover it. 00:35:15.891 [2024-11-02 11:47:16.202880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.891 [2024-11-02 11:47:16.202922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.891 qpair failed and we were unable to recover it. 00:35:15.891 [2024-11-02 11:47:16.203085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.891 [2024-11-02 11:47:16.203110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.891 qpair failed and we were unable to recover it. 00:35:15.891 [2024-11-02 11:47:16.203273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.891 [2024-11-02 11:47:16.203303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.891 qpair failed and we were unable to recover it. 00:35:15.891 [2024-11-02 11:47:16.203462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.891 [2024-11-02 11:47:16.203489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.891 qpair failed and we were unable to recover it. 00:35:15.891 [2024-11-02 11:47:16.203682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.891 [2024-11-02 11:47:16.203708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.891 qpair failed and we were unable to recover it. 
00:35:15.891 [2024-11-02 11:47:16.203827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.891 [2024-11-02 11:47:16.203853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.891 qpair failed and we were unable to recover it. 00:35:15.891 [2024-11-02 11:47:16.203996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.891 [2024-11-02 11:47:16.204021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.891 qpair failed and we were unable to recover it. 00:35:15.891 [2024-11-02 11:47:16.204170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.892 [2024-11-02 11:47:16.204196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.892 qpair failed and we were unable to recover it. 00:35:15.892 [2024-11-02 11:47:16.204378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.892 [2024-11-02 11:47:16.204405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.892 qpair failed and we were unable to recover it. 00:35:15.892 [2024-11-02 11:47:16.204524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.892 [2024-11-02 11:47:16.204550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.892 qpair failed and we were unable to recover it. 00:35:15.892 [2024-11-02 11:47:16.204735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.892 [2024-11-02 11:47:16.204763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.892 qpair failed and we were unable to recover it. 00:35:15.892 [2024-11-02 11:47:16.204929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.892 [2024-11-02 11:47:16.204957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.892 qpair failed and we were unable to recover it. 00:35:15.892 [2024-11-02 11:47:16.205133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.892 [2024-11-02 11:47:16.205162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.892 qpair failed and we were unable to recover it. 00:35:15.892 [2024-11-02 11:47:16.205297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.892 [2024-11-02 11:47:16.205323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.892 qpair failed and we were unable to recover it. 00:35:15.892 [2024-11-02 11:47:16.205515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.892 [2024-11-02 11:47:16.205544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.892 qpair failed and we were unable to recover it. 
00:35:15.892 [2024-11-02 11:47:16.205709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.892 [2024-11-02 11:47:16.205737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.892 qpair failed and we were unable to recover it. 00:35:15.892 [2024-11-02 11:47:16.205904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.892 [2024-11-02 11:47:16.205932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.892 qpair failed and we were unable to recover it. 00:35:15.892 [2024-11-02 11:47:16.206099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.892 [2024-11-02 11:47:16.206124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.892 qpair failed and we were unable to recover it. 00:35:15.892 [2024-11-02 11:47:16.206272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.892 [2024-11-02 11:47:16.206315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.892 qpair failed and we were unable to recover it. 00:35:15.892 [2024-11-02 11:47:16.206477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.892 [2024-11-02 11:47:16.206505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.892 qpair failed and we were unable to recover it. 00:35:15.892 [2024-11-02 11:47:16.206670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.892 [2024-11-02 11:47:16.206698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.892 qpair failed and we were unable to recover it. 00:35:15.892 [2024-11-02 11:47:16.206873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.892 [2024-11-02 11:47:16.206899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.892 qpair failed and we were unable to recover it. 00:35:15.892 [2024-11-02 11:47:16.207047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.892 [2024-11-02 11:47:16.207072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.892 qpair failed and we were unable to recover it. 00:35:15.892 [2024-11-02 11:47:16.207247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.892 [2024-11-02 11:47:16.207297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.892 qpair failed and we were unable to recover it. 00:35:15.892 [2024-11-02 11:47:16.207460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.892 [2024-11-02 11:47:16.207489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.892 qpair failed and we were unable to recover it. 
00:35:15.892 [2024-11-02 11:47:16.207659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.892 [2024-11-02 11:47:16.207684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.892 qpair failed and we were unable to recover it. 00:35:15.892 [2024-11-02 11:47:16.207836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.892 [2024-11-02 11:47:16.207861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.892 qpair failed and we were unable to recover it. 00:35:15.892 [2024-11-02 11:47:16.207978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.892 [2024-11-02 11:47:16.208004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.892 qpair failed and we were unable to recover it. 00:35:15.892 [2024-11-02 11:47:16.208155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.892 [2024-11-02 11:47:16.208180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.892 qpair failed and we were unable to recover it. 00:35:15.892 [2024-11-02 11:47:16.208355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.892 [2024-11-02 11:47:16.208381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.892 qpair failed and we were unable to recover it. 00:35:15.892 [2024-11-02 11:47:16.208526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.892 [2024-11-02 11:47:16.208552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.892 qpair failed and we were unable to recover it. 00:35:15.892 [2024-11-02 11:47:16.208764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.892 [2024-11-02 11:47:16.208790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.892 qpair failed and we were unable to recover it. 00:35:15.892 [2024-11-02 11:47:16.208943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.892 [2024-11-02 11:47:16.208968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.892 qpair failed and we were unable to recover it. 00:35:15.892 [2024-11-02 11:47:16.209113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.892 [2024-11-02 11:47:16.209138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.892 qpair failed and we were unable to recover it. 00:35:15.892 [2024-11-02 11:47:16.209338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.892 [2024-11-02 11:47:16.209367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.892 qpair failed and we were unable to recover it. 
00:35:15.892 [2024-11-02 11:47:16.209523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.892 [2024-11-02 11:47:16.209551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.892 qpair failed and we were unable to recover it. 00:35:15.892 [2024-11-02 11:47:16.209710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.892 [2024-11-02 11:47:16.209738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.892 qpair failed and we were unable to recover it. 00:35:15.892 [2024-11-02 11:47:16.209916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.892 [2024-11-02 11:47:16.209942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.892 qpair failed and we were unable to recover it. 00:35:15.892 [2024-11-02 11:47:16.210136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.892 [2024-11-02 11:47:16.210164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.892 qpair failed and we were unable to recover it. 00:35:15.892 [2024-11-02 11:47:16.210353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.892 [2024-11-02 11:47:16.210397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.892 qpair failed and we were unable to recover it. 00:35:15.892 [2024-11-02 11:47:16.210575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.892 [2024-11-02 11:47:16.210604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.892 qpair failed and we were unable to recover it. 00:35:15.892 [2024-11-02 11:47:16.210730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.892 [2024-11-02 11:47:16.210758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.892 qpair failed and we were unable to recover it. 00:35:15.892 [2024-11-02 11:47:16.210904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.892 [2024-11-02 11:47:16.210931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.892 qpair failed and we were unable to recover it. 00:35:15.892 [2024-11-02 11:47:16.211083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.892 [2024-11-02 11:47:16.211109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.892 qpair failed and we were unable to recover it. 00:35:15.892 [2024-11-02 11:47:16.211282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.892 [2024-11-02 11:47:16.211309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.892 qpair failed and we were unable to recover it. 
00:35:15.892 [2024-11-02 11:47:16.211458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.892 [2024-11-02 11:47:16.211485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.892 qpair failed and we were unable to recover it. 00:35:15.893 [2024-11-02 11:47:16.211630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.893 [2024-11-02 11:47:16.211657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.893 qpair failed and we were unable to recover it. 00:35:15.893 [2024-11-02 11:47:16.211832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.893 [2024-11-02 11:47:16.211859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.893 qpair failed and we were unable to recover it. 00:35:15.893 [2024-11-02 11:47:16.212011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.893 [2024-11-02 11:47:16.212037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.893 qpair failed and we were unable to recover it. 00:35:15.893 [2024-11-02 11:47:16.212190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.893 [2024-11-02 11:47:16.212216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.893 qpair failed and we were unable to recover it. 00:35:15.893 [2024-11-02 11:47:16.212367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.893 [2024-11-02 11:47:16.212394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.893 qpair failed and we were unable to recover it. 00:35:15.893 [2024-11-02 11:47:16.212539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.893 [2024-11-02 11:47:16.212566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.893 qpair failed and we were unable to recover it. 00:35:15.893 [2024-11-02 11:47:16.212743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.893 [2024-11-02 11:47:16.212770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.893 qpair failed and we were unable to recover it. 00:35:15.893 [2024-11-02 11:47:16.212926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.893 [2024-11-02 11:47:16.212953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.893 qpair failed and we were unable to recover it. 00:35:15.893 [2024-11-02 11:47:16.213079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.893 [2024-11-02 11:47:16.213106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:15.893 qpair failed and we were unable to recover it. 
00:35:15.893 [2024-11-02 11:47:16.213253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.893 [2024-11-02 11:47:16.213288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.893 qpair failed and we were unable to recover it. 00:35:15.893 [2024-11-02 11:47:16.213434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.893 [2024-11-02 11:47:16.213460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.893 qpair failed and we were unable to recover it. 00:35:15.893 [2024-11-02 11:47:16.213607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.893 [2024-11-02 11:47:16.213632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.893 qpair failed and we were unable to recover it. 00:35:15.893 [2024-11-02 11:47:16.213781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.893 [2024-11-02 11:47:16.213807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.893 qpair failed and we were unable to recover it. 00:35:15.893 [2024-11-02 11:47:16.213966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.893 [2024-11-02 11:47:16.213992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.893 qpair failed and we were unable to recover it. 00:35:15.893 [2024-11-02 11:47:16.214133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.893 [2024-11-02 11:47:16.214158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.893 qpair failed and we were unable to recover it. 00:35:15.893 [2024-11-02 11:47:16.214309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.893 [2024-11-02 11:47:16.214335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.893 qpair failed and we were unable to recover it. 00:35:15.893 [2024-11-02 11:47:16.214455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.893 [2024-11-02 11:47:16.214480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.893 qpair failed and we were unable to recover it. 00:35:15.893 [2024-11-02 11:47:16.214653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.893 [2024-11-02 11:47:16.214679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.893 qpair failed and we were unable to recover it. 00:35:15.893 [2024-11-02 11:47:16.214836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.893 [2024-11-02 11:47:16.214862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.893 qpair failed and we were unable to recover it. 
00:35:15.893 [2024-11-02 11:47:16.215042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.893 [2024-11-02 11:47:16.215068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.893 qpair failed and we were unable to recover it. 00:35:15.893 [2024-11-02 11:47:16.215191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.893 [2024-11-02 11:47:16.215216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.893 qpair failed and we were unable to recover it. 00:35:15.893 [2024-11-02 11:47:16.215368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.893 [2024-11-02 11:47:16.215394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.893 qpair failed and we were unable to recover it. 00:35:15.893 [2024-11-02 11:47:16.215514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.893 [2024-11-02 11:47:16.215539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.893 qpair failed and we were unable to recover it. 00:35:15.893 [2024-11-02 11:47:16.215669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.893 [2024-11-02 11:47:16.215696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.893 qpair failed and we were unable to recover it. 00:35:15.893 [2024-11-02 11:47:16.215864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.893 [2024-11-02 11:47:16.215890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.893 qpair failed and we were unable to recover it. 00:35:15.893 [2024-11-02 11:47:16.216048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.893 [2024-11-02 11:47:16.216073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.893 qpair failed and we were unable to recover it. 00:35:15.893 [2024-11-02 11:47:16.216183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.893 [2024-11-02 11:47:16.216208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.893 qpair failed and we were unable to recover it. 00:35:15.893 [2024-11-02 11:47:16.216332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.893 [2024-11-02 11:47:16.216359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.893 qpair failed and we were unable to recover it. 00:35:15.893 [2024-11-02 11:47:16.216518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.893 [2024-11-02 11:47:16.216544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.893 qpair failed and we were unable to recover it. 
00:35:15.893 [2024-11-02 11:47:16.216668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.893 [2024-11-02 11:47:16.216695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.893 qpair failed and we were unable to recover it. 00:35:15.893 [2024-11-02 11:47:16.216832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.893 [2024-11-02 11:47:16.216866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.893 qpair failed and we were unable to recover it. 00:35:15.893 [2024-11-02 11:47:16.217015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.893 [2024-11-02 11:47:16.217041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.893 qpair failed and we were unable to recover it. 00:35:15.893 [2024-11-02 11:47:16.217185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.893 [2024-11-02 11:47:16.217210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.893 qpair failed and we were unable to recover it. 00:35:15.893 [2024-11-02 11:47:16.217327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.893 [2024-11-02 11:47:16.217353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.893 qpair failed and we were unable to recover it. 00:35:15.893 [2024-11-02 11:47:16.217481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.893 [2024-11-02 11:47:16.217507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.893 qpair failed and we were unable to recover it. 00:35:15.893 [2024-11-02 11:47:16.217655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.893 [2024-11-02 11:47:16.217680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.893 qpair failed and we were unable to recover it. 00:35:15.893 [2024-11-02 11:47:16.217851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.893 [2024-11-02 11:47:16.217877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.893 qpair failed and we were unable to recover it. 00:35:15.893 [2024-11-02 11:47:16.218035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.893 [2024-11-02 11:47:16.218061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.893 qpair failed and we were unable to recover it. 00:35:15.894 [2024-11-02 11:47:16.218206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.894 [2024-11-02 11:47:16.218232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:15.894 qpair failed and we were unable to recover it. 
00:35:15.894 [2024-11-02 11:47:16.218368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.894 [2024-11-02 11:47:16.218394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420
00:35:15.894 qpair failed and we were unable to recover it.
00:35:15.894 [2024-11-02 11:47:16.219766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.894 [2024-11-02 11:47:16.219805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420
00:35:15.894 qpair failed and we were unable to recover it.
[The same three-line failure sequence repeats continuously from 11:47:16.218 through 11:47:16.258 (console time 00:35:15.894 to 00:35:16.173): connect() failed with errno = 111 in posix.c:1055:posix_sock_create, a sock connection error in nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock, first for tqpair=0x1ddc690 and then for tqpair=0x7f9cc4000b90, always against addr=10.0.0.2, port=4420, each time followed by "qpair failed and we were unable to recover it."]
00:35:16.173 [2024-11-02 11:47:16.258217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.173 [2024-11-02 11:47:16.258244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.173 qpair failed and we were unable to recover it. 00:35:16.173 [2024-11-02 11:47:16.258415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.173 [2024-11-02 11:47:16.258459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.173 qpair failed and we were unable to recover it. 00:35:16.173 [2024-11-02 11:47:16.258625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.173 [2024-11-02 11:47:16.258669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.173 qpair failed and we were unable to recover it. 00:35:16.173 [2024-11-02 11:47:16.258823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.173 [2024-11-02 11:47:16.258850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.173 qpair failed and we were unable to recover it. 00:35:16.173 [2024-11-02 11:47:16.259025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.173 [2024-11-02 11:47:16.259051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.173 qpair failed and we were unable to recover it. 00:35:16.173 [2024-11-02 11:47:16.259224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.173 [2024-11-02 11:47:16.259251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.173 qpair failed and we were unable to recover it. 00:35:16.173 [2024-11-02 11:47:16.259404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.173 [2024-11-02 11:47:16.259448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.173 qpair failed and we were unable to recover it. 00:35:16.173 [2024-11-02 11:47:16.259613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.173 [2024-11-02 11:47:16.259656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.173 qpair failed and we were unable to recover it. 00:35:16.173 [2024-11-02 11:47:16.259857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.173 [2024-11-02 11:47:16.259900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.173 qpair failed and we were unable to recover it. 00:35:16.173 [2024-11-02 11:47:16.260045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.173 [2024-11-02 11:47:16.260072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.173 qpair failed and we were unable to recover it. 
00:35:16.173 [2024-11-02 11:47:16.260216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.173 [2024-11-02 11:47:16.260244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.173 qpair failed and we were unable to recover it. 00:35:16.173 [2024-11-02 11:47:16.260439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.174 [2024-11-02 11:47:16.260482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:16.174 qpair failed and we were unable to recover it. 00:35:16.174 [2024-11-02 11:47:16.260633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.174 [2024-11-02 11:47:16.260665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:16.174 qpair failed and we were unable to recover it. 00:35:16.174 [2024-11-02 11:47:16.260828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.174 [2024-11-02 11:47:16.260859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:16.174 qpair failed and we were unable to recover it. 00:35:16.174 [2024-11-02 11:47:16.261022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.174 [2024-11-02 11:47:16.261051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:16.174 qpair failed and we were unable to recover it. 00:35:16.174 [2024-11-02 11:47:16.261186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.174 [2024-11-02 11:47:16.261212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:16.174 qpair failed and we were unable to recover it. 00:35:16.174 [2024-11-02 11:47:16.261375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.174 [2024-11-02 11:47:16.261403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:16.174 qpair failed and we were unable to recover it. 00:35:16.174 [2024-11-02 11:47:16.261526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.174 [2024-11-02 11:47:16.261571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:16.174 qpair failed and we were unable to recover it. 00:35:16.174 [2024-11-02 11:47:16.261763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.174 [2024-11-02 11:47:16.261793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:16.174 qpair failed and we were unable to recover it. 00:35:16.174 [2024-11-02 11:47:16.261981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.174 [2024-11-02 11:47:16.262010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:16.174 qpair failed and we were unable to recover it. 
00:35:16.174 [2024-11-02 11:47:16.262176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.174 [2024-11-02 11:47:16.262205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:16.174 qpair failed and we were unable to recover it. 00:35:16.174 [2024-11-02 11:47:16.262376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.174 [2024-11-02 11:47:16.262403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:16.174 qpair failed and we were unable to recover it. 00:35:16.174 [2024-11-02 11:47:16.262567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.174 [2024-11-02 11:47:16.262596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:16.174 qpair failed and we were unable to recover it. 00:35:16.174 [2024-11-02 11:47:16.262727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.174 [2024-11-02 11:47:16.262756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:16.174 qpair failed and we were unable to recover it. 00:35:16.174 [2024-11-02 11:47:16.262891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.174 [2024-11-02 11:47:16.262920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:16.174 qpair failed and we were unable to recover it. 00:35:16.174 [2024-11-02 11:47:16.263084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.174 [2024-11-02 11:47:16.263112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:16.174 qpair failed and we were unable to recover it. 00:35:16.174 [2024-11-02 11:47:16.263244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.174 [2024-11-02 11:47:16.263297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:16.174 qpair failed and we were unable to recover it. 00:35:16.174 [2024-11-02 11:47:16.263441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.174 [2024-11-02 11:47:16.263468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:16.174 qpair failed and we were unable to recover it. 00:35:16.174 [2024-11-02 11:47:16.263644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.174 [2024-11-02 11:47:16.263672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:16.174 qpair failed and we were unable to recover it. 00:35:16.174 [2024-11-02 11:47:16.263860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.174 [2024-11-02 11:47:16.263889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:16.174 qpair failed and we were unable to recover it. 
00:35:16.174 [2024-11-02 11:47:16.264073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.174 [2024-11-02 11:47:16.264101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:16.174 qpair failed and we were unable to recover it. 00:35:16.174 [2024-11-02 11:47:16.264243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.174 [2024-11-02 11:47:16.264286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:16.174 qpair failed and we were unable to recover it. 00:35:16.174 [2024-11-02 11:47:16.264456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.174 [2024-11-02 11:47:16.264482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:16.174 qpair failed and we were unable to recover it. 00:35:16.174 [2024-11-02 11:47:16.264680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.174 [2024-11-02 11:47:16.264708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:16.174 qpair failed and we were unable to recover it. 00:35:16.174 [2024-11-02 11:47:16.264837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.174 [2024-11-02 11:47:16.264867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:16.174 qpair failed and we were unable to recover it. 00:35:16.174 [2024-11-02 11:47:16.265056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.174 [2024-11-02 11:47:16.265085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:16.174 qpair failed and we were unable to recover it. 00:35:16.174 [2024-11-02 11:47:16.265251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.174 [2024-11-02 11:47:16.265301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:16.174 qpair failed and we were unable to recover it. 00:35:16.174 [2024-11-02 11:47:16.265454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.174 [2024-11-02 11:47:16.265480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:16.174 qpair failed and we were unable to recover it. 00:35:16.174 [2024-11-02 11:47:16.265655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.174 [2024-11-02 11:47:16.265694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:16.174 qpair failed and we were unable to recover it. 00:35:16.174 [2024-11-02 11:47:16.265855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.174 [2024-11-02 11:47:16.265884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:16.174 qpair failed and we were unable to recover it. 
00:35:16.174 [2024-11-02 11:47:16.266077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.174 [2024-11-02 11:47:16.266106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:16.174 qpair failed and we were unable to recover it. 00:35:16.174 [2024-11-02 11:47:16.266283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.174 [2024-11-02 11:47:16.266309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:16.174 qpair failed and we were unable to recover it. 00:35:16.174 [2024-11-02 11:47:16.266458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.174 [2024-11-02 11:47:16.266485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:16.174 qpair failed and we were unable to recover it. 00:35:16.174 [2024-11-02 11:47:16.266637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.174 [2024-11-02 11:47:16.266663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:16.174 qpair failed and we were unable to recover it. 00:35:16.174 [2024-11-02 11:47:16.266875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.174 [2024-11-02 11:47:16.266904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:16.174 qpair failed and we were unable to recover it. 00:35:16.174 [2024-11-02 11:47:16.267066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.174 [2024-11-02 11:47:16.267094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:16.174 qpair failed and we were unable to recover it. 00:35:16.174 [2024-11-02 11:47:16.267232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.174 [2024-11-02 11:47:16.267265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:16.174 qpair failed and we were unable to recover it. 00:35:16.174 [2024-11-02 11:47:16.267434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.174 [2024-11-02 11:47:16.267460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:16.174 qpair failed and we were unable to recover it. 00:35:16.174 [2024-11-02 11:47:16.267606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.174 [2024-11-02 11:47:16.267649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:16.174 qpair failed and we were unable to recover it. 00:35:16.174 [2024-11-02 11:47:16.267814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.174 [2024-11-02 11:47:16.267843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:16.175 qpair failed and we were unable to recover it. 
00:35:16.175 [2024-11-02 11:47:16.268031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.175 [2024-11-02 11:47:16.268060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:16.175 qpair failed and we were unable to recover it. 00:35:16.175 [2024-11-02 11:47:16.268251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.175 [2024-11-02 11:47:16.268285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:16.175 qpair failed and we were unable to recover it. 00:35:16.175 [2024-11-02 11:47:16.268486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.175 [2024-11-02 11:47:16.268512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:16.175 qpair failed and we were unable to recover it. 00:35:16.175 [2024-11-02 11:47:16.268699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.175 [2024-11-02 11:47:16.268727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:16.175 qpair failed and we were unable to recover it. 00:35:16.175 [2024-11-02 11:47:16.268887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.175 [2024-11-02 11:47:16.268916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:16.175 qpair failed and we were unable to recover it. 00:35:16.175 [2024-11-02 11:47:16.269099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.175 [2024-11-02 11:47:16.269128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:16.175 qpair failed and we were unable to recover it. 00:35:16.175 [2024-11-02 11:47:16.269286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.175 [2024-11-02 11:47:16.269328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:16.175 qpair failed and we were unable to recover it. 00:35:16.175 [2024-11-02 11:47:16.269500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.175 [2024-11-02 11:47:16.269543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:16.175 qpair failed and we were unable to recover it. 00:35:16.175 [2024-11-02 11:47:16.269712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.175 [2024-11-02 11:47:16.269741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:16.175 qpair failed and we were unable to recover it. 00:35:16.175 [2024-11-02 11:47:16.269932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.175 [2024-11-02 11:47:16.269960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:16.175 qpair failed and we were unable to recover it. 
00:35:16.175 [2024-11-02 11:47:16.270097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.175 [2024-11-02 11:47:16.270126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:16.175 qpair failed and we were unable to recover it. 00:35:16.175 [2024-11-02 11:47:16.270284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.175 [2024-11-02 11:47:16.270326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:16.175 qpair failed and we were unable to recover it. 00:35:16.175 [2024-11-02 11:47:16.270443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.175 [2024-11-02 11:47:16.270469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:16.175 qpair failed and we were unable to recover it. 00:35:16.175 [2024-11-02 11:47:16.270646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.175 [2024-11-02 11:47:16.270672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:16.175 qpair failed and we were unable to recover it. 00:35:16.175 [2024-11-02 11:47:16.270817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.175 [2024-11-02 11:47:16.270846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:16.175 qpair failed and we were unable to recover it. 00:35:16.175 [2024-11-02 11:47:16.271039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.175 [2024-11-02 11:47:16.271067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:16.175 qpair failed and we were unable to recover it. 00:35:16.175 [2024-11-02 11:47:16.271226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.175 [2024-11-02 11:47:16.271289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:16.175 qpair failed and we were unable to recover it. 00:35:16.175 [2024-11-02 11:47:16.271463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.175 [2024-11-02 11:47:16.271489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:16.175 qpair failed and we were unable to recover it. 00:35:16.175 [2024-11-02 11:47:16.271666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.175 [2024-11-02 11:47:16.271695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:16.175 qpair failed and we were unable to recover it. 00:35:16.175 [2024-11-02 11:47:16.271904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.175 [2024-11-02 11:47:16.271933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:16.175 qpair failed and we were unable to recover it. 
00:35:16.175 [2024-11-02 11:47:16.272100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.175 [2024-11-02 11:47:16.272128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:16.175 qpair failed and we were unable to recover it. 00:35:16.175 [2024-11-02 11:47:16.272300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.175 [2024-11-02 11:47:16.272328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:16.175 qpair failed and we were unable to recover it. 00:35:16.175 [2024-11-02 11:47:16.272518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.175 [2024-11-02 11:47:16.272562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:16.175 qpair failed and we were unable to recover it. 00:35:16.175 [2024-11-02 11:47:16.272695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.175 [2024-11-02 11:47:16.272726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:16.175 qpair failed and we were unable to recover it. 00:35:16.175 [2024-11-02 11:47:16.272910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.175 [2024-11-02 11:47:16.272953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.175 qpair failed and we were unable to recover it. 00:35:16.175 [2024-11-02 11:47:16.273121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.175 [2024-11-02 11:47:16.273152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.175 qpair failed and we were unable to recover it. 00:35:16.175 [2024-11-02 11:47:16.273301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.175 [2024-11-02 11:47:16.273330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.175 qpair failed and we were unable to recover it. 00:35:16.175 [2024-11-02 11:47:16.273483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.175 [2024-11-02 11:47:16.273509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.175 qpair failed and we were unable to recover it. 00:35:16.175 [2024-11-02 11:47:16.273660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.175 [2024-11-02 11:47:16.273692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.175 qpair failed and we were unable to recover it. 00:35:16.175 [2024-11-02 11:47:16.273890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.175 [2024-11-02 11:47:16.273918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.175 qpair failed and we were unable to recover it. 
00:35:16.175 [2024-11-02 11:47:16.274042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.175 [2024-11-02 11:47:16.274070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.175 qpair failed and we were unable to recover it. 00:35:16.175 [2024-11-02 11:47:16.274241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.175 [2024-11-02 11:47:16.274280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.175 qpair failed and we were unable to recover it. 00:35:16.175 [2024-11-02 11:47:16.274425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.175 [2024-11-02 11:47:16.274451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.175 qpair failed and we were unable to recover it. 00:35:16.175 [2024-11-02 11:47:16.274625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.175 [2024-11-02 11:47:16.274650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.175 qpair failed and we were unable to recover it. 00:35:16.175 [2024-11-02 11:47:16.274823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.175 [2024-11-02 11:47:16.274851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.175 qpair failed and we were unable to recover it. 00:35:16.175 [2024-11-02 11:47:16.274990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.175 [2024-11-02 11:47:16.275021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.175 qpair failed and we were unable to recover it. 00:35:16.175 [2024-11-02 11:47:16.275166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.175 [2024-11-02 11:47:16.275194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.175 qpair failed and we were unable to recover it. 00:35:16.175 [2024-11-02 11:47:16.275397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.175 [2024-11-02 11:47:16.275424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.175 qpair failed and we were unable to recover it. 00:35:16.176 [2024-11-02 11:47:16.275542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.176 [2024-11-02 11:47:16.275570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.176 qpair failed and we were unable to recover it. 00:35:16.176 [2024-11-02 11:47:16.275735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.176 [2024-11-02 11:47:16.275777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.176 qpair failed and we were unable to recover it. 
00:35:16.176 [2024-11-02 11:47:16.275935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.176 [2024-11-02 11:47:16.275963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.176 qpair failed and we were unable to recover it. 00:35:16.176 [2024-11-02 11:47:16.276216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.176 [2024-11-02 11:47:16.276245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.176 qpair failed and we were unable to recover it. 00:35:16.176 [2024-11-02 11:47:16.276404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.176 [2024-11-02 11:47:16.276430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.176 qpair failed and we were unable to recover it. 00:35:16.176 [2024-11-02 11:47:16.276568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.176 [2024-11-02 11:47:16.276596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.176 qpair failed and we were unable to recover it. 00:35:16.176 [2024-11-02 11:47:16.276768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.176 [2024-11-02 11:47:16.276793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.176 qpair failed and we were unable to recover it. 00:35:16.176 [2024-11-02 11:47:16.276970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.176 [2024-11-02 11:47:16.276998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.176 qpair failed and we were unable to recover it. 00:35:16.176 [2024-11-02 11:47:16.277148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.176 [2024-11-02 11:47:16.277174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.176 qpair failed and we were unable to recover it. 00:35:16.176 [2024-11-02 11:47:16.277312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.176 [2024-11-02 11:47:16.277339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.176 qpair failed and we were unable to recover it. 00:35:16.176 [2024-11-02 11:47:16.277509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.176 [2024-11-02 11:47:16.277552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.176 qpair failed and we were unable to recover it. 00:35:16.176 [2024-11-02 11:47:16.277705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.176 [2024-11-02 11:47:16.277734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.176 qpair failed and we were unable to recover it. 
00:35:16.176 [2024-11-02 11:47:16.277900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.176 [2024-11-02 11:47:16.277928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.176 qpair failed and we were unable to recover it. 00:35:16.176 [2024-11-02 11:47:16.278085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.176 [2024-11-02 11:47:16.278114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.176 qpair failed and we were unable to recover it. 00:35:16.176 [2024-11-02 11:47:16.278246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.176 [2024-11-02 11:47:16.278283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.176 qpair failed and we were unable to recover it. 00:35:16.176 [2024-11-02 11:47:16.278452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.176 [2024-11-02 11:47:16.278477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.176 qpair failed and we were unable to recover it. 00:35:16.176 [2024-11-02 11:47:16.278669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.176 [2024-11-02 11:47:16.278698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.176 qpair failed and we were unable to recover it. 00:35:16.176 [2024-11-02 11:47:16.278894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.176 [2024-11-02 11:47:16.278923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.176 qpair failed and we were unable to recover it. 00:35:16.176 [2024-11-02 11:47:16.279061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.176 [2024-11-02 11:47:16.279090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.176 qpair failed and we were unable to recover it. 00:35:16.176 [2024-11-02 11:47:16.279270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.176 [2024-11-02 11:47:16.279298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.176 qpair failed and we were unable to recover it. 00:35:16.176 [2024-11-02 11:47:16.279472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.176 [2024-11-02 11:47:16.279498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.176 qpair failed and we were unable to recover it. 00:35:16.176 [2024-11-02 11:47:16.279663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.176 [2024-11-02 11:47:16.279692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.176 qpair failed and we were unable to recover it. 
00:35:16.176 [2024-11-02 11:47:16.279878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.176 [2024-11-02 11:47:16.279906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.176 qpair failed and we were unable to recover it. 00:35:16.176 [2024-11-02 11:47:16.280069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.176 [2024-11-02 11:47:16.280098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.176 qpair failed and we were unable to recover it. 00:35:16.176 [2024-11-02 11:47:16.280263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.176 [2024-11-02 11:47:16.280306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.176 qpair failed and we were unable to recover it. 00:35:16.176 [2024-11-02 11:47:16.280476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.176 [2024-11-02 11:47:16.280502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.176 qpair failed and we were unable to recover it. 00:35:16.176 [2024-11-02 11:47:16.280703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.176 [2024-11-02 11:47:16.280728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.176 qpair failed and we were unable to recover it. 00:35:16.176 [2024-11-02 11:47:16.280877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.176 [2024-11-02 11:47:16.280902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.176 qpair failed and we were unable to recover it. 00:35:16.176 [2024-11-02 11:47:16.281031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.176 [2024-11-02 11:47:16.281056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.176 qpair failed and we were unable to recover it. 00:35:16.176 [2024-11-02 11:47:16.281178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.176 [2024-11-02 11:47:16.281204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.176 qpair failed and we were unable to recover it. 00:35:16.176 [2024-11-02 11:47:16.281358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.176 [2024-11-02 11:47:16.281385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.176 qpair failed and we were unable to recover it. 00:35:16.176 [2024-11-02 11:47:16.281507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.176 [2024-11-02 11:47:16.281533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.176 qpair failed and we were unable to recover it. 
00:35:16.176 [2024-11-02 11:47:16.281689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.176 [2024-11-02 11:47:16.281714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.176 qpair failed and we were unable to recover it. 00:35:16.176 [2024-11-02 11:47:16.281894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.176 [2024-11-02 11:47:16.281921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.176 qpair failed and we were unable to recover it. 00:35:16.176 [2024-11-02 11:47:16.282089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.176 [2024-11-02 11:47:16.282133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.176 qpair failed and we were unable to recover it. 00:35:16.176 [2024-11-02 11:47:16.282303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.176 [2024-11-02 11:47:16.282329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.176 qpair failed and we were unable to recover it. 00:35:16.176 [2024-11-02 11:47:16.282452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.176 [2024-11-02 11:47:16.282478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.176 qpair failed and we were unable to recover it. 00:35:16.176 [2024-11-02 11:47:16.282606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.176 [2024-11-02 11:47:16.282632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.176 qpair failed and we were unable to recover it. 00:35:16.176 [2024-11-02 11:47:16.282772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.177 [2024-11-02 11:47:16.282797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.177 qpair failed and we were unable to recover it. 00:35:16.177 [2024-11-02 11:47:16.282938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.177 [2024-11-02 11:47:16.282964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.177 qpair failed and we were unable to recover it. 00:35:16.177 [2024-11-02 11:47:16.283086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.177 [2024-11-02 11:47:16.283131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.177 qpair failed and we were unable to recover it. 00:35:16.177 [2024-11-02 11:47:16.283333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.177 [2024-11-02 11:47:16.283360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.177 qpair failed and we were unable to recover it. 
00:35:16.177 [2024-11-02 11:47:16.283500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:16.177 [2024-11-02 11:47:16.283525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420
00:35:16.177 qpair failed and we were unable to recover it.
00:35:16.182 [2024-11-02 11:47:16.322494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:16.182 [2024-11-02 11:47:16.322520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420
00:35:16.182 qpair failed and we were unable to recover it.
00:35:16.182 [2024-11-02 11:47:16.322715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.182 [2024-11-02 11:47:16.322741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.182 qpair failed and we were unable to recover it. 00:35:16.182 [2024-11-02 11:47:16.322899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.182 [2024-11-02 11:47:16.322928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.182 qpair failed and we were unable to recover it. 00:35:16.182 [2024-11-02 11:47:16.323044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.182 [2024-11-02 11:47:16.323072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.182 qpair failed and we were unable to recover it. 00:35:16.182 [2024-11-02 11:47:16.323236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.182 [2024-11-02 11:47:16.323285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.182 qpair failed and we were unable to recover it. 00:35:16.182 [2024-11-02 11:47:16.323454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.182 [2024-11-02 11:47:16.323480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.182 qpair failed and we were unable to recover it. 00:35:16.182 [2024-11-02 11:47:16.323661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.182 [2024-11-02 11:47:16.323689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.182 qpair failed and we were unable to recover it. 00:35:16.182 [2024-11-02 11:47:16.323876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.182 [2024-11-02 11:47:16.323904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.182 qpair failed and we were unable to recover it. 00:35:16.182 [2024-11-02 11:47:16.324110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.182 [2024-11-02 11:47:16.324135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.182 qpair failed and we were unable to recover it. 00:35:16.182 [2024-11-02 11:47:16.324247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.182 [2024-11-02 11:47:16.324278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.182 qpair failed and we were unable to recover it. 00:35:16.182 [2024-11-02 11:47:16.324469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.182 [2024-11-02 11:47:16.324497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.183 qpair failed and we were unable to recover it. 
00:35:16.183 [2024-11-02 11:47:16.324688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.183 [2024-11-02 11:47:16.324717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.183 qpair failed and we were unable to recover it. 00:35:16.183 [2024-11-02 11:47:16.324889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.183 [2024-11-02 11:47:16.324914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.183 qpair failed and we were unable to recover it. 00:35:16.183 [2024-11-02 11:47:16.325056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.183 [2024-11-02 11:47:16.325082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.183 qpair failed and we were unable to recover it. 00:35:16.183 [2024-11-02 11:47:16.325246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.183 [2024-11-02 11:47:16.325283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.183 qpair failed and we were unable to recover it. 00:35:16.183 [2024-11-02 11:47:16.325454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.183 [2024-11-02 11:47:16.325483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.183 qpair failed and we were unable to recover it. 00:35:16.183 [2024-11-02 11:47:16.325676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.183 [2024-11-02 11:47:16.325709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.183 qpair failed and we were unable to recover it. 00:35:16.183 [2024-11-02 11:47:16.325850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.183 [2024-11-02 11:47:16.325875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.183 qpair failed and we were unable to recover it. 00:35:16.183 [2024-11-02 11:47:16.326006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.183 [2024-11-02 11:47:16.326032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.183 qpair failed and we were unable to recover it. 00:35:16.183 [2024-11-02 11:47:16.326206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.183 [2024-11-02 11:47:16.326249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.183 qpair failed and we were unable to recover it. 00:35:16.183 [2024-11-02 11:47:16.326452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.183 [2024-11-02 11:47:16.326481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.183 qpair failed and we were unable to recover it. 
00:35:16.183 [2024-11-02 11:47:16.326645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.183 [2024-11-02 11:47:16.326671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.183 qpair failed and we were unable to recover it. 00:35:16.183 [2024-11-02 11:47:16.326862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.183 [2024-11-02 11:47:16.326891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.183 qpair failed and we were unable to recover it. 00:35:16.183 [2024-11-02 11:47:16.327092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.183 [2024-11-02 11:47:16.327118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.183 qpair failed and we were unable to recover it. 00:35:16.183 [2024-11-02 11:47:16.327225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.183 [2024-11-02 11:47:16.327271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.183 qpair failed and we were unable to recover it. 00:35:16.183 [2024-11-02 11:47:16.327444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.183 [2024-11-02 11:47:16.327469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.183 qpair failed and we were unable to recover it. 00:35:16.183 [2024-11-02 11:47:16.327631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.183 [2024-11-02 11:47:16.327659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.183 qpair failed and we were unable to recover it. 00:35:16.183 [2024-11-02 11:47:16.327845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.183 [2024-11-02 11:47:16.327873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.183 qpair failed and we were unable to recover it. 00:35:16.183 [2024-11-02 11:47:16.328037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.183 [2024-11-02 11:47:16.328065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.183 qpair failed and we were unable to recover it. 00:35:16.183 [2024-11-02 11:47:16.328253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.183 [2024-11-02 11:47:16.328285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.183 qpair failed and we were unable to recover it. 00:35:16.183 [2024-11-02 11:47:16.328453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.183 [2024-11-02 11:47:16.328481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.183 qpair failed and we were unable to recover it. 
00:35:16.183 [2024-11-02 11:47:16.328679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.183 [2024-11-02 11:47:16.328707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.183 qpair failed and we were unable to recover it. 00:35:16.183 [2024-11-02 11:47:16.328875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.183 [2024-11-02 11:47:16.328901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.183 qpair failed and we were unable to recover it. 00:35:16.183 [2024-11-02 11:47:16.329059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.183 [2024-11-02 11:47:16.329084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.183 qpair failed and we were unable to recover it. 00:35:16.183 [2024-11-02 11:47:16.329281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.183 [2024-11-02 11:47:16.329310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.183 qpair failed and we were unable to recover it. 00:35:16.183 [2024-11-02 11:47:16.329495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.183 [2024-11-02 11:47:16.329524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.183 qpair failed and we were unable to recover it. 00:35:16.183 [2024-11-02 11:47:16.329713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.183 [2024-11-02 11:47:16.329741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.183 qpair failed and we were unable to recover it. 00:35:16.183 [2024-11-02 11:47:16.329905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.183 [2024-11-02 11:47:16.329930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.183 qpair failed and we were unable to recover it. 00:35:16.183 [2024-11-02 11:47:16.330101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.183 [2024-11-02 11:47:16.330129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.183 qpair failed and we were unable to recover it. 00:35:16.183 [2024-11-02 11:47:16.330278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.183 [2024-11-02 11:47:16.330307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.183 qpair failed and we were unable to recover it. 00:35:16.183 [2024-11-02 11:47:16.330458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.183 [2024-11-02 11:47:16.330487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.183 qpair failed and we were unable to recover it. 
00:35:16.183 [2024-11-02 11:47:16.330649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.183 [2024-11-02 11:47:16.330675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.183 qpair failed and we were unable to recover it. 00:35:16.183 [2024-11-02 11:47:16.330827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.183 [2024-11-02 11:47:16.330868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.183 qpair failed and we were unable to recover it. 00:35:16.183 [2024-11-02 11:47:16.331024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.183 [2024-11-02 11:47:16.331052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.183 qpair failed and we were unable to recover it. 00:35:16.183 [2024-11-02 11:47:16.331211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.183 [2024-11-02 11:47:16.331240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.183 qpair failed and we were unable to recover it. 00:35:16.183 [2024-11-02 11:47:16.331450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.183 [2024-11-02 11:47:16.331475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.183 qpair failed and we were unable to recover it. 00:35:16.183 [2024-11-02 11:47:16.331606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.183 [2024-11-02 11:47:16.331632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.183 qpair failed and we were unable to recover it. 00:35:16.183 [2024-11-02 11:47:16.331803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.183 [2024-11-02 11:47:16.331831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.183 qpair failed and we were unable to recover it. 00:35:16.183 [2024-11-02 11:47:16.331985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.183 [2024-11-02 11:47:16.332013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.183 qpair failed and we were unable to recover it. 00:35:16.183 [2024-11-02 11:47:16.332182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.184 [2024-11-02 11:47:16.332207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.184 qpair failed and we were unable to recover it. 00:35:16.184 [2024-11-02 11:47:16.332364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.184 [2024-11-02 11:47:16.332393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.184 qpair failed and we were unable to recover it. 
00:35:16.184 [2024-11-02 11:47:16.332582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.184 [2024-11-02 11:47:16.332615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.184 qpair failed and we were unable to recover it. 00:35:16.184 [2024-11-02 11:47:16.332788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.184 [2024-11-02 11:47:16.332816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.184 qpair failed and we were unable to recover it. 00:35:16.184 [2024-11-02 11:47:16.332990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.184 [2024-11-02 11:47:16.333016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.184 qpair failed and we were unable to recover it. 00:35:16.184 [2024-11-02 11:47:16.333211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.184 [2024-11-02 11:47:16.333239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.184 qpair failed and we were unable to recover it. 00:35:16.184 [2024-11-02 11:47:16.333415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.184 [2024-11-02 11:47:16.333444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.184 qpair failed and we were unable to recover it. 00:35:16.184 [2024-11-02 11:47:16.333607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.184 [2024-11-02 11:47:16.333635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.184 qpair failed and we were unable to recover it. 00:35:16.184 [2024-11-02 11:47:16.333809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.184 [2024-11-02 11:47:16.333836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.184 qpair failed and we were unable to recover it. 00:35:16.184 [2024-11-02 11:47:16.334030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.184 [2024-11-02 11:47:16.334058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.184 qpair failed and we were unable to recover it. 00:35:16.184 [2024-11-02 11:47:16.334221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.184 [2024-11-02 11:47:16.334278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.184 qpair failed and we were unable to recover it. 00:35:16.184 [2024-11-02 11:47:16.334413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.184 [2024-11-02 11:47:16.334441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.184 qpair failed and we were unable to recover it. 
00:35:16.184 [2024-11-02 11:47:16.334594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.184 [2024-11-02 11:47:16.334619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.184 qpair failed and we were unable to recover it. 00:35:16.184 [2024-11-02 11:47:16.334736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.184 [2024-11-02 11:47:16.334762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.184 qpair failed and we were unable to recover it. 00:35:16.184 [2024-11-02 11:47:16.334923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.184 [2024-11-02 11:47:16.334951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.184 qpair failed and we were unable to recover it. 00:35:16.184 [2024-11-02 11:47:16.335109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.184 [2024-11-02 11:47:16.335137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.184 qpair failed and we were unable to recover it. 00:35:16.184 [2024-11-02 11:47:16.335325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.184 [2024-11-02 11:47:16.335352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.184 qpair failed and we were unable to recover it. 00:35:16.184 [2024-11-02 11:47:16.335517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.184 [2024-11-02 11:47:16.335555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.184 qpair failed and we were unable to recover it. 00:35:16.184 [2024-11-02 11:47:16.335714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.184 [2024-11-02 11:47:16.335742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.184 qpair failed and we were unable to recover it. 00:35:16.184 [2024-11-02 11:47:16.335876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.184 [2024-11-02 11:47:16.335905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.184 qpair failed and we were unable to recover it. 00:35:16.184 [2024-11-02 11:47:16.336100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.184 [2024-11-02 11:47:16.336126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.184 qpair failed and we were unable to recover it. 00:35:16.184 [2024-11-02 11:47:16.336293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.184 [2024-11-02 11:47:16.336321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.184 qpair failed and we were unable to recover it. 
00:35:16.184 [2024-11-02 11:47:16.336518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.184 [2024-11-02 11:47:16.336547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.184 qpair failed and we were unable to recover it. 00:35:16.184 [2024-11-02 11:47:16.336680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.184 [2024-11-02 11:47:16.336709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.184 qpair failed and we were unable to recover it. 00:35:16.184 [2024-11-02 11:47:16.336849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.184 [2024-11-02 11:47:16.336874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.184 qpair failed and we were unable to recover it. 00:35:16.184 [2024-11-02 11:47:16.337026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.184 [2024-11-02 11:47:16.337052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.184 qpair failed and we were unable to recover it. 00:35:16.184 [2024-11-02 11:47:16.337212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.184 [2024-11-02 11:47:16.337240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.184 qpair failed and we were unable to recover it. 00:35:16.184 [2024-11-02 11:47:16.337409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.184 [2024-11-02 11:47:16.337435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.184 qpair failed and we were unable to recover it. 00:35:16.184 [2024-11-02 11:47:16.337580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.184 [2024-11-02 11:47:16.337606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.184 qpair failed and we were unable to recover it. 00:35:16.184 [2024-11-02 11:47:16.337771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.184 [2024-11-02 11:47:16.337806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.184 qpair failed and we were unable to recover it. 00:35:16.184 [2024-11-02 11:47:16.337994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.184 [2024-11-02 11:47:16.338023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.184 qpair failed and we were unable to recover it. 00:35:16.184 [2024-11-02 11:47:16.338187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.184 [2024-11-02 11:47:16.338216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.184 qpair failed and we were unable to recover it. 
00:35:16.184 [2024-11-02 11:47:16.338394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.184 [2024-11-02 11:47:16.338420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.184 qpair failed and we were unable to recover it. 00:35:16.184 [2024-11-02 11:47:16.338561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.184 [2024-11-02 11:47:16.338590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.184 qpair failed and we were unable to recover it. 00:35:16.184 [2024-11-02 11:47:16.338724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.184 [2024-11-02 11:47:16.338752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.184 qpair failed and we were unable to recover it. 00:35:16.184 [2024-11-02 11:47:16.338909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.184 [2024-11-02 11:47:16.338937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.184 qpair failed and we were unable to recover it. 00:35:16.184 [2024-11-02 11:47:16.339126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.184 [2024-11-02 11:47:16.339152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.184 qpair failed and we were unable to recover it. 00:35:16.184 [2024-11-02 11:47:16.339313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.184 [2024-11-02 11:47:16.339342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.184 qpair failed and we were unable to recover it. 00:35:16.184 [2024-11-02 11:47:16.339476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.184 [2024-11-02 11:47:16.339505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.184 qpair failed and we were unable to recover it. 00:35:16.185 [2024-11-02 11:47:16.339658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.185 [2024-11-02 11:47:16.339687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.185 qpair failed and we were unable to recover it. 00:35:16.185 [2024-11-02 11:47:16.339860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.185 [2024-11-02 11:47:16.339885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.185 qpair failed and we were unable to recover it. 00:35:16.185 [2024-11-02 11:47:16.340063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.185 [2024-11-02 11:47:16.340088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.185 qpair failed and we were unable to recover it. 
00:35:16.185 [2024-11-02 11:47:16.340207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.185 [2024-11-02 11:47:16.340233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.185 qpair failed and we were unable to recover it. 00:35:16.185 [2024-11-02 11:47:16.340435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.185 [2024-11-02 11:47:16.340478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.185 qpair failed and we were unable to recover it. 00:35:16.185 [2024-11-02 11:47:16.340628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.185 [2024-11-02 11:47:16.340654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.185 qpair failed and we were unable to recover it. 00:35:16.185 [2024-11-02 11:47:16.340805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.185 [2024-11-02 11:47:16.340832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.185 qpair failed and we were unable to recover it. 00:35:16.185 [2024-11-02 11:47:16.340984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.185 [2024-11-02 11:47:16.341027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.185 qpair failed and we were unable to recover it. 00:35:16.185 [2024-11-02 11:47:16.341198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.185 [2024-11-02 11:47:16.341223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.185 qpair failed and we were unable to recover it. 00:35:16.185 [2024-11-02 11:47:16.341376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.185 [2024-11-02 11:47:16.341402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.185 qpair failed and we were unable to recover it. 00:35:16.185 [2024-11-02 11:47:16.341554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.185 [2024-11-02 11:47:16.341598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.185 qpair failed and we were unable to recover it. 00:35:16.185 [2024-11-02 11:47:16.341782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.185 [2024-11-02 11:47:16.341810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.185 qpair failed and we were unable to recover it. 00:35:16.185 [2024-11-02 11:47:16.341972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.185 [2024-11-02 11:47:16.342001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.185 qpair failed and we were unable to recover it. 
00:35:16.185 [2024-11-02 11:47:16.342143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.185 [2024-11-02 11:47:16.342170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.185 qpair failed and we were unable to recover it. 00:35:16.185 [2024-11-02 11:47:16.342352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.185 [2024-11-02 11:47:16.342397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.185 qpair failed and we were unable to recover it. 00:35:16.185 [2024-11-02 11:47:16.342562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.185 [2024-11-02 11:47:16.342591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.185 qpair failed and we were unable to recover it. 00:35:16.185 [2024-11-02 11:47:16.342751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.185 [2024-11-02 11:47:16.342779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.185 qpair failed and we were unable to recover it. 00:35:16.185 [2024-11-02 11:47:16.342948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.185 [2024-11-02 11:47:16.342974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.185 qpair failed and we were unable to recover it. 00:35:16.185 [2024-11-02 11:47:16.343138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.185 [2024-11-02 11:47:16.343181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.185 qpair failed and we were unable to recover it. 00:35:16.185 [2024-11-02 11:47:16.343351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.185 [2024-11-02 11:47:16.343379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.185 qpair failed and we were unable to recover it. 00:35:16.185 [2024-11-02 11:47:16.343539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.185 [2024-11-02 11:47:16.343567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.185 qpair failed and we were unable to recover it. 00:35:16.185 [2024-11-02 11:47:16.343736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.185 [2024-11-02 11:47:16.343762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.185 qpair failed and we were unable to recover it. 00:35:16.185 [2024-11-02 11:47:16.343932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.185 [2024-11-02 11:47:16.343960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.185 qpair failed and we were unable to recover it. 
00:35:16.185 [2024-11-02 11:47:16.344123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.185 [2024-11-02 11:47:16.344151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.185 qpair failed and we were unable to recover it. 00:35:16.185 [2024-11-02 11:47:16.344307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.185 [2024-11-02 11:47:16.344336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.185 qpair failed and we were unable to recover it. 00:35:16.185 [2024-11-02 11:47:16.344525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.185 [2024-11-02 11:47:16.344551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.185 qpair failed and we were unable to recover it. 00:35:16.185 [2024-11-02 11:47:16.344728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.185 [2024-11-02 11:47:16.344756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.185 qpair failed and we were unable to recover it. 00:35:16.185 [2024-11-02 11:47:16.344910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.185 [2024-11-02 11:47:16.344938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.185 qpair failed and we were unable to recover it. 00:35:16.185 [2024-11-02 11:47:16.345078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.185 [2024-11-02 11:47:16.345107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.185 qpair failed and we were unable to recover it. 00:35:16.185 [2024-11-02 11:47:16.345306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.185 [2024-11-02 11:47:16.345332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.185 qpair failed and we were unable to recover it. 00:35:16.185 [2024-11-02 11:47:16.345489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.185 [2024-11-02 11:47:16.345514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.185 qpair failed and we were unable to recover it. 00:35:16.185 [2024-11-02 11:47:16.345710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.185 [2024-11-02 11:47:16.345736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.185 qpair failed and we were unable to recover it. 00:35:16.185 [2024-11-02 11:47:16.345846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.185 [2024-11-02 11:47:16.345871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.185 qpair failed and we were unable to recover it. 
00:35:16.185 [2024-11-02 11:47:16.345993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.185 [2024-11-02 11:47:16.346019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.185 qpair failed and we were unable to recover it. 00:35:16.185 [2024-11-02 11:47:16.346164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.185 [2024-11-02 11:47:16.346208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.185 qpair failed and we were unable to recover it. 00:35:16.185 [2024-11-02 11:47:16.346401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.185 [2024-11-02 11:47:16.346430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.185 qpair failed and we were unable to recover it. 00:35:16.185 [2024-11-02 11:47:16.346610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.185 [2024-11-02 11:47:16.346635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.185 qpair failed and we were unable to recover it. 00:35:16.185 [2024-11-02 11:47:16.346751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.185 [2024-11-02 11:47:16.346777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.185 qpair failed and we were unable to recover it. 00:35:16.185 [2024-11-02 11:47:16.346920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.185 [2024-11-02 11:47:16.346945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.185 qpair failed and we were unable to recover it. 00:35:16.186 [2024-11-02 11:47:16.347142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.186 [2024-11-02 11:47:16.347169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.186 qpair failed and we were unable to recover it. 00:35:16.186 [2024-11-02 11:47:16.347303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.186 [2024-11-02 11:47:16.347332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.186 qpair failed and we were unable to recover it. 00:35:16.186 [2024-11-02 11:47:16.347531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.186 [2024-11-02 11:47:16.347557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.186 qpair failed and we were unable to recover it. 00:35:16.186 [2024-11-02 11:47:16.347726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.186 [2024-11-02 11:47:16.347754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.186 qpair failed and we were unable to recover it. 
00:35:16.186 [2024-11-02 11:47:16.347906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:16.186 [2024-11-02 11:47:16.347935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420
00:35:16.186 qpair failed and we were unable to recover it.
[... the preceding connect()/qpair error sequence repeats 82 more times for tqpair=0x1ddc690, timestamps 11:47:16.348073 through 11:47:16.363340 ...]
00:35:16.188 [2024-11-02 11:47:16.363505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:16.188 [2024-11-02 11:47:16.363552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420
00:35:16.188 qpair failed and we were unable to recover it.
[... the preceding connect()/qpair error sequence repeats 126 more times for tqpair=0x7f9cc0000b90, timestamps 11:47:16.363712 through 11:47:16.386400 ...]
00:35:16.191 [2024-11-02 11:47:16.386522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.191 [2024-11-02 11:47:16.386549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.191 qpair failed and we were unable to recover it. 00:35:16.191 [2024-11-02 11:47:16.386695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.191 [2024-11-02 11:47:16.386722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.191 qpair failed and we were unable to recover it. 00:35:16.191 [2024-11-02 11:47:16.386850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.191 [2024-11-02 11:47:16.386877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.191 qpair failed and we were unable to recover it. 00:35:16.191 [2024-11-02 11:47:16.387027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.191 [2024-11-02 11:47:16.387052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.191 qpair failed and we were unable to recover it. 00:35:16.191 [2024-11-02 11:47:16.387227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.191 [2024-11-02 11:47:16.387253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.191 qpair failed and we were unable to recover it. 00:35:16.191 [2024-11-02 11:47:16.387416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.191 [2024-11-02 11:47:16.387443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.191 qpair failed and we were unable to recover it. 00:35:16.191 [2024-11-02 11:47:16.387625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.191 [2024-11-02 11:47:16.387652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.191 qpair failed and we were unable to recover it. 00:35:16.191 [2024-11-02 11:47:16.387805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.191 [2024-11-02 11:47:16.387831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.191 qpair failed and we were unable to recover it. 00:35:16.191 [2024-11-02 11:47:16.387957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.192 [2024-11-02 11:47:16.387983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.192 qpair failed and we were unable to recover it. 00:35:16.192 [2024-11-02 11:47:16.388156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.192 [2024-11-02 11:47:16.388183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.192 qpair failed and we were unable to recover it. 
00:35:16.192 [2024-11-02 11:47:16.388330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.192 [2024-11-02 11:47:16.388357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.192 qpair failed and we were unable to recover it. 00:35:16.192 [2024-11-02 11:47:16.388504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.192 [2024-11-02 11:47:16.388530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.192 qpair failed and we were unable to recover it. 00:35:16.192 [2024-11-02 11:47:16.388689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.192 [2024-11-02 11:47:16.388715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.192 qpair failed and we were unable to recover it. 00:35:16.192 [2024-11-02 11:47:16.388860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.192 [2024-11-02 11:47:16.388887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.192 qpair failed and we were unable to recover it. 00:35:16.192 [2024-11-02 11:47:16.389037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.192 [2024-11-02 11:47:16.389063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.192 qpair failed and we were unable to recover it. 00:35:16.192 [2024-11-02 11:47:16.389182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.192 [2024-11-02 11:47:16.389209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.192 qpair failed and we were unable to recover it. 00:35:16.192 [2024-11-02 11:47:16.389340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.192 [2024-11-02 11:47:16.389367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.192 qpair failed and we were unable to recover it. 00:35:16.192 [2024-11-02 11:47:16.389541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.192 [2024-11-02 11:47:16.389567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.192 qpair failed and we were unable to recover it. 00:35:16.192 [2024-11-02 11:47:16.389711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.192 [2024-11-02 11:47:16.389739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.192 qpair failed and we were unable to recover it. 00:35:16.192 [2024-11-02 11:47:16.389888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.192 [2024-11-02 11:47:16.389914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.192 qpair failed and we were unable to recover it. 
00:35:16.192 [2024-11-02 11:47:16.390047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.192 [2024-11-02 11:47:16.390073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.192 qpair failed and we were unable to recover it. 00:35:16.192 [2024-11-02 11:47:16.390224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.192 [2024-11-02 11:47:16.390266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.192 qpair failed and we were unable to recover it. 00:35:16.192 [2024-11-02 11:47:16.390419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.192 [2024-11-02 11:47:16.390445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.192 qpair failed and we were unable to recover it. 00:35:16.192 [2024-11-02 11:47:16.390601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.192 [2024-11-02 11:47:16.390627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.192 qpair failed and we were unable to recover it. 00:35:16.192 [2024-11-02 11:47:16.390798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.192 [2024-11-02 11:47:16.390825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.192 qpair failed and we were unable to recover it. 00:35:16.192 [2024-11-02 11:47:16.390969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.192 [2024-11-02 11:47:16.390995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.192 qpair failed and we were unable to recover it. 00:35:16.192 [2024-11-02 11:47:16.391117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.192 [2024-11-02 11:47:16.391145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.192 qpair failed and we were unable to recover it. 00:35:16.192 [2024-11-02 11:47:16.391302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.192 [2024-11-02 11:47:16.391329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.192 qpair failed and we were unable to recover it. 00:35:16.192 [2024-11-02 11:47:16.391449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.192 [2024-11-02 11:47:16.391476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.192 qpair failed and we were unable to recover it. 00:35:16.192 [2024-11-02 11:47:16.391630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.192 [2024-11-02 11:47:16.391656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.192 qpair failed and we were unable to recover it. 
00:35:16.192 [2024-11-02 11:47:16.391809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.192 [2024-11-02 11:47:16.391834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.192 qpair failed and we were unable to recover it. 00:35:16.192 [2024-11-02 11:47:16.391956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.192 [2024-11-02 11:47:16.391984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.192 qpair failed and we were unable to recover it. 00:35:16.192 [2024-11-02 11:47:16.392164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.192 [2024-11-02 11:47:16.392190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.192 qpair failed and we were unable to recover it. 00:35:16.192 [2024-11-02 11:47:16.392341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.192 [2024-11-02 11:47:16.392373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.192 qpair failed and we were unable to recover it. 00:35:16.192 [2024-11-02 11:47:16.392517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.192 [2024-11-02 11:47:16.392544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.192 qpair failed and we were unable to recover it. 00:35:16.192 [2024-11-02 11:47:16.392664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.192 [2024-11-02 11:47:16.392690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.192 qpair failed and we were unable to recover it. 00:35:16.192 [2024-11-02 11:47:16.392842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.192 [2024-11-02 11:47:16.392868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.192 qpair failed and we were unable to recover it. 00:35:16.192 [2024-11-02 11:47:16.393035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.192 [2024-11-02 11:47:16.393061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.192 qpair failed and we were unable to recover it. 00:35:16.192 [2024-11-02 11:47:16.393233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.192 [2024-11-02 11:47:16.393264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.192 qpair failed and we were unable to recover it. 00:35:16.192 [2024-11-02 11:47:16.393376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.192 [2024-11-02 11:47:16.393402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.192 qpair failed and we were unable to recover it. 
00:35:16.192 [2024-11-02 11:47:16.393556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.192 [2024-11-02 11:47:16.393583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.192 qpair failed and we were unable to recover it. 00:35:16.192 [2024-11-02 11:47:16.393734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.192 [2024-11-02 11:47:16.393761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.192 qpair failed and we were unable to recover it. 00:35:16.192 [2024-11-02 11:47:16.393878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.192 [2024-11-02 11:47:16.393904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.192 qpair failed and we were unable to recover it. 00:35:16.192 [2024-11-02 11:47:16.394084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.192 [2024-11-02 11:47:16.394110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.192 qpair failed and we were unable to recover it. 00:35:16.192 [2024-11-02 11:47:16.394264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.192 [2024-11-02 11:47:16.394290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.192 qpair failed and we were unable to recover it. 00:35:16.192 [2024-11-02 11:47:16.394462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.192 [2024-11-02 11:47:16.394489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.192 qpair failed and we were unable to recover it. 00:35:16.192 [2024-11-02 11:47:16.394612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.192 [2024-11-02 11:47:16.394638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.193 qpair failed and we were unable to recover it. 00:35:16.193 [2024-11-02 11:47:16.394816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.193 [2024-11-02 11:47:16.394842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.193 qpair failed and we were unable to recover it. 00:35:16.193 [2024-11-02 11:47:16.394990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.193 [2024-11-02 11:47:16.395016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.193 qpair failed and we were unable to recover it. 00:35:16.193 [2024-11-02 11:47:16.395157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.193 [2024-11-02 11:47:16.395183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.193 qpair failed and we were unable to recover it. 
00:35:16.193 [2024-11-02 11:47:16.395311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.193 [2024-11-02 11:47:16.395337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.193 qpair failed and we were unable to recover it. 00:35:16.193 [2024-11-02 11:47:16.395491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.193 [2024-11-02 11:47:16.395517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.193 qpair failed and we were unable to recover it. 00:35:16.193 [2024-11-02 11:47:16.395668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.193 [2024-11-02 11:47:16.395694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.193 qpair failed and we were unable to recover it. 00:35:16.193 [2024-11-02 11:47:16.395845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.193 [2024-11-02 11:47:16.395872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.193 qpair failed and we were unable to recover it. 00:35:16.193 [2024-11-02 11:47:16.396046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.193 [2024-11-02 11:47:16.396073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.193 qpair failed and we were unable to recover it. 00:35:16.193 [2024-11-02 11:47:16.396225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.193 [2024-11-02 11:47:16.396252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.193 qpair failed and we were unable to recover it. 00:35:16.193 [2024-11-02 11:47:16.396379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.193 [2024-11-02 11:47:16.396405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.193 qpair failed and we were unable to recover it. 00:35:16.193 [2024-11-02 11:47:16.396585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.193 [2024-11-02 11:47:16.396612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.193 qpair failed and we were unable to recover it. 00:35:16.193 [2024-11-02 11:47:16.396759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.193 [2024-11-02 11:47:16.396785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.193 qpair failed and we were unable to recover it. 00:35:16.193 [2024-11-02 11:47:16.396937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.193 [2024-11-02 11:47:16.396963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.193 qpair failed and we were unable to recover it. 
00:35:16.193 [2024-11-02 11:47:16.397115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.193 [2024-11-02 11:47:16.397141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.193 qpair failed and we were unable to recover it. 00:35:16.193 [2024-11-02 11:47:16.397300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.193 [2024-11-02 11:47:16.397326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.193 qpair failed and we were unable to recover it. 00:35:16.193 [2024-11-02 11:47:16.397476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.193 [2024-11-02 11:47:16.397503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.193 qpair failed and we were unable to recover it. 00:35:16.193 [2024-11-02 11:47:16.397653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.193 [2024-11-02 11:47:16.397679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.193 qpair failed and we were unable to recover it. 00:35:16.193 [2024-11-02 11:47:16.397829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.193 [2024-11-02 11:47:16.397856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.193 qpair failed and we were unable to recover it. 00:35:16.193 [2024-11-02 11:47:16.398028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.193 [2024-11-02 11:47:16.398055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.193 qpair failed and we were unable to recover it. 00:35:16.193 [2024-11-02 11:47:16.398209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.193 [2024-11-02 11:47:16.398236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.193 qpair failed and we were unable to recover it. 00:35:16.193 [2024-11-02 11:47:16.398407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.193 [2024-11-02 11:47:16.398434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.193 qpair failed and we were unable to recover it. 00:35:16.193 [2024-11-02 11:47:16.398596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.193 [2024-11-02 11:47:16.398623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.193 qpair failed and we were unable to recover it. 00:35:16.193 [2024-11-02 11:47:16.398767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.193 [2024-11-02 11:47:16.398794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.193 qpair failed and we were unable to recover it. 
00:35:16.193 [2024-11-02 11:47:16.398938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.193 [2024-11-02 11:47:16.398965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.193 qpair failed and we were unable to recover it. 00:35:16.193 [2024-11-02 11:47:16.399090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.193 [2024-11-02 11:47:16.399116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.193 qpair failed and we were unable to recover it. 00:35:16.193 [2024-11-02 11:47:16.399279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.193 [2024-11-02 11:47:16.399306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.193 qpair failed and we were unable to recover it. 00:35:16.193 [2024-11-02 11:47:16.399458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.193 [2024-11-02 11:47:16.399488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.193 qpair failed and we were unable to recover it. 00:35:16.193 [2024-11-02 11:47:16.399645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.193 [2024-11-02 11:47:16.399671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.193 qpair failed and we were unable to recover it. 00:35:16.193 [2024-11-02 11:47:16.399786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.193 [2024-11-02 11:47:16.399813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.193 qpair failed and we were unable to recover it. 00:35:16.193 [2024-11-02 11:47:16.399966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.193 [2024-11-02 11:47:16.399992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.193 qpair failed and we were unable to recover it. 00:35:16.193 [2024-11-02 11:47:16.400134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.193 [2024-11-02 11:47:16.400160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.193 qpair failed and we were unable to recover it. 00:35:16.193 [2024-11-02 11:47:16.400335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.193 [2024-11-02 11:47:16.400362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.193 qpair failed and we were unable to recover it. 00:35:16.193 [2024-11-02 11:47:16.400502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.193 [2024-11-02 11:47:16.400528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.193 qpair failed and we were unable to recover it. 
00:35:16.193 [2024-11-02 11:47:16.400690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.193 [2024-11-02 11:47:16.400715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.193 qpair failed and we were unable to recover it. 00:35:16.193 [2024-11-02 11:47:16.400860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.193 [2024-11-02 11:47:16.400886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.193 qpair failed and we were unable to recover it. 00:35:16.193 [2024-11-02 11:47:16.401031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.193 [2024-11-02 11:47:16.401056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.193 qpair failed and we were unable to recover it. 00:35:16.193 [2024-11-02 11:47:16.401217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.193 [2024-11-02 11:47:16.401244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.193 qpair failed and we were unable to recover it. 00:35:16.193 [2024-11-02 11:47:16.401412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.193 [2024-11-02 11:47:16.401438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.193 qpair failed and we were unable to recover it. 00:35:16.194 [2024-11-02 11:47:16.401582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.194 [2024-11-02 11:47:16.401609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.194 qpair failed and we were unable to recover it. 00:35:16.194 [2024-11-02 11:47:16.401754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.194 [2024-11-02 11:47:16.401780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.194 qpair failed and we were unable to recover it. 00:35:16.194 [2024-11-02 11:47:16.401936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.194 [2024-11-02 11:47:16.401962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.194 qpair failed and we were unable to recover it. 00:35:16.194 [2024-11-02 11:47:16.402109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.194 [2024-11-02 11:47:16.402136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.194 qpair failed and we were unable to recover it. 00:35:16.194 [2024-11-02 11:47:16.402301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.194 [2024-11-02 11:47:16.402327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.194 qpair failed and we were unable to recover it. 
00:35:16.194 [2024-11-02 11:47:16.402451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.194 [2024-11-02 11:47:16.402477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.194 qpair failed and we were unable to recover it. 00:35:16.194 [2024-11-02 11:47:16.402625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.194 [2024-11-02 11:47:16.402652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.194 qpair failed and we were unable to recover it. 00:35:16.194 [2024-11-02 11:47:16.402830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.194 [2024-11-02 11:47:16.402857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.194 qpair failed and we were unable to recover it. 00:35:16.194 [2024-11-02 11:47:16.403031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.194 [2024-11-02 11:47:16.403057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.194 qpair failed and we were unable to recover it. 00:35:16.194 [2024-11-02 11:47:16.403180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.194 [2024-11-02 11:47:16.403206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.194 qpair failed and we were unable to recover it. 00:35:16.194 [2024-11-02 11:47:16.403363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.194 [2024-11-02 11:47:16.403390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.194 qpair failed and we were unable to recover it. 00:35:16.194 [2024-11-02 11:47:16.403567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.194 [2024-11-02 11:47:16.403593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.194 qpair failed and we were unable to recover it. 00:35:16.194 [2024-11-02 11:47:16.403732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.194 [2024-11-02 11:47:16.403758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.194 qpair failed and we were unable to recover it. 00:35:16.194 [2024-11-02 11:47:16.403904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.194 [2024-11-02 11:47:16.403930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.194 qpair failed and we were unable to recover it. 00:35:16.194 [2024-11-02 11:47:16.404075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.194 [2024-11-02 11:47:16.404101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.194 qpair failed and we were unable to recover it. 
00:35:16.194 [2024-11-02 11:47:16.404237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.194 [2024-11-02 11:47:16.404272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.194 qpair failed and we were unable to recover it. 00:35:16.194 [2024-11-02 11:47:16.404424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.194 [2024-11-02 11:47:16.404450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.194 qpair failed and we were unable to recover it. 00:35:16.194 [2024-11-02 11:47:16.404564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.194 [2024-11-02 11:47:16.404590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.194 qpair failed and we were unable to recover it. 00:35:16.194 [2024-11-02 11:47:16.404718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.194 [2024-11-02 11:47:16.404744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.194 qpair failed and we were unable to recover it. 00:35:16.194 [2024-11-02 11:47:16.404909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.194 [2024-11-02 11:47:16.404936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.194 qpair failed and we were unable to recover it. 00:35:16.194 [2024-11-02 11:47:16.405080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.194 [2024-11-02 11:47:16.405106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.194 qpair failed and we were unable to recover it. 00:35:16.194 [2024-11-02 11:47:16.405261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.194 [2024-11-02 11:47:16.405288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.194 qpair failed and we were unable to recover it. 00:35:16.194 [2024-11-02 11:47:16.405406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.194 [2024-11-02 11:47:16.405433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.194 qpair failed and we were unable to recover it. 00:35:16.194 [2024-11-02 11:47:16.405552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.194 [2024-11-02 11:47:16.405579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.194 qpair failed and we were unable to recover it. 00:35:16.194 [2024-11-02 11:47:16.405807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.194 [2024-11-02 11:47:16.405834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.194 qpair failed and we were unable to recover it. 
00:35:16.194 [2024-11-02 11:47:16.406004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.194 [2024-11-02 11:47:16.406029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.194 qpair failed and we were unable to recover it. 00:35:16.194 [2024-11-02 11:47:16.406154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.194 [2024-11-02 11:47:16.406181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.194 qpair failed and we were unable to recover it. 00:35:16.194 [2024-11-02 11:47:16.406336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.194 [2024-11-02 11:47:16.406363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.194 qpair failed and we were unable to recover it. 00:35:16.194 [2024-11-02 11:47:16.406514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.194 [2024-11-02 11:47:16.406544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.194 qpair failed and we were unable to recover it. 00:35:16.194 [2024-11-02 11:47:16.406694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.194 [2024-11-02 11:47:16.406720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.194 qpair failed and we were unable to recover it. 00:35:16.194 [2024-11-02 11:47:16.406896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.194 [2024-11-02 11:47:16.406922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.194 qpair failed and we were unable to recover it. 00:35:16.194 [2024-11-02 11:47:16.407080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.194 [2024-11-02 11:47:16.407106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.194 qpair failed and we were unable to recover it. 00:35:16.194 [2024-11-02 11:47:16.407278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.194 [2024-11-02 11:47:16.407304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.194 qpair failed and we were unable to recover it. 00:35:16.194 [2024-11-02 11:47:16.407530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.194 [2024-11-02 11:47:16.407556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.194 qpair failed and we were unable to recover it. 00:35:16.194 [2024-11-02 11:47:16.407703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.194 [2024-11-02 11:47:16.407729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.194 qpair failed and we were unable to recover it. 
00:35:16.194 [2024-11-02 11:47:16.407879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.194 [2024-11-02 11:47:16.407904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.194 qpair failed and we were unable to recover it. 00:35:16.194 [2024-11-02 11:47:16.408070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.194 [2024-11-02 11:47:16.408096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.194 qpair failed and we were unable to recover it. 00:35:16.194 [2024-11-02 11:47:16.408235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.194 [2024-11-02 11:47:16.408267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.194 qpair failed and we were unable to recover it. 00:35:16.194 [2024-11-02 11:47:16.408418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.195 [2024-11-02 11:47:16.408445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.195 qpair failed and we were unable to recover it. 00:35:16.195 [2024-11-02 11:47:16.408573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.195 [2024-11-02 11:47:16.408599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.195 qpair failed and we were unable to recover it. 00:35:16.195 [2024-11-02 11:47:16.408742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.195 [2024-11-02 11:47:16.408768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.195 qpair failed and we were unable to recover it. 00:35:16.195 [2024-11-02 11:47:16.408918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.195 [2024-11-02 11:47:16.408944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.195 qpair failed and we were unable to recover it. 00:35:16.195 [2024-11-02 11:47:16.409095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.195 [2024-11-02 11:47:16.409121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.195 qpair failed and we were unable to recover it. 00:35:16.195 [2024-11-02 11:47:16.409350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.195 [2024-11-02 11:47:16.409376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.195 qpair failed and we were unable to recover it. 00:35:16.195 [2024-11-02 11:47:16.409500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.195 [2024-11-02 11:47:16.409527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.195 qpair failed and we were unable to recover it. 
00:35:16.195 [2024-11-02 11:47:16.409709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:16.195 [2024-11-02 11:47:16.409736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420
00:35:16.195 qpair failed and we were unable to recover it.
00:35:16.196 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3981589 Killed "${NVMF_APP[@]}" "$@"
00:35:16.196 11:47:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:35:16.196 11:47:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:35:16.196 11:47:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:35:16.196 11:47:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:35:16.196 11:47:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:16.196 11:47:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3982127
00:35:16.196 11:47:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:35:16.196 11:47:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3982127
00:35:16.196 11:47:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # '[' -z 3982127 ']'
00:35:16.196 11:47:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:35:16.197 11:47:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100
00:35:16.197 11:47:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:35:16.197 11:47:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable
00:35:16.197 11:47:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:16.197 [2024-11-02 11:47:16.422511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.197 [2024-11-02 11:47:16.422538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.197 qpair failed and we were unable to recover it. 00:35:16.197 [2024-11-02 11:47:16.422663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.197 [2024-11-02 11:47:16.422689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.197 qpair failed and we were unable to recover it. 00:35:16.197 [2024-11-02 11:47:16.422845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.197 [2024-11-02 11:47:16.422871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.197 qpair failed and we were unable to recover it. 00:35:16.197 [2024-11-02 11:47:16.422996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.197 [2024-11-02 11:47:16.423024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.197 qpair failed and we were unable to recover it. 00:35:16.197 [2024-11-02 11:47:16.423164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.197 [2024-11-02 11:47:16.423190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.197 qpair failed and we were unable to recover it. 00:35:16.197 [2024-11-02 11:47:16.423351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.197 [2024-11-02 11:47:16.423378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.197 qpair failed and we were unable to recover it. 00:35:16.197 [2024-11-02 11:47:16.423553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.197 [2024-11-02 11:47:16.423578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.197 qpair failed and we were unable to recover it. 00:35:16.197 [2024-11-02 11:47:16.423735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.197 [2024-11-02 11:47:16.423762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.197 qpair failed and we were unable to recover it. 00:35:16.197 [2024-11-02 11:47:16.423921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.197 [2024-11-02 11:47:16.423950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.197 qpair failed and we were unable to recover it. 00:35:16.197 [2024-11-02 11:47:16.424140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.197 [2024-11-02 11:47:16.424170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.197 qpair failed and we were unable to recover it. 
00:35:16.197 [2024-11-02 11:47:16.424339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.197 [2024-11-02 11:47:16.424366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.197 qpair failed and we were unable to recover it. 00:35:16.197 [2024-11-02 11:47:16.424505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.197 [2024-11-02 11:47:16.424535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.197 qpair failed and we were unable to recover it. 00:35:16.197 [2024-11-02 11:47:16.424749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.197 [2024-11-02 11:47:16.424779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.197 qpair failed and we were unable to recover it. 00:35:16.197 [2024-11-02 11:47:16.424965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.197 [2024-11-02 11:47:16.424995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.197 qpair failed and we were unable to recover it. 00:35:16.197 [2024-11-02 11:47:16.425157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.197 [2024-11-02 11:47:16.425184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.197 qpair failed and we were unable to recover it. 00:35:16.197 [2024-11-02 11:47:16.425374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.197 [2024-11-02 11:47:16.425404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.197 qpair failed and we were unable to recover it. 00:35:16.197 [2024-11-02 11:47:16.425582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.197 [2024-11-02 11:47:16.425611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.197 qpair failed and we were unable to recover it. 00:35:16.197 [2024-11-02 11:47:16.425763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.197 [2024-11-02 11:47:16.425794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.197 qpair failed and we were unable to recover it. 00:35:16.197 [2024-11-02 11:47:16.425960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.197 [2024-11-02 11:47:16.425986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.197 qpair failed and we were unable to recover it. 00:35:16.197 [2024-11-02 11:47:16.426138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.197 [2024-11-02 11:47:16.426166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.197 qpair failed and we were unable to recover it. 
00:35:16.197 [2024-11-02 11:47:16.426339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.197 [2024-11-02 11:47:16.426369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.197 qpair failed and we were unable to recover it. 00:35:16.197 [2024-11-02 11:47:16.426546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.197 [2024-11-02 11:47:16.426576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.197 qpair failed and we were unable to recover it. 00:35:16.197 [2024-11-02 11:47:16.426760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.197 [2024-11-02 11:47:16.426789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.197 qpair failed and we were unable to recover it. 00:35:16.197 [2024-11-02 11:47:16.426986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.197 [2024-11-02 11:47:16.427012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.197 qpair failed and we were unable to recover it. 00:35:16.197 [2024-11-02 11:47:16.427189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.197 [2024-11-02 11:47:16.427216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.197 qpair failed and we were unable to recover it. 00:35:16.197 [2024-11-02 11:47:16.427382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.197 [2024-11-02 11:47:16.427413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.198 qpair failed and we were unable to recover it. 00:35:16.198 [2024-11-02 11:47:16.427564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.198 [2024-11-02 11:47:16.427609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.198 qpair failed and we were unable to recover it. 00:35:16.198 [2024-11-02 11:47:16.427761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.198 [2024-11-02 11:47:16.427787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.198 qpair failed and we were unable to recover it. 00:35:16.198 [2024-11-02 11:47:16.427965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.198 [2024-11-02 11:47:16.427991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.198 qpair failed and we were unable to recover it. 00:35:16.198 [2024-11-02 11:47:16.428166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.198 [2024-11-02 11:47:16.428192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.198 qpair failed and we were unable to recover it. 
00:35:16.198 [2024-11-02 11:47:16.428342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.198 [2024-11-02 11:47:16.428373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.198 qpair failed and we were unable to recover it. 00:35:16.198 [2024-11-02 11:47:16.428536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.198 [2024-11-02 11:47:16.428566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.198 qpair failed and we were unable to recover it. 00:35:16.198 [2024-11-02 11:47:16.428759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.198 [2024-11-02 11:47:16.428786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.198 qpair failed and we were unable to recover it. 00:35:16.198 [2024-11-02 11:47:16.428940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.198 [2024-11-02 11:47:16.428967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.198 qpair failed and we were unable to recover it. 00:35:16.198 [2024-11-02 11:47:16.429118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.198 [2024-11-02 11:47:16.429145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.198 qpair failed and we were unable to recover it. 00:35:16.198 [2024-11-02 11:47:16.429329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.198 [2024-11-02 11:47:16.429358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.198 qpair failed and we were unable to recover it. 00:35:16.198 [2024-11-02 11:47:16.429535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.198 [2024-11-02 11:47:16.429565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.198 qpair failed and we were unable to recover it. 00:35:16.198 [2024-11-02 11:47:16.429781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.198 [2024-11-02 11:47:16.429812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.198 qpair failed and we were unable to recover it. 00:35:16.198 [2024-11-02 11:47:16.430041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.198 [2024-11-02 11:47:16.430068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.198 qpair failed and we were unable to recover it. 00:35:16.198 [2024-11-02 11:47:16.430260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.198 [2024-11-02 11:47:16.430303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.198 qpair failed and we were unable to recover it. 
00:35:16.198 [2024-11-02 11:47:16.430472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.198 [2024-11-02 11:47:16.430502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.198 qpair failed and we were unable to recover it. 00:35:16.198 [2024-11-02 11:47:16.430739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.198 [2024-11-02 11:47:16.430768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.198 qpair failed and we were unable to recover it. 00:35:16.198 [2024-11-02 11:47:16.430906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.198 [2024-11-02 11:47:16.430933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.198 qpair failed and we were unable to recover it. 00:35:16.198 [2024-11-02 11:47:16.431108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.198 [2024-11-02 11:47:16.431134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.198 qpair failed and we were unable to recover it. 00:35:16.198 [2024-11-02 11:47:16.431271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.198 [2024-11-02 11:47:16.431316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.198 qpair failed and we were unable to recover it. 00:35:16.198 [2024-11-02 11:47:16.431479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.198 [2024-11-02 11:47:16.431509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.198 qpair failed and we were unable to recover it. 00:35:16.198 [2024-11-02 11:47:16.431703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.198 [2024-11-02 11:47:16.431732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.198 qpair failed and we were unable to recover it. 00:35:16.198 [2024-11-02 11:47:16.431904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.198 [2024-11-02 11:47:16.431930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.198 qpair failed and we were unable to recover it. 00:35:16.198 [2024-11-02 11:47:16.432079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.198 [2024-11-02 11:47:16.432105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.198 qpair failed and we were unable to recover it. 00:35:16.198 [2024-11-02 11:47:16.432222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.198 [2024-11-02 11:47:16.432249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.198 qpair failed and we were unable to recover it. 
00:35:16.198 [2024-11-02 11:47:16.432441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.198 [2024-11-02 11:47:16.432470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.198 qpair failed and we were unable to recover it. 00:35:16.198 [2024-11-02 11:47:16.432693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.198 [2024-11-02 11:47:16.432721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.198 qpair failed and we were unable to recover it. 00:35:16.198 [2024-11-02 11:47:16.432884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.198 [2024-11-02 11:47:16.432910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.198 qpair failed and we were unable to recover it. 00:35:16.198 [2024-11-02 11:47:16.433073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.198 [2024-11-02 11:47:16.433099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.198 qpair failed and we were unable to recover it. 00:35:16.198 [2024-11-02 11:47:16.433245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.198 [2024-11-02 11:47:16.433277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.198 qpair failed and we were unable to recover it. 00:35:16.198 [2024-11-02 11:47:16.433452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.198 [2024-11-02 11:47:16.433481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.198 qpair failed and we were unable to recover it. 00:35:16.198 [2024-11-02 11:47:16.433726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.198 [2024-11-02 11:47:16.433752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.198 qpair failed and we were unable to recover it. 00:35:16.198 [2024-11-02 11:47:16.433901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.198 [2024-11-02 11:47:16.433926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.198 qpair failed and we were unable to recover it. 00:35:16.198 [2024-11-02 11:47:16.434048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.198 [2024-11-02 11:47:16.434074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.198 qpair failed and we were unable to recover it. 00:35:16.198 [2024-11-02 11:47:16.434200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.198 [2024-11-02 11:47:16.434226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.198 qpair failed and we were unable to recover it. 
00:35:16.198 [2024-11-02 11:47:16.434387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.198 [2024-11-02 11:47:16.434415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.198 qpair failed and we were unable to recover it. 00:35:16.198 [2024-11-02 11:47:16.434572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.198 [2024-11-02 11:47:16.434600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.198 qpair failed and we were unable to recover it. 00:35:16.198 [2024-11-02 11:47:16.434764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.198 [2024-11-02 11:47:16.434791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.198 qpair failed and we were unable to recover it. 00:35:16.198 [2024-11-02 11:47:16.434966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.198 [2024-11-02 11:47:16.434992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.198 qpair failed and we were unable to recover it. 00:35:16.199 [2024-11-02 11:47:16.435147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.199 [2024-11-02 11:47:16.435173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.199 qpair failed and we were unable to recover it. 00:35:16.199 [2024-11-02 11:47:16.435352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.199 [2024-11-02 11:47:16.435379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.199 qpair failed and we were unable to recover it. 00:35:16.199 [2024-11-02 11:47:16.435495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.199 [2024-11-02 11:47:16.435523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.199 qpair failed and we were unable to recover it. 00:35:16.199 [2024-11-02 11:47:16.435680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.199 [2024-11-02 11:47:16.435708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.199 qpair failed and we were unable to recover it. 00:35:16.199 [2024-11-02 11:47:16.435854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.199 [2024-11-02 11:47:16.435880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.199 qpair failed and we were unable to recover it. 00:35:16.199 [2024-11-02 11:47:16.436021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.199 [2024-11-02 11:47:16.436047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.199 qpair failed and we were unable to recover it. 
00:35:16.199 [2024-11-02 11:47:16.436198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.199 [2024-11-02 11:47:16.436225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.199 qpair failed and we were unable to recover it. 00:35:16.199 [2024-11-02 11:47:16.436437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.199 [2024-11-02 11:47:16.436465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.199 qpair failed and we were unable to recover it. 00:35:16.199 [2024-11-02 11:47:16.436665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.199 [2024-11-02 11:47:16.436693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.199 qpair failed and we were unable to recover it. 00:35:16.199 [2024-11-02 11:47:16.436879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.199 [2024-11-02 11:47:16.436908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.199 qpair failed and we were unable to recover it. 00:35:16.199 [2024-11-02 11:47:16.437078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.199 [2024-11-02 11:47:16.437105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.199 qpair failed and we were unable to recover it. 00:35:16.199 [2024-11-02 11:47:16.437288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.199 [2024-11-02 11:47:16.437315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.199 qpair failed and we were unable to recover it. 00:35:16.199 [2024-11-02 11:47:16.437467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.199 [2024-11-02 11:47:16.437493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.199 qpair failed and we were unable to recover it. 00:35:16.199 [2024-11-02 11:47:16.437613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.199 [2024-11-02 11:47:16.437645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.199 qpair failed and we were unable to recover it. 00:35:16.199 [2024-11-02 11:47:16.437769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.199 [2024-11-02 11:47:16.437797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.199 qpair failed and we were unable to recover it. 00:35:16.199 [2024-11-02 11:47:16.437967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.199 [2024-11-02 11:47:16.437993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.199 qpair failed and we were unable to recover it. 
00:35:16.199 [2024-11-02 11:47:16.438137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.199 [2024-11-02 11:47:16.438165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.199 qpair failed and we were unable to recover it. 00:35:16.199 [2024-11-02 11:47:16.438318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.199 [2024-11-02 11:47:16.438345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.199 qpair failed and we were unable to recover it. 00:35:16.199 [2024-11-02 11:47:16.438516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.199 [2024-11-02 11:47:16.438542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.199 qpair failed and we were unable to recover it. 00:35:16.199 [2024-11-02 11:47:16.438691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.199 [2024-11-02 11:47:16.438718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.199 qpair failed and we were unable to recover it. 00:35:16.199 [2024-11-02 11:47:16.438863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.199 [2024-11-02 11:47:16.438890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.199 qpair failed and we were unable to recover it. 00:35:16.199 [2024-11-02 11:47:16.439035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.199 [2024-11-02 11:47:16.439061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.199 qpair failed and we were unable to recover it. 00:35:16.199 [2024-11-02 11:47:16.439234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.199 [2024-11-02 11:47:16.439266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.199 qpair failed and we were unable to recover it. 00:35:16.199 [2024-11-02 11:47:16.439423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.199 [2024-11-02 11:47:16.439449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.199 qpair failed and we were unable to recover it. 00:35:16.199 [2024-11-02 11:47:16.439622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.199 [2024-11-02 11:47:16.439648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.199 qpair failed and we were unable to recover it. 00:35:16.199 [2024-11-02 11:47:16.439824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.199 [2024-11-02 11:47:16.439850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.199 qpair failed and we were unable to recover it. 
00:35:16.199 [2024-11-02 11:47:16.439977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.199 [2024-11-02 11:47:16.440003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.199 qpair failed and we were unable to recover it. 00:35:16.199 [2024-11-02 11:47:16.440182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.199 [2024-11-02 11:47:16.440209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.199 qpair failed and we were unable to recover it. 00:35:16.199 [2024-11-02 11:47:16.440326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.199 [2024-11-02 11:47:16.440353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.199 qpair failed and we were unable to recover it. 00:35:16.199 [2024-11-02 11:47:16.440521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.199 [2024-11-02 11:47:16.440547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.199 qpair failed and we were unable to recover it. 00:35:16.199 [2024-11-02 11:47:16.440699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.199 [2024-11-02 11:47:16.440726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.199 qpair failed and we were unable to recover it. 00:35:16.199 [2024-11-02 11:47:16.440904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.199 [2024-11-02 11:47:16.440930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.199 qpair failed and we were unable to recover it. 00:35:16.199 [2024-11-02 11:47:16.441053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.199 [2024-11-02 11:47:16.441081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.199 qpair failed and we were unable to recover it. 00:35:16.199 [2024-11-02 11:47:16.441205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.199 [2024-11-02 11:47:16.441232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.199 qpair failed and we were unable to recover it. 00:35:16.199 [2024-11-02 11:47:16.441389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.199 [2024-11-02 11:47:16.441416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.199 qpair failed and we were unable to recover it. 00:35:16.199 [2024-11-02 11:47:16.441593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.199 [2024-11-02 11:47:16.441619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.199 qpair failed and we were unable to recover it. 
00:35:16.203 [2024-11-02 11:47:16.462625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:16.203 [2024-11-02 11:47:16.462653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420
00:35:16.203 qpair failed and we were unable to recover it.
00:35:16.203 [2024-11-02 11:47:16.462824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:16.203 [2024-11-02 11:47:16.462850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420
00:35:16.203 qpair failed and we were unable to recover it.
00:35:16.203 [2024-11-02 11:47:16.463025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:16.203 [2024-11-02 11:47:16.463051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420
00:35:16.203 qpair failed and we were unable to recover it.
00:35:16.203 [2024-11-02 11:47:16.463207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:16.203 [2024-11-02 11:47:16.463233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420
00:35:16.203 qpair failed and we were unable to recover it.
00:35:16.203 [2024-11-02 11:47:16.463387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:16.203 [2024-11-02 11:47:16.463427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420
00:35:16.203 qpair failed and we were unable to recover it.
00:35:16.203 [2024-11-02 11:47:16.463558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:16.203 [2024-11-02 11:47:16.463587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420
00:35:16.203 qpair failed and we were unable to recover it.
00:35:16.203 [2024-11-02 11:47:16.463732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:16.203 [2024-11-02 11:47:16.463758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420
00:35:16.203 qpair failed and we were unable to recover it.
00:35:16.203 [2024-11-02 11:47:16.463931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:16.203 [2024-11-02 11:47:16.463957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420
00:35:16.203 qpair failed and we were unable to recover it.
00:35:16.203 [2024-11-02 11:47:16.464102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:16.203 [2024-11-02 11:47:16.464127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420
00:35:16.203 qpair failed and we were unable to recover it.
00:35:16.203 [2024-11-02 11:47:16.464272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:16.203 [2024-11-02 11:47:16.464299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420
00:35:16.203 qpair failed and we were unable to recover it.
00:35:16.204 [2024-11-02 11:47:16.470970] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization...
00:35:16.204 [2024-11-02 11:47:16.471017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:16.204 [2024-11-02 11:47:16.471033] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:35:16.204 [2024-11-02 11:47:16.471042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420
00:35:16.204 qpair failed and we were unable to recover it.
[the same posix_sock_create connect() failure (errno = 111) and the matching nvme_tcp_qpair_connect_sock error for tqpair=0x1ddc690 (addr=10.0.0.2, port=4420) repeat continuously through 11:47:16.504, each attempt ending with "qpair failed and we were unable to recover it."]
00:35:16.209 [2024-11-02 11:47:16.505115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.209 [2024-11-02 11:47:16.505139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.209 qpair failed and we were unable to recover it. 00:35:16.209 [2024-11-02 11:47:16.505304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.209 [2024-11-02 11:47:16.505330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.209 qpair failed and we were unable to recover it. 00:35:16.209 [2024-11-02 11:47:16.505503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.209 [2024-11-02 11:47:16.505529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.209 qpair failed and we were unable to recover it. 00:35:16.209 [2024-11-02 11:47:16.505677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.209 [2024-11-02 11:47:16.505702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.209 qpair failed and we were unable to recover it. 00:35:16.209 [2024-11-02 11:47:16.505856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.209 [2024-11-02 11:47:16.505882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.209 qpair failed and we were unable to recover it. 00:35:16.209 [2024-11-02 11:47:16.506030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.209 [2024-11-02 11:47:16.506055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.210 qpair failed and we were unable to recover it. 00:35:16.210 [2024-11-02 11:47:16.506175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.210 [2024-11-02 11:47:16.506202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.210 qpair failed and we were unable to recover it. 00:35:16.210 [2024-11-02 11:47:16.506345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.210 [2024-11-02 11:47:16.506371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.210 qpair failed and we were unable to recover it. 00:35:16.210 [2024-11-02 11:47:16.506522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.210 [2024-11-02 11:47:16.506548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.210 qpair failed and we were unable to recover it. 00:35:16.210 [2024-11-02 11:47:16.506695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.210 [2024-11-02 11:47:16.506721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.210 qpair failed and we were unable to recover it. 
00:35:16.210 [2024-11-02 11:47:16.506886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.210 [2024-11-02 11:47:16.506911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.210 qpair failed and we were unable to recover it. 00:35:16.210 [2024-11-02 11:47:16.507083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.210 [2024-11-02 11:47:16.507108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.210 qpair failed and we were unable to recover it. 00:35:16.210 [2024-11-02 11:47:16.507232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.210 [2024-11-02 11:47:16.507269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.210 qpair failed and we were unable to recover it. 00:35:16.210 [2024-11-02 11:47:16.507419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.210 [2024-11-02 11:47:16.507445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.210 qpair failed and we were unable to recover it. 00:35:16.210 [2024-11-02 11:47:16.507594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.210 [2024-11-02 11:47:16.507619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.210 qpair failed and we were unable to recover it. 00:35:16.210 [2024-11-02 11:47:16.507733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.210 [2024-11-02 11:47:16.507758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.210 qpair failed and we were unable to recover it. 00:35:16.210 [2024-11-02 11:47:16.507932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.210 [2024-11-02 11:47:16.507958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.210 qpair failed and we were unable to recover it. 00:35:16.210 [2024-11-02 11:47:16.508103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.210 [2024-11-02 11:47:16.508129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.210 qpair failed and we were unable to recover it. 00:35:16.210 [2024-11-02 11:47:16.508276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.210 [2024-11-02 11:47:16.508302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.210 qpair failed and we were unable to recover it. 00:35:16.210 [2024-11-02 11:47:16.508475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.210 [2024-11-02 11:47:16.508500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.210 qpair failed and we were unable to recover it. 
00:35:16.210 [2024-11-02 11:47:16.508679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.210 [2024-11-02 11:47:16.508705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.210 qpair failed and we were unable to recover it. 00:35:16.210 [2024-11-02 11:47:16.508874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.210 [2024-11-02 11:47:16.508900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.210 qpair failed and we were unable to recover it. 00:35:16.210 [2024-11-02 11:47:16.509047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.210 [2024-11-02 11:47:16.509073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.210 qpair failed and we were unable to recover it. 00:35:16.210 [2024-11-02 11:47:16.509197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.210 [2024-11-02 11:47:16.509223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.210 qpair failed and we were unable to recover it. 00:35:16.210 [2024-11-02 11:47:16.509348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.210 [2024-11-02 11:47:16.509374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.210 qpair failed and we were unable to recover it. 00:35:16.210 [2024-11-02 11:47:16.509519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.210 [2024-11-02 11:47:16.509545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.210 qpair failed and we were unable to recover it. 00:35:16.210 [2024-11-02 11:47:16.509695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.210 [2024-11-02 11:47:16.509721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.210 qpair failed and we were unable to recover it. 00:35:16.210 [2024-11-02 11:47:16.509892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.210 [2024-11-02 11:47:16.509917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.210 qpair failed and we were unable to recover it. 00:35:16.210 [2024-11-02 11:47:16.510073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.210 [2024-11-02 11:47:16.510099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.210 qpair failed and we were unable to recover it. 00:35:16.210 [2024-11-02 11:47:16.510244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.210 [2024-11-02 11:47:16.510275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.210 qpair failed and we were unable to recover it. 
00:35:16.210 [2024-11-02 11:47:16.510422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.210 [2024-11-02 11:47:16.510447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.210 qpair failed and we were unable to recover it. 00:35:16.210 [2024-11-02 11:47:16.510576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.210 [2024-11-02 11:47:16.510603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.210 qpair failed and we were unable to recover it. 00:35:16.210 [2024-11-02 11:47:16.510756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.210 [2024-11-02 11:47:16.510782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.210 qpair failed and we were unable to recover it. 00:35:16.210 [2024-11-02 11:47:16.510937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.210 [2024-11-02 11:47:16.510962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.210 qpair failed and we were unable to recover it. 00:35:16.210 [2024-11-02 11:47:16.511114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.210 [2024-11-02 11:47:16.511140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.210 qpair failed and we were unable to recover it. 00:35:16.210 [2024-11-02 11:47:16.511271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.210 [2024-11-02 11:47:16.511298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.210 qpair failed and we were unable to recover it. 00:35:16.210 [2024-11-02 11:47:16.511476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.210 [2024-11-02 11:47:16.511501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.210 qpair failed and we were unable to recover it. 00:35:16.210 [2024-11-02 11:47:16.511619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.210 [2024-11-02 11:47:16.511645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.210 qpair failed and we were unable to recover it. 00:35:16.210 [2024-11-02 11:47:16.511765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.210 [2024-11-02 11:47:16.511791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.210 qpair failed and we were unable to recover it. 00:35:16.210 [2024-11-02 11:47:16.511933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.210 [2024-11-02 11:47:16.511966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.210 qpair failed and we were unable to recover it. 
00:35:16.210 [2024-11-02 11:47:16.512115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.210 [2024-11-02 11:47:16.512141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.210 qpair failed and we were unable to recover it. 00:35:16.210 [2024-11-02 11:47:16.512286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.210 [2024-11-02 11:47:16.512313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.210 qpair failed and we were unable to recover it. 00:35:16.210 [2024-11-02 11:47:16.512436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.210 [2024-11-02 11:47:16.512461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.210 qpair failed and we were unable to recover it. 00:35:16.210 [2024-11-02 11:47:16.512633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.210 [2024-11-02 11:47:16.512658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.210 qpair failed and we were unable to recover it. 00:35:16.210 [2024-11-02 11:47:16.512804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.211 [2024-11-02 11:47:16.512830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.211 qpair failed and we were unable to recover it. 00:35:16.211 [2024-11-02 11:47:16.512975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.211 [2024-11-02 11:47:16.513000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.211 qpair failed and we were unable to recover it. 00:35:16.211 [2024-11-02 11:47:16.513150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.211 [2024-11-02 11:47:16.513175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.211 qpair failed and we were unable to recover it. 00:35:16.211 [2024-11-02 11:47:16.513351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.211 [2024-11-02 11:47:16.513377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.211 qpair failed and we were unable to recover it. 00:35:16.211 [2024-11-02 11:47:16.513526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.211 [2024-11-02 11:47:16.513552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.211 qpair failed and we were unable to recover it. 00:35:16.211 [2024-11-02 11:47:16.513696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.211 [2024-11-02 11:47:16.513721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.211 qpair failed and we were unable to recover it. 
00:35:16.211 [2024-11-02 11:47:16.513871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.211 [2024-11-02 11:47:16.513898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.211 qpair failed and we were unable to recover it. 00:35:16.211 [2024-11-02 11:47:16.514029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.211 [2024-11-02 11:47:16.514055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.211 qpair failed and we were unable to recover it. 00:35:16.211 [2024-11-02 11:47:16.514218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.211 [2024-11-02 11:47:16.514244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.211 qpair failed and we were unable to recover it. 00:35:16.211 [2024-11-02 11:47:16.514407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.211 [2024-11-02 11:47:16.514432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.211 qpair failed and we were unable to recover it. 00:35:16.211 [2024-11-02 11:47:16.514552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.211 [2024-11-02 11:47:16.514577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.211 qpair failed and we were unable to recover it. 00:35:16.211 [2024-11-02 11:47:16.514751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.211 [2024-11-02 11:47:16.514776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.211 qpair failed and we were unable to recover it. 00:35:16.211 [2024-11-02 11:47:16.514919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.211 [2024-11-02 11:47:16.514944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.211 qpair failed and we were unable to recover it. 00:35:16.211 [2024-11-02 11:47:16.515090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.211 [2024-11-02 11:47:16.515115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.211 qpair failed and we were unable to recover it. 00:35:16.211 [2024-11-02 11:47:16.515267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.211 [2024-11-02 11:47:16.515293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.211 qpair failed and we were unable to recover it. 00:35:16.211 [2024-11-02 11:47:16.515438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.211 [2024-11-02 11:47:16.515464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.211 qpair failed and we were unable to recover it. 
00:35:16.211 [2024-11-02 11:47:16.515624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.211 [2024-11-02 11:47:16.515651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.211 qpair failed and we were unable to recover it. 00:35:16.211 [2024-11-02 11:47:16.515769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.211 [2024-11-02 11:47:16.515795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.211 qpair failed and we were unable to recover it. 00:35:16.211 [2024-11-02 11:47:16.515912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.211 [2024-11-02 11:47:16.515938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.211 qpair failed and we were unable to recover it. 00:35:16.211 [2024-11-02 11:47:16.516110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.211 [2024-11-02 11:47:16.516136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.211 qpair failed and we were unable to recover it. 00:35:16.211 [2024-11-02 11:47:16.516286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.211 [2024-11-02 11:47:16.516313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.211 qpair failed and we were unable to recover it. 00:35:16.211 [2024-11-02 11:47:16.516426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.211 [2024-11-02 11:47:16.516451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.211 qpair failed and we were unable to recover it. 00:35:16.211 [2024-11-02 11:47:16.516596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.211 [2024-11-02 11:47:16.516621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.211 qpair failed and we were unable to recover it. 00:35:16.211 [2024-11-02 11:47:16.516774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.211 [2024-11-02 11:47:16.516800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.211 qpair failed and we were unable to recover it. 00:35:16.211 [2024-11-02 11:47:16.516952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.211 [2024-11-02 11:47:16.516976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.211 qpair failed and we were unable to recover it. 00:35:16.211 [2024-11-02 11:47:16.517101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.211 [2024-11-02 11:47:16.517126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.211 qpair failed and we were unable to recover it. 
00:35:16.211 [2024-11-02 11:47:16.517244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.211 [2024-11-02 11:47:16.517288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.211 qpair failed and we were unable to recover it. 00:35:16.211 [2024-11-02 11:47:16.517418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.211 [2024-11-02 11:47:16.517443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.211 qpair failed and we were unable to recover it. 00:35:16.211 [2024-11-02 11:47:16.517556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.211 [2024-11-02 11:47:16.517581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.211 qpair failed and we were unable to recover it. 00:35:16.211 [2024-11-02 11:47:16.517708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.211 [2024-11-02 11:47:16.517733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.211 qpair failed and we were unable to recover it. 00:35:16.211 [2024-11-02 11:47:16.517875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.211 [2024-11-02 11:47:16.517900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.211 qpair failed and we were unable to recover it. 00:35:16.211 [2024-11-02 11:47:16.518014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.211 [2024-11-02 11:47:16.518039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.211 qpair failed and we were unable to recover it. 00:35:16.211 [2024-11-02 11:47:16.518161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.211 [2024-11-02 11:47:16.518186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.211 qpair failed and we were unable to recover it. 00:35:16.211 [2024-11-02 11:47:16.518336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.211 [2024-11-02 11:47:16.518361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.211 qpair failed and we were unable to recover it. 00:35:16.211 [2024-11-02 11:47:16.518531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.211 [2024-11-02 11:47:16.518556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.211 qpair failed and we were unable to recover it. 00:35:16.211 [2024-11-02 11:47:16.518666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.211 [2024-11-02 11:47:16.518692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.211 qpair failed and we were unable to recover it. 
00:35:16.211 [2024-11-02 11:47:16.518886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.211 [2024-11-02 11:47:16.518924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.211 qpair failed and we were unable to recover it. 00:35:16.211 [2024-11-02 11:47:16.519110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.211 [2024-11-02 11:47:16.519138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.211 qpair failed and we were unable to recover it. 00:35:16.211 [2024-11-02 11:47:16.519272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.211 [2024-11-02 11:47:16.519300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.211 qpair failed and we were unable to recover it. 00:35:16.211 [2024-11-02 11:47:16.519450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.211 [2024-11-02 11:47:16.519478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.211 qpair failed and we were unable to recover it. 00:35:16.211 [2024-11-02 11:47:16.519600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.211 [2024-11-02 11:47:16.519627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.211 qpair failed and we were unable to recover it. 00:35:16.211 [2024-11-02 11:47:16.519813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.211 [2024-11-02 11:47:16.519840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.211 qpair failed and we were unable to recover it. 00:35:16.211 [2024-11-02 11:47:16.519991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.212 [2024-11-02 11:47:16.520017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.212 qpair failed and we were unable to recover it. 00:35:16.212 [2024-11-02 11:47:16.520171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.212 [2024-11-02 11:47:16.520197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.212 qpair failed and we were unable to recover it. 00:35:16.212 [2024-11-02 11:47:16.520323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.212 [2024-11-02 11:47:16.520350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.212 qpair failed and we were unable to recover it. 00:35:16.212 [2024-11-02 11:47:16.520466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.212 [2024-11-02 11:47:16.520491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.212 qpair failed and we were unable to recover it. 
00:35:16.212 [2024-11-02 11:47:16.520637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.212 [2024-11-02 11:47:16.520661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.212 qpair failed and we were unable to recover it. 00:35:16.212 [2024-11-02 11:47:16.520778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.212 [2024-11-02 11:47:16.520803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.212 qpair failed and we were unable to recover it. 00:35:16.212 [2024-11-02 11:47:16.520927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.212 [2024-11-02 11:47:16.520952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.212 qpair failed and we were unable to recover it. 00:35:16.212 [2024-11-02 11:47:16.521100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.212 [2024-11-02 11:47:16.521126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.212 qpair failed and we were unable to recover it. 00:35:16.212 [2024-11-02 11:47:16.521253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.212 [2024-11-02 11:47:16.521284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.212 qpair failed and we were unable to recover it. 00:35:16.212 [2024-11-02 11:47:16.521414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.212 [2024-11-02 11:47:16.521438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.212 qpair failed and we were unable to recover it. 00:35:16.212 [2024-11-02 11:47:16.521586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.212 [2024-11-02 11:47:16.521611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.212 qpair failed and we were unable to recover it. 00:35:16.212 [2024-11-02 11:47:16.521756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.212 [2024-11-02 11:47:16.521780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.212 qpair failed and we were unable to recover it. 00:35:16.212 [2024-11-02 11:47:16.521928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.212 [2024-11-02 11:47:16.521953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.212 qpair failed and we were unable to recover it. 00:35:16.212 [2024-11-02 11:47:16.522073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.212 [2024-11-02 11:47:16.522099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.212 qpair failed and we were unable to recover it. 
00:35:16.212 [2024-11-02 11:47:16.522245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.212 [2024-11-02 11:47:16.522288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.212 qpair failed and we were unable to recover it. 00:35:16.212 [2024-11-02 11:47:16.522421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.212 [2024-11-02 11:47:16.522446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.212 qpair failed and we were unable to recover it. 00:35:16.212 [2024-11-02 11:47:16.522561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.212 [2024-11-02 11:47:16.522586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.212 qpair failed and we were unable to recover it. 00:35:16.212 [2024-11-02 11:47:16.522732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.212 [2024-11-02 11:47:16.522758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.212 qpair failed and we were unable to recover it. 00:35:16.212 [2024-11-02 11:47:16.522897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.212 [2024-11-02 11:47:16.522922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.212 qpair failed and we were unable to recover it. 00:35:16.212 [2024-11-02 11:47:16.523041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.212 [2024-11-02 11:47:16.523066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.212 qpair failed and we were unable to recover it. 00:35:16.212 [2024-11-02 11:47:16.523239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.212 [2024-11-02 11:47:16.523271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.212 qpair failed and we were unable to recover it. 00:35:16.212 [2024-11-02 11:47:16.523397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.212 [2024-11-02 11:47:16.523422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.212 qpair failed and we were unable to recover it. 00:35:16.212 [2024-11-02 11:47:16.523570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.212 [2024-11-02 11:47:16.523595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.212 qpair failed and we were unable to recover it. 00:35:16.212 [2024-11-02 11:47:16.523770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.212 [2024-11-02 11:47:16.523795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.212 qpair failed and we were unable to recover it. 
00:35:16.212 [2024-11-02 11:47:16.523941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.212 [2024-11-02 11:47:16.523966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.212 qpair failed and we were unable to recover it. 00:35:16.212 [2024-11-02 11:47:16.524117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.212 [2024-11-02 11:47:16.524143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.212 qpair failed and we were unable to recover it. 00:35:16.212 [2024-11-02 11:47:16.524291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.212 [2024-11-02 11:47:16.524317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.212 qpair failed and we were unable to recover it. 00:35:16.212 [2024-11-02 11:47:16.524458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.212 [2024-11-02 11:47:16.524483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.212 qpair failed and we were unable to recover it. 00:35:16.212 [2024-11-02 11:47:16.524625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.212 [2024-11-02 11:47:16.524650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.212 qpair failed and we were unable to recover it. 00:35:16.212 [2024-11-02 11:47:16.524795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.212 [2024-11-02 11:47:16.524820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.212 qpair failed and we were unable to recover it. 00:35:16.212 [2024-11-02 11:47:16.524964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.212 [2024-11-02 11:47:16.524988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.212 qpair failed and we were unable to recover it. 00:35:16.212 [2024-11-02 11:47:16.525168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.212 [2024-11-02 11:47:16.525193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.212 qpair failed and we were unable to recover it. 00:35:16.212 [2024-11-02 11:47:16.525337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.212 [2024-11-02 11:47:16.525363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.212 qpair failed and we were unable to recover it. 00:35:16.212 [2024-11-02 11:47:16.525490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.212 [2024-11-02 11:47:16.525515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.212 qpair failed and we were unable to recover it. 
00:35:16.212 [2024-11-02 11:47:16.525664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.212 [2024-11-02 11:47:16.525688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.212 qpair failed and we were unable to recover it. 00:35:16.212 [2024-11-02 11:47:16.525848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.212 [2024-11-02 11:47:16.525873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.212 qpair failed and we were unable to recover it. 00:35:16.212 [2024-11-02 11:47:16.526019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.212 [2024-11-02 11:47:16.526044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.212 qpair failed and we were unable to recover it. 00:35:16.212 [2024-11-02 11:47:16.526188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.212 [2024-11-02 11:47:16.526213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.212 qpair failed and we were unable to recover it. 00:35:16.212 [2024-11-02 11:47:16.526340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.212 [2024-11-02 11:47:16.526366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.212 qpair failed and we were unable to recover it. 00:35:16.212 [2024-11-02 11:47:16.526495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.212 [2024-11-02 11:47:16.526520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.212 qpair failed and we were unable to recover it. 00:35:16.212 [2024-11-02 11:47:16.526689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.212 [2024-11-02 11:47:16.526714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.212 qpair failed and we were unable to recover it. 00:35:16.212 [2024-11-02 11:47:16.526863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.212 [2024-11-02 11:47:16.526890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.212 qpair failed and we were unable to recover it. 00:35:16.212 [2024-11-02 11:47:16.527039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.213 [2024-11-02 11:47:16.527065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.213 qpair failed and we were unable to recover it. 00:35:16.213 [2024-11-02 11:47:16.527210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.213 [2024-11-02 11:47:16.527235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.213 qpair failed and we were unable to recover it. 
00:35:16.213 [2024-11-02 11:47:16.527418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.213 [2024-11-02 11:47:16.527443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.213 qpair failed and we were unable to recover it. 00:35:16.213 [2024-11-02 11:47:16.527566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.213 [2024-11-02 11:47:16.527591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.213 qpair failed and we were unable to recover it. 00:35:16.213 [2024-11-02 11:47:16.527717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.213 [2024-11-02 11:47:16.527742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.213 qpair failed and we were unable to recover it. 00:35:16.213 [2024-11-02 11:47:16.527873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.213 [2024-11-02 11:47:16.527899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.213 qpair failed and we were unable to recover it. 00:35:16.213 [2024-11-02 11:47:16.528047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.213 [2024-11-02 11:47:16.528076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.213 qpair failed and we were unable to recover it. 00:35:16.213 [2024-11-02 11:47:16.528225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.213 [2024-11-02 11:47:16.528251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.213 qpair failed and we were unable to recover it. 00:35:16.213 [2024-11-02 11:47:16.528406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.213 [2024-11-02 11:47:16.528432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.213 qpair failed and we were unable to recover it. 00:35:16.213 [2024-11-02 11:47:16.528578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.213 [2024-11-02 11:47:16.528603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.213 qpair failed and we were unable to recover it. 00:35:16.213 [2024-11-02 11:47:16.528751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.213 [2024-11-02 11:47:16.528776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.213 qpair failed and we were unable to recover it. 00:35:16.213 [2024-11-02 11:47:16.528918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.213 [2024-11-02 11:47:16.528943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.213 qpair failed and we were unable to recover it. 
00:35:16.213 [2024-11-02 11:47:16.529066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.213 [2024-11-02 11:47:16.529091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.213 qpair failed and we were unable to recover it. 00:35:16.213 [2024-11-02 11:47:16.529268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.213 [2024-11-02 11:47:16.529294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.213 qpair failed and we were unable to recover it. 00:35:16.213 [2024-11-02 11:47:16.529442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.213 [2024-11-02 11:47:16.529468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.213 qpair failed and we were unable to recover it. 00:35:16.213 [2024-11-02 11:47:16.529613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.213 [2024-11-02 11:47:16.529637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.213 qpair failed and we were unable to recover it. 00:35:16.213 [2024-11-02 11:47:16.529787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.213 [2024-11-02 11:47:16.529812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.213 qpair failed and we were unable to recover it. 00:35:16.213 [2024-11-02 11:47:16.529965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.213 [2024-11-02 11:47:16.529990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.213 qpair failed and we were unable to recover it. 00:35:16.213 [2024-11-02 11:47:16.530138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.213 [2024-11-02 11:47:16.530163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.213 qpair failed and we were unable to recover it. 00:35:16.213 [2024-11-02 11:47:16.530284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.213 [2024-11-02 11:47:16.530309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.213 qpair failed and we were unable to recover it. 00:35:16.213 [2024-11-02 11:47:16.530458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.213 [2024-11-02 11:47:16.530483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.213 qpair failed and we were unable to recover it. 00:35:16.213 [2024-11-02 11:47:16.530629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.213 [2024-11-02 11:47:16.530654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.213 qpair failed and we were unable to recover it. 
00:35:16.213 [2024-11-02 11:47:16.530799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.213 [2024-11-02 11:47:16.530823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.213 qpair failed and we were unable to recover it. 00:35:16.213 [2024-11-02 11:47:16.530998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.213 [2024-11-02 11:47:16.531023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.213 qpair failed and we were unable to recover it. 00:35:16.213 [2024-11-02 11:47:16.531135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.213 [2024-11-02 11:47:16.531161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.213 qpair failed and we were unable to recover it. 00:35:16.213 [2024-11-02 11:47:16.531312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.213 [2024-11-02 11:47:16.531337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.213 qpair failed and we were unable to recover it. 00:35:16.213 [2024-11-02 11:47:16.531483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.213 [2024-11-02 11:47:16.531508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.213 qpair failed and we were unable to recover it. 00:35:16.213 [2024-11-02 11:47:16.531628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.213 [2024-11-02 11:47:16.531653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.213 qpair failed and we were unable to recover it. 00:35:16.213 [2024-11-02 11:47:16.531774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.213 [2024-11-02 11:47:16.531800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.213 qpair failed and we were unable to recover it. 00:35:16.213 [2024-11-02 11:47:16.531922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.213 [2024-11-02 11:47:16.531947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.213 qpair failed and we were unable to recover it. 00:35:16.213 [2024-11-02 11:47:16.532097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.213 [2024-11-02 11:47:16.532122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.213 qpair failed and we were unable to recover it. 00:35:16.213 [2024-11-02 11:47:16.532236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.213 [2024-11-02 11:47:16.532270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.213 qpair failed and we were unable to recover it. 
00:35:16.213 [2024-11-02 11:47:16.532408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.213 [2024-11-02 11:47:16.532433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.213 qpair failed and we were unable to recover it. 00:35:16.213 [2024-11-02 11:47:16.532566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.213 [2024-11-02 11:47:16.532596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.213 qpair failed and we were unable to recover it. 00:35:16.213 [2024-11-02 11:47:16.532741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.213 [2024-11-02 11:47:16.532767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.213 qpair failed and we were unable to recover it. 00:35:16.213 [2024-11-02 11:47:16.532911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.213 [2024-11-02 11:47:16.532936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.213 qpair failed and we were unable to recover it. 00:35:16.213 [2024-11-02 11:47:16.533084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.213 [2024-11-02 11:47:16.533109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.213 qpair failed and we were unable to recover it. 00:35:16.213 [2024-11-02 11:47:16.533263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.213 [2024-11-02 11:47:16.533290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.214 qpair failed and we were unable to recover it. 00:35:16.214 [2024-11-02 11:47:16.533429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.214 [2024-11-02 11:47:16.533454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.214 qpair failed and we were unable to recover it. 00:35:16.214 [2024-11-02 11:47:16.533579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.214 [2024-11-02 11:47:16.533605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.214 qpair failed and we were unable to recover it. 00:35:16.214 [2024-11-02 11:47:16.533725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.214 [2024-11-02 11:47:16.533750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.214 qpair failed and we were unable to recover it. 00:35:16.214 [2024-11-02 11:47:16.533926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.214 [2024-11-02 11:47:16.533952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.214 qpair failed and we were unable to recover it. 
00:35:16.214 [2024-11-02 11:47:16.534125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.214 [2024-11-02 11:47:16.534151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.214 qpair failed and we were unable to recover it. 00:35:16.214 [2024-11-02 11:47:16.534296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.214 [2024-11-02 11:47:16.534322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.214 qpair failed and we were unable to recover it. 00:35:16.214 [2024-11-02 11:47:16.534439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.214 [2024-11-02 11:47:16.534463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.214 qpair failed and we were unable to recover it. 00:35:16.214 [2024-11-02 11:47:16.534580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.214 [2024-11-02 11:47:16.534605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.214 qpair failed and we were unable to recover it. 00:35:16.214 [2024-11-02 11:47:16.534725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.214 [2024-11-02 11:47:16.534751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.214 qpair failed and we were unable to recover it. 00:35:16.214 [2024-11-02 11:47:16.534904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.214 [2024-11-02 11:47:16.534929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.214 qpair failed and we were unable to recover it. 00:35:16.214 [2024-11-02 11:47:16.535052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.214 [2024-11-02 11:47:16.535077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.214 qpair failed and we were unable to recover it. 00:35:16.214 [2024-11-02 11:47:16.535230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.214 [2024-11-02 11:47:16.535261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.214 qpair failed and we were unable to recover it. 00:35:16.214 [2024-11-02 11:47:16.535423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.214 [2024-11-02 11:47:16.535449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.214 qpair failed and we were unable to recover it. 00:35:16.214 [2024-11-02 11:47:16.535571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.214 [2024-11-02 11:47:16.535596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.214 qpair failed and we were unable to recover it. 
00:35:16.214 [2024-11-02 11:47:16.535746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.214 [2024-11-02 11:47:16.535771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.214 qpair failed and we were unable to recover it. 00:35:16.214 [2024-11-02 11:47:16.535901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.214 [2024-11-02 11:47:16.535926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.214 qpair failed and we were unable to recover it. 00:35:16.214 [2024-11-02 11:47:16.536069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.214 [2024-11-02 11:47:16.536094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.214 qpair failed and we were unable to recover it. 00:35:16.214 [2024-11-02 11:47:16.536211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.214 [2024-11-02 11:47:16.536238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.214 qpair failed and we were unable to recover it. 00:35:16.214 [2024-11-02 11:47:16.536394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.214 [2024-11-02 11:47:16.536419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.214 qpair failed and we were unable to recover it. 00:35:16.214 [2024-11-02 11:47:16.536542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.214 [2024-11-02 11:47:16.536567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.214 qpair failed and we were unable to recover it. 00:35:16.214 [2024-11-02 11:47:16.536719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.214 [2024-11-02 11:47:16.536744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.214 qpair failed and we were unable to recover it. 00:35:16.214 [2024-11-02 11:47:16.536889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.214 [2024-11-02 11:47:16.536914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.214 qpair failed and we were unable to recover it. 00:35:16.214 [2024-11-02 11:47:16.537051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.214 [2024-11-02 11:47:16.537081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.214 qpair failed and we were unable to recover it. 00:35:16.214 [2024-11-02 11:47:16.537212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.214 [2024-11-02 11:47:16.537237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.214 qpair failed and we were unable to recover it. 
00:35:16.214 [2024-11-02 11:47:16.537359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.214 [2024-11-02 11:47:16.537385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.214 qpair failed and we were unable to recover it. 00:35:16.214 [2024-11-02 11:47:16.537497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.214 [2024-11-02 11:47:16.537523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.214 qpair failed and we were unable to recover it. 00:35:16.214 [2024-11-02 11:47:16.537692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.214 [2024-11-02 11:47:16.537717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.214 qpair failed and we were unable to recover it. 00:35:16.214 [2024-11-02 11:47:16.537865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.215 [2024-11-02 11:47:16.537890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.215 qpair failed and we were unable to recover it. 00:35:16.215 [2024-11-02 11:47:16.538036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.215 [2024-11-02 11:47:16.538062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.215 qpair failed and we were unable to recover it. 00:35:16.215 [2024-11-02 11:47:16.538203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.215 [2024-11-02 11:47:16.538229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.215 qpair failed and we were unable to recover it. 00:35:16.215 [2024-11-02 11:47:16.538358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.215 [2024-11-02 11:47:16.538384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.215 qpair failed and we were unable to recover it. 00:35:16.215 [2024-11-02 11:47:16.538536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.215 [2024-11-02 11:47:16.538561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.215 qpair failed and we were unable to recover it. 00:35:16.215 [2024-11-02 11:47:16.538733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.215 [2024-11-02 11:47:16.538758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.215 qpair failed and we were unable to recover it. 00:35:16.215 [2024-11-02 11:47:16.538906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.215 [2024-11-02 11:47:16.538931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.215 qpair failed and we were unable to recover it. 
00:35:16.215 [2024-11-02 11:47:16.539097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.215 [2024-11-02 11:47:16.539123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.215 qpair failed and we were unable to recover it. 00:35:16.215 [2024-11-02 11:47:16.539295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.215 [2024-11-02 11:47:16.539321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.215 qpair failed and we were unable to recover it. 00:35:16.215 [2024-11-02 11:47:16.539444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.215 [2024-11-02 11:47:16.539469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.215 qpair failed and we were unable to recover it. 00:35:16.215 [2024-11-02 11:47:16.539585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.215 [2024-11-02 11:47:16.539610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.215 qpair failed and we were unable to recover it. 00:35:16.215 [2024-11-02 11:47:16.539730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.215 [2024-11-02 11:47:16.539756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.215 qpair failed and we were unable to recover it. 00:35:16.215 [2024-11-02 11:47:16.539884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.215 [2024-11-02 11:47:16.539909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.215 qpair failed and we were unable to recover it. 00:35:16.215 [2024-11-02 11:47:16.540028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.215 [2024-11-02 11:47:16.540053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.215 qpair failed and we were unable to recover it. 00:35:16.215 [2024-11-02 11:47:16.541467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.215 [2024-11-02 11:47:16.541498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.215 qpair failed and we were unable to recover it. 00:35:16.215 [2024-11-02 11:47:16.541656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.215 [2024-11-02 11:47:16.541683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.215 qpair failed and we were unable to recover it. 00:35:16.215 [2024-11-02 11:47:16.541839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.215 [2024-11-02 11:47:16.541865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.215 qpair failed and we were unable to recover it. 
00:35:16.215 [2024-11-02 11:47:16.541984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.215 [2024-11-02 11:47:16.542008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.215 qpair failed and we were unable to recover it. 00:35:16.215 [2024-11-02 11:47:16.542156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.215 [2024-11-02 11:47:16.542181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.215 qpair failed and we were unable to recover it. 00:35:16.215 [2024-11-02 11:47:16.542329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.215 [2024-11-02 11:47:16.542356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.215 qpair failed and we were unable to recover it. 00:35:16.215 [2024-11-02 11:47:16.542533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.215 [2024-11-02 11:47:16.542570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.215 qpair failed and we were unable to recover it. 00:35:16.215 [2024-11-02 11:47:16.542743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.215 [2024-11-02 11:47:16.542769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.215 qpair failed and we were unable to recover it. 00:35:16.215 [2024-11-02 11:47:16.542943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.215 [2024-11-02 11:47:16.542968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.215 qpair failed and we were unable to recover it. 00:35:16.215 [2024-11-02 11:47:16.543096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.215 [2024-11-02 11:47:16.543122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.215 qpair failed and we were unable to recover it. 00:35:16.215 [2024-11-02 11:47:16.543252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.215 [2024-11-02 11:47:16.543285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.215 qpair failed and we were unable to recover it. 00:35:16.215 [2024-11-02 11:47:16.543407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.215 [2024-11-02 11:47:16.543434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.215 qpair failed and we were unable to recover it. 00:35:16.215 [2024-11-02 11:47:16.543582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.215 [2024-11-02 11:47:16.543607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.215 qpair failed and we were unable to recover it. 
00:35:16.215 [2024-11-02 11:47:16.543751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.215 [2024-11-02 11:47:16.543777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.215 qpair failed and we were unable to recover it. 00:35:16.215 [2024-11-02 11:47:16.543900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.215 [2024-11-02 11:47:16.543927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.215 qpair failed and we were unable to recover it. 00:35:16.215 [2024-11-02 11:47:16.544077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.215 [2024-11-02 11:47:16.544103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.215 qpair failed and we were unable to recover it. 00:35:16.215 [2024-11-02 11:47:16.544252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.215 [2024-11-02 11:47:16.544285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.215 qpair failed and we were unable to recover it. 00:35:16.215 [2024-11-02 11:47:16.544432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.215 [2024-11-02 11:47:16.544458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.215 qpair failed and we were unable to recover it. 00:35:16.215 [2024-11-02 11:47:16.544610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.215 [2024-11-02 11:47:16.544635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.215 qpair failed and we were unable to recover it. 00:35:16.215 [2024-11-02 11:47:16.544770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.215 [2024-11-02 11:47:16.544795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.215 qpair failed and we were unable to recover it. 00:35:16.215 [2024-11-02 11:47:16.544967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.215 [2024-11-02 11:47:16.544993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.215 qpair failed and we were unable to recover it. 00:35:16.215 [2024-11-02 11:47:16.545120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.215 [2024-11-02 11:47:16.545144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.215 qpair failed and we were unable to recover it. 00:35:16.215 [2024-11-02 11:47:16.545288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.215 [2024-11-02 11:47:16.545329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.215 qpair failed and we were unable to recover it. 
00:35:16.215 [2024-11-02 11:47:16.545455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.215 [2024-11-02 11:47:16.545483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.215 qpair failed and we were unable to recover it. 00:35:16.215 [2024-11-02 11:47:16.545615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.215 [2024-11-02 11:47:16.545643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.215 qpair failed and we were unable to recover it. 00:35:16.215 [2024-11-02 11:47:16.545819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.216 [2024-11-02 11:47:16.545846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.216 qpair failed and we were unable to recover it. 00:35:16.216 [2024-11-02 11:47:16.545991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.216 [2024-11-02 11:47:16.546017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.216 qpair failed and we were unable to recover it. 00:35:16.216 [2024-11-02 11:47:16.546160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.216 [2024-11-02 11:47:16.546187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.216 qpair failed and we were unable to recover it. 00:35:16.216 [2024-11-02 11:47:16.546349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.216 [2024-11-02 11:47:16.546377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.216 qpair failed and we were unable to recover it. 00:35:16.216 [2024-11-02 11:47:16.546526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.216 [2024-11-02 11:47:16.546554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.216 qpair failed and we were unable to recover it. 00:35:16.216 [2024-11-02 11:47:16.546679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.216 [2024-11-02 11:47:16.546706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.216 qpair failed and we were unable to recover it. 00:35:16.216 [2024-11-02 11:47:16.546854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.216 [2024-11-02 11:47:16.546880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.216 qpair failed and we were unable to recover it. 00:35:16.216 [2024-11-02 11:47:16.547024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.216 [2024-11-02 11:47:16.547049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.216 qpair failed and we were unable to recover it. 
00:35:16.216 [2024-11-02 11:47:16.547976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.216 [2024-11-02 11:47:16.548006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.216 qpair failed and we were unable to recover it. 00:35:16.216 [2024-11-02 11:47:16.548151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.216 [2024-11-02 11:47:16.548177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.216 qpair failed and we were unable to recover it. 00:35:16.216 [2024-11-02 11:47:16.548333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.216 [2024-11-02 11:47:16.548359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.216 qpair failed and we were unable to recover it. 00:35:16.216 [2024-11-02 11:47:16.548485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.216 [2024-11-02 11:47:16.548510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.216 qpair failed and we were unable to recover it. 00:35:16.216 [2024-11-02 11:47:16.548632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.216 [2024-11-02 11:47:16.548658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.216 qpair failed and we were unable to recover it. 00:35:16.216 [2024-11-02 11:47:16.548768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.216 [2024-11-02 11:47:16.548793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.216 qpair failed and we were unable to recover it. 00:35:16.216 [2024-11-02 11:47:16.548914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.216 [2024-11-02 11:47:16.548941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.216 qpair failed and we were unable to recover it. 00:35:16.216 [2024-11-02 11:47:16.549111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.216 [2024-11-02 11:47:16.549137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.216 qpair failed and we were unable to recover it. 00:35:16.216 [2024-11-02 11:47:16.549281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.216 [2024-11-02 11:47:16.549307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.216 qpair failed and we were unable to recover it. 00:35:16.216 [2024-11-02 11:47:16.549450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.216 [2024-11-02 11:47:16.549475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.216 qpair failed and we were unable to recover it. 
00:35:16.216 [2024-11-02 11:47:16.549614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.216 [2024-11-02 11:47:16.549640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.216 qpair failed and we were unable to recover it. 00:35:16.216 [2024-11-02 11:47:16.549765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.216 [2024-11-02 11:47:16.549791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.216 qpair failed and we were unable to recover it. 00:35:16.216 [2024-11-02 11:47:16.549936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.216 [2024-11-02 11:47:16.549961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.216 qpair failed and we were unable to recover it. 00:35:16.216 [2024-11-02 11:47:16.550134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.216 [2024-11-02 11:47:16.550180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.216 qpair failed and we were unable to recover it. 00:35:16.216 [2024-11-02 11:47:16.550351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.216 [2024-11-02 11:47:16.550380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.216 qpair failed and we were unable to recover it. 00:35:16.216 [2024-11-02 11:47:16.550529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.216 [2024-11-02 11:47:16.550556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.216 qpair failed and we were unable to recover it. 00:35:16.216 [2024-11-02 11:47:16.550704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.216 [2024-11-02 11:47:16.550744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:16.216 qpair failed and we were unable to recover it. 00:35:16.216 [2024-11-02 11:47:16.550873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.216 [2024-11-02 11:47:16.550900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:16.216 qpair failed and we were unable to recover it. 00:35:16.216 [2024-11-02 11:47:16.551093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.216 [2024-11-02 11:47:16.551123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:16.216 qpair failed and we were unable to recover it. 00:35:16.216 [2024-11-02 11:47:16.551283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.216 [2024-11-02 11:47:16.551312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:16.216 qpair failed and we were unable to recover it. 
00:35:16.216 [2024-11-02 11:47:16.551461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.216 [2024-11-02 11:47:16.551487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:16.216 qpair failed and we were unable to recover it. 00:35:16.216 [2024-11-02 11:47:16.551641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.216 [2024-11-02 11:47:16.551667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:16.216 qpair failed and we were unable to recover it. 00:35:16.216 [2024-11-02 11:47:16.551817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.216 [2024-11-02 11:47:16.551843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:16.216 qpair failed and we were unable to recover it. 00:35:16.216 [2024-11-02 11:47:16.551968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.216 [2024-11-02 11:47:16.551994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:16.216 qpair failed and we were unable to recover it. 00:35:16.216 [2024-11-02 11:47:16.552169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.216 [2024-11-02 11:47:16.552196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:16.216 qpair failed and we were unable to recover it. 00:35:16.216 [2024-11-02 11:47:16.552366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.216 [2024-11-02 11:47:16.552394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:16.216 qpair failed and we were unable to recover it. 00:35:16.216 [2024-11-02 11:47:16.552522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.216 [2024-11-02 11:47:16.552566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:16.216 qpair failed and we were unable to recover it. 00:35:16.216 [2024-11-02 11:47:16.552730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:16.216 [2024-11-02 11:47:16.552740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.216 [2024-11-02 11:47:16.552776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:16.216 qpair failed and we were unable to recover it. 00:35:16.216 [2024-11-02 11:47:16.554167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.216 [2024-11-02 11:47:16.554213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.216 qpair failed and we were unable to recover it. 
00:35:16.216 [2024-11-02 11:47:16.554375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.216 [2024-11-02 11:47:16.554407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.216 qpair failed and we were unable to recover it. 00:35:16.217 [2024-11-02 11:47:16.554563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.217 [2024-11-02 11:47:16.554591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.217 qpair failed and we were unable to recover it. 00:35:16.500 [2024-11-02 11:47:16.554758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.500 [2024-11-02 11:47:16.554786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.500 qpair failed and we were unable to recover it. 00:35:16.500 [2024-11-02 11:47:16.554942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.500 [2024-11-02 11:47:16.554969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.500 qpair failed and we were unable to recover it. 00:35:16.500 [2024-11-02 11:47:16.555120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.500 [2024-11-02 11:47:16.555148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.500 qpair failed and we were unable to recover it. 00:35:16.500 [2024-11-02 11:47:16.555314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.500 [2024-11-02 11:47:16.555342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.500 qpair failed and we were unable to recover it. 00:35:16.500 [2024-11-02 11:47:16.555497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.500 [2024-11-02 11:47:16.555523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.500 qpair failed and we were unable to recover it. 00:35:16.500 [2024-11-02 11:47:16.555674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.500 [2024-11-02 11:47:16.555704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.500 qpair failed and we were unable to recover it. 00:35:16.500 [2024-11-02 11:47:16.555875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.500 [2024-11-02 11:47:16.555902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.500 qpair failed and we were unable to recover it. 00:35:16.500 [2024-11-02 11:47:16.556072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.500 [2024-11-02 11:47:16.556099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.500 qpair failed and we were unable to recover it. 
00:35:16.500 [2024-11-02 11:47:16.556245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.500 [2024-11-02 11:47:16.556281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.500 qpair failed and we were unable to recover it. 00:35:16.500 [2024-11-02 11:47:16.556411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.500 [2024-11-02 11:47:16.556439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.500 qpair failed and we were unable to recover it. 00:35:16.500 [2024-11-02 11:47:16.556564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.500 [2024-11-02 11:47:16.556590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.500 qpair failed and we were unable to recover it. 00:35:16.500 [2024-11-02 11:47:16.556760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.500 [2024-11-02 11:47:16.556791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.500 qpair failed and we were unable to recover it. 00:35:16.500 [2024-11-02 11:47:16.556965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.500 [2024-11-02 11:47:16.556991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.500 qpair failed and we were unable to recover it. 00:35:16.500 [2024-11-02 11:47:16.557117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.500 [2024-11-02 11:47:16.557144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.500 qpair failed and we were unable to recover it. 00:35:16.500 [2024-11-02 11:47:16.557278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.500 [2024-11-02 11:47:16.557306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.500 qpair failed and we were unable to recover it. 00:35:16.500 [2024-11-02 11:47:16.557465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.501 [2024-11-02 11:47:16.557491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.501 qpair failed and we were unable to recover it. 00:35:16.501 [2024-11-02 11:47:16.557672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.501 [2024-11-02 11:47:16.557699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.501 qpair failed and we were unable to recover it. 00:35:16.501 [2024-11-02 11:47:16.557849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.501 [2024-11-02 11:47:16.557876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.501 qpair failed and we were unable to recover it. 
00:35:16.501 [2024-11-02 11:47:16.557996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.501 [2024-11-02 11:47:16.558024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.501 qpair failed and we were unable to recover it. 00:35:16.501 [2024-11-02 11:47:16.558178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.501 [2024-11-02 11:47:16.558205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.501 qpair failed and we were unable to recover it. 00:35:16.501 [2024-11-02 11:47:16.558370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.501 [2024-11-02 11:47:16.558399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.501 qpair failed and we were unable to recover it. 00:35:16.501 [2024-11-02 11:47:16.558522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.501 [2024-11-02 11:47:16.558549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.501 qpair failed and we were unable to recover it. 00:35:16.501 [2024-11-02 11:47:16.558696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.501 [2024-11-02 11:47:16.558722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.501 qpair failed and we were unable to recover it. 00:35:16.501 [2024-11-02 11:47:16.558842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.501 [2024-11-02 11:47:16.558870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.501 qpair failed and we were unable to recover it. 00:35:16.501 [2024-11-02 11:47:16.559024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.501 [2024-11-02 11:47:16.559051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.501 qpair failed and we were unable to recover it. 00:35:16.501 [2024-11-02 11:47:16.559181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.501 [2024-11-02 11:47:16.559208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.501 qpair failed and we were unable to recover it. 00:35:16.501 [2024-11-02 11:47:16.559377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.501 [2024-11-02 11:47:16.559405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.501 qpair failed and we were unable to recover it. 00:35:16.501 [2024-11-02 11:47:16.559600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.501 [2024-11-02 11:47:16.559627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.501 qpair failed and we were unable to recover it. 
00:35:16.501 [2024-11-02 11:47:16.559776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:16.501 [2024-11-02 11:47:16.559804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420
00:35:16.501 qpair failed and we were unable to recover it.
00:35:16.503 [2024-11-02 11:47:16.576136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:16.503 [2024-11-02 11:47:16.576178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420
00:35:16.503 qpair failed and we were unable to recover it.
00:35:16.503 [2024-11-02 11:47:16.577387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:16.503 [2024-11-02 11:47:16.577428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420
00:35:16.503 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error / qpair failed and we were unable to recover it) repeats for every connect attempt logged between 11:47:16.559 and 11:47:16.605, cycling through tqpair values 0x7f9cc0000b90, 0x1ddc690, and 0x7f9cc4000b90; every attempt targets addr=10.0.0.2, port=4420 and none of the qpairs recover ...]
00:35:16.506 [2024-11-02 11:47:16.605725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:16.506 [2024-11-02 11:47:16.605749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420
00:35:16.506 qpair failed and we were unable to recover it.
00:35:16.507 [2024-11-02 11:47:16.605900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:16.507 [2024-11-02 11:47:16.605925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420
00:35:16.507 qpair failed and we were unable to recover it.
00:35:16.507 [2024-11-02 11:47:16.606035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:16.507 [2024-11-02 11:47:16.606062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420
00:35:16.507 qpair failed and we were unable to recover it.
00:35:16.507 [2024-11-02 11:47:16.606203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:16.507 [2024-11-02 11:47:16.606244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420
00:35:16.507 qpair failed and we were unable to recover it.
00:35:16.507 [2024-11-02 11:47:16.606349] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:35:16.507 [2024-11-02 11:47:16.606382] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:35:16.507 [2024-11-02 11:47:16.606401] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:35:16.507 [2024-11-02 11:47:16.606413] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:35:16.507 [2024-11-02 11:47:16.606424] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:35:16.507 [2024-11-02 11:47:16.606379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:16.507 [2024-11-02 11:47:16.606407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420
00:35:16.507 qpair failed and we were unable to recover it.
00:35:16.507 [2024-11-02 11:47:16.606532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:16.507 [2024-11-02 11:47:16.606559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420
00:35:16.507 qpair failed and we were unable to recover it.
00:35:16.507 [2024-11-02 11:47:16.606682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:16.507 [2024-11-02 11:47:16.606710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420
00:35:16.507 qpair failed and we were unable to recover it.
00:35:16.507 [2024-11-02 11:47:16.606857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:16.507 [2024-11-02 11:47:16.606884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420
00:35:16.507 qpair failed and we were unable to recover it.
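The app.c notices above spell out how to pull trace data for this run. As a rough sketch using nothing beyond the command and path quoted in those notices (the /tmp destination is only an arbitrary example), capturing the trace from the test host would look like:

  # Snapshot the live nvmf tracepoints for app instance 0, as the notice suggests
  spdk_trace -s nvmf -i 0
  # Or keep the shared-memory trace file for offline analysis/debug
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0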
00:35:16.507 [2024-11-02 11:47:16.607030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.507 [2024-11-02 11:47:16.607057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.507 qpair failed and we were unable to recover it. 00:35:16.507 [2024-11-02 11:47:16.607234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.507 [2024-11-02 11:47:16.607272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.507 qpair failed and we were unable to recover it. 00:35:16.507 [2024-11-02 11:47:16.607399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.507 [2024-11-02 11:47:16.607424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.507 qpair failed and we were unable to recover it. 00:35:16.507 [2024-11-02 11:47:16.607550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.507 [2024-11-02 11:47:16.607582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.507 qpair failed and we were unable to recover it. 00:35:16.507 [2024-11-02 11:47:16.607741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.507 [2024-11-02 11:47:16.607773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.507 qpair failed and we were unable to recover it. 00:35:16.507 [2024-11-02 11:47:16.607901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.507 [2024-11-02 11:47:16.607928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.507 qpair failed and we were unable to recover it. 00:35:16.507 [2024-11-02 11:47:16.608080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.507 [2024-11-02 11:47:16.608107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.507 qpair failed and we were unable to recover it. 00:35:16.507 [2024-11-02 11:47:16.608069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:35:16.507 [2024-11-02 11:47:16.608188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:35:16.507 [2024-11-02 11:47:16.608267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.507 [2024-11-02 11:47:16.608293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.507 qpair failed and we were unable to recover it. 00:35:16.507 [2024-11-02 11:47:16.608271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:35:16.507 [2024-11-02 11:47:16.608276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:35:16.507 [2024-11-02 11:47:16.608430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.507 [2024-11-02 11:47:16.608456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.507 qpair failed and we were unable to recover it. 
00:35:16.507 [2024-11-02 11:47:16.608584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.507 [2024-11-02 11:47:16.608609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.507 qpair failed and we were unable to recover it. 00:35:16.507 [2024-11-02 11:47:16.608734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.507 [2024-11-02 11:47:16.608759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.507 qpair failed and we were unable to recover it. 00:35:16.507 [2024-11-02 11:47:16.608905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.507 [2024-11-02 11:47:16.608930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.507 qpair failed and we were unable to recover it. 00:35:16.507 [2024-11-02 11:47:16.609081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.507 [2024-11-02 11:47:16.609105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.507 qpair failed and we were unable to recover it. 00:35:16.507 [2024-11-02 11:47:16.609219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.507 [2024-11-02 11:47:16.609245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.507 qpair failed and we were unable to recover it. 00:35:16.507 [2024-11-02 11:47:16.609370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.507 [2024-11-02 11:47:16.609395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.507 qpair failed and we were unable to recover it. 00:35:16.507 [2024-11-02 11:47:16.609563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.507 [2024-11-02 11:47:16.609589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.507 qpair failed and we were unable to recover it. 00:35:16.507 [2024-11-02 11:47:16.609764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.507 [2024-11-02 11:47:16.609789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.507 qpair failed and we were unable to recover it. 00:35:16.507 [2024-11-02 11:47:16.609939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.507 [2024-11-02 11:47:16.609964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.507 qpair failed and we were unable to recover it. 00:35:16.507 [2024-11-02 11:47:16.610086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.507 [2024-11-02 11:47:16.610111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.507 qpair failed and we were unable to recover it. 
00:35:16.507 [2024-11-02 11:47:16.610244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.507 [2024-11-02 11:47:16.610289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.507 qpair failed and we were unable to recover it. 00:35:16.507 [2024-11-02 11:47:16.610404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.507 [2024-11-02 11:47:16.610430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.507 qpair failed and we were unable to recover it. 00:35:16.507 [2024-11-02 11:47:16.610543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.507 [2024-11-02 11:47:16.610571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.507 qpair failed and we were unable to recover it. 00:35:16.507 [2024-11-02 11:47:16.610714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.507 [2024-11-02 11:47:16.610739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.507 qpair failed and we were unable to recover it. 00:35:16.507 [2024-11-02 11:47:16.610896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.507 [2024-11-02 11:47:16.610921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.507 qpair failed and we were unable to recover it. 00:35:16.507 [2024-11-02 11:47:16.611046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.507 [2024-11-02 11:47:16.611070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.507 qpair failed and we were unable to recover it. 00:35:16.507 [2024-11-02 11:47:16.611247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.507 [2024-11-02 11:47:16.611283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.507 qpair failed and we were unable to recover it. 00:35:16.507 [2024-11-02 11:47:16.611396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.507 [2024-11-02 11:47:16.611421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.507 qpair failed and we were unable to recover it. 00:35:16.507 [2024-11-02 11:47:16.611541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.507 [2024-11-02 11:47:16.611577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.507 qpair failed and we were unable to recover it. 00:35:16.507 [2024-11-02 11:47:16.611721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.508 [2024-11-02 11:47:16.611746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.508 qpair failed and we were unable to recover it. 
00:35:16.508 [2024-11-02 11:47:16.611862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.508 [2024-11-02 11:47:16.611888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.508 qpair failed and we were unable to recover it. 00:35:16.508 [2024-11-02 11:47:16.612058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.508 [2024-11-02 11:47:16.612100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.508 qpair failed and we were unable to recover it. 00:35:16.508 [2024-11-02 11:47:16.612231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.508 [2024-11-02 11:47:16.612275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.508 qpair failed and we were unable to recover it. 00:35:16.508 [2024-11-02 11:47:16.612426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.508 [2024-11-02 11:47:16.612453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.508 qpair failed and we were unable to recover it. 00:35:16.508 [2024-11-02 11:47:16.612573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.508 [2024-11-02 11:47:16.612612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.508 qpair failed and we were unable to recover it. 00:35:16.508 [2024-11-02 11:47:16.612728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.508 [2024-11-02 11:47:16.612763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.508 qpair failed and we were unable to recover it. 00:35:16.508 [2024-11-02 11:47:16.612942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.508 [2024-11-02 11:47:16.612968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.508 qpair failed and we were unable to recover it. 00:35:16.508 [2024-11-02 11:47:16.613112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.508 [2024-11-02 11:47:16.613139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.508 qpair failed and we were unable to recover it. 00:35:16.508 [2024-11-02 11:47:16.613305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.508 [2024-11-02 11:47:16.613332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.508 qpair failed and we were unable to recover it. 00:35:16.508 [2024-11-02 11:47:16.613462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.508 [2024-11-02 11:47:16.613488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.508 qpair failed and we were unable to recover it. 
00:35:16.508 [2024-11-02 11:47:16.613609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.508 [2024-11-02 11:47:16.613636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.508 qpair failed and we were unable to recover it. 00:35:16.508 [2024-11-02 11:47:16.613770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.508 [2024-11-02 11:47:16.613797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.508 qpair failed and we were unable to recover it. 00:35:16.508 [2024-11-02 11:47:16.613951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.508 [2024-11-02 11:47:16.613978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.508 qpair failed and we were unable to recover it. 00:35:16.508 [2024-11-02 11:47:16.614092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.508 [2024-11-02 11:47:16.614119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.508 qpair failed and we were unable to recover it. 00:35:16.508 [2024-11-02 11:47:16.614294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.508 [2024-11-02 11:47:16.614329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.508 qpair failed and we were unable to recover it. 00:35:16.508 [2024-11-02 11:47:16.614475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.508 [2024-11-02 11:47:16.614515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:16.508 qpair failed and we were unable to recover it. 00:35:16.508 [2024-11-02 11:47:16.614656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.508 [2024-11-02 11:47:16.614684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:16.508 qpair failed and we were unable to recover it. 00:35:16.508 [2024-11-02 11:47:16.614804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.508 [2024-11-02 11:47:16.614833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.508 qpair failed and we were unable to recover it. 00:35:16.508 [2024-11-02 11:47:16.614952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.508 [2024-11-02 11:47:16.614978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.508 qpair failed and we were unable to recover it. 00:35:16.508 [2024-11-02 11:47:16.615096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.508 [2024-11-02 11:47:16.615120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.508 qpair failed and we were unable to recover it. 
00:35:16.508 [2024-11-02 11:47:16.615268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.508 [2024-11-02 11:47:16.615295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.508 qpair failed and we were unable to recover it. 00:35:16.508 [2024-11-02 11:47:16.615415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.508 [2024-11-02 11:47:16.615440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.508 qpair failed and we were unable to recover it. 00:35:16.508 [2024-11-02 11:47:16.615557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.508 [2024-11-02 11:47:16.615583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.508 qpair failed and we were unable to recover it. 00:35:16.508 [2024-11-02 11:47:16.615698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.508 [2024-11-02 11:47:16.615724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.508 qpair failed and we were unable to recover it. 00:35:16.508 [2024-11-02 11:47:16.615872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.508 [2024-11-02 11:47:16.615898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.508 qpair failed and we were unable to recover it. 00:35:16.508 [2024-11-02 11:47:16.616031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.508 [2024-11-02 11:47:16.616058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.508 qpair failed and we were unable to recover it. 00:35:16.508 [2024-11-02 11:47:16.616181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.508 [2024-11-02 11:47:16.616205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.508 qpair failed and we were unable to recover it. 00:35:16.508 [2024-11-02 11:47:16.616346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.508 [2024-11-02 11:47:16.616372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.508 qpair failed and we were unable to recover it. 00:35:16.508 [2024-11-02 11:47:16.616489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.508 [2024-11-02 11:47:16.616514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.508 qpair failed and we were unable to recover it. 00:35:16.508 [2024-11-02 11:47:16.616637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.508 [2024-11-02 11:47:16.616663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.508 qpair failed and we were unable to recover it. 
00:35:16.508 [2024-11-02 11:47:16.616774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.508 [2024-11-02 11:47:16.616799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.508 qpair failed and we were unable to recover it. 00:35:16.508 [2024-11-02 11:47:16.616926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.508 [2024-11-02 11:47:16.616970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.508 qpair failed and we were unable to recover it. 00:35:16.508 [2024-11-02 11:47:16.617122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.508 [2024-11-02 11:47:16.617148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.508 qpair failed and we were unable to recover it. 00:35:16.508 [2024-11-02 11:47:16.617297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.508 [2024-11-02 11:47:16.617323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.508 qpair failed and we were unable to recover it. 00:35:16.508 [2024-11-02 11:47:16.617449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.508 [2024-11-02 11:47:16.617473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.508 qpair failed and we were unable to recover it. 00:35:16.508 [2024-11-02 11:47:16.617667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.508 [2024-11-02 11:47:16.617693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.508 qpair failed and we were unable to recover it. 00:35:16.508 [2024-11-02 11:47:16.617804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.508 [2024-11-02 11:47:16.617829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.508 qpair failed and we were unable to recover it. 00:35:16.508 [2024-11-02 11:47:16.617945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.508 [2024-11-02 11:47:16.617970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.508 qpair failed and we were unable to recover it. 00:35:16.509 [2024-11-02 11:47:16.618076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.509 [2024-11-02 11:47:16.618101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.509 qpair failed and we were unable to recover it. 00:35:16.509 [2024-11-02 11:47:16.618227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.509 [2024-11-02 11:47:16.618269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.509 qpair failed and we were unable to recover it. 
00:35:16.509 [2024-11-02 11:47:16.618386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.509 [2024-11-02 11:47:16.618412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.509 qpair failed and we were unable to recover it. 00:35:16.509 [2024-11-02 11:47:16.618536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.509 [2024-11-02 11:47:16.618573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.509 qpair failed and we were unable to recover it. 00:35:16.509 [2024-11-02 11:47:16.618696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.509 [2024-11-02 11:47:16.618722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.509 qpair failed and we were unable to recover it. 00:35:16.509 [2024-11-02 11:47:16.618847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.509 [2024-11-02 11:47:16.618873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.509 qpair failed and we were unable to recover it. 00:35:16.509 [2024-11-02 11:47:16.619029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.509 [2024-11-02 11:47:16.619055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.509 qpair failed and we were unable to recover it. 00:35:16.509 [2024-11-02 11:47:16.619171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.509 [2024-11-02 11:47:16.619197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.509 qpair failed and we were unable to recover it. 00:35:16.509 [2024-11-02 11:47:16.619329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.509 [2024-11-02 11:47:16.619357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.509 qpair failed and we were unable to recover it. 00:35:16.509 [2024-11-02 11:47:16.619482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.509 [2024-11-02 11:47:16.619508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.509 qpair failed and we were unable to recover it. 00:35:16.509 [2024-11-02 11:47:16.619656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.509 [2024-11-02 11:47:16.619681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.509 qpair failed and we were unable to recover it. 00:35:16.509 [2024-11-02 11:47:16.619799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.509 [2024-11-02 11:47:16.619842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.509 qpair failed and we were unable to recover it. 
00:35:16.509 [2024-11-02 11:47:16.620011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.509 [2024-11-02 11:47:16.620039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.509 qpair failed and we were unable to recover it. 00:35:16.509 [2024-11-02 11:47:16.620163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.509 [2024-11-02 11:47:16.620188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.509 qpair failed and we were unable to recover it. 00:35:16.509 [2024-11-02 11:47:16.620312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.509 [2024-11-02 11:47:16.620338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.509 qpair failed and we were unable to recover it. 00:35:16.509 [2024-11-02 11:47:16.620462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.509 [2024-11-02 11:47:16.620487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.509 qpair failed and we were unable to recover it. 00:35:16.509 [2024-11-02 11:47:16.620622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.509 [2024-11-02 11:47:16.620648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.509 qpair failed and we were unable to recover it. 00:35:16.509 [2024-11-02 11:47:16.620800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.509 [2024-11-02 11:47:16.620827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.509 qpair failed and we were unable to recover it. 00:35:16.509 [2024-11-02 11:47:16.620983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.509 [2024-11-02 11:47:16.621007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.509 qpair failed and we were unable to recover it. 00:35:16.509 [2024-11-02 11:47:16.621129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.509 [2024-11-02 11:47:16.621154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.509 qpair failed and we were unable to recover it. 00:35:16.509 [2024-11-02 11:47:16.621274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.509 [2024-11-02 11:47:16.621300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.509 qpair failed and we were unable to recover it. 00:35:16.509 [2024-11-02 11:47:16.621417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.509 [2024-11-02 11:47:16.621442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.509 qpair failed and we were unable to recover it. 
00:35:16.509 [2024-11-02 11:47:16.621584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.509 [2024-11-02 11:47:16.621620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.509 qpair failed and we were unable to recover it. 00:35:16.509 [2024-11-02 11:47:16.621766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.509 [2024-11-02 11:47:16.621791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.509 qpair failed and we were unable to recover it. 00:35:16.509 [2024-11-02 11:47:16.621954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.509 [2024-11-02 11:47:16.621980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.509 qpair failed and we were unable to recover it. 00:35:16.509 [2024-11-02 11:47:16.622099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.509 [2024-11-02 11:47:16.622126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.509 qpair failed and we were unable to recover it. 00:35:16.509 [2024-11-02 11:47:16.622250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.509 [2024-11-02 11:47:16.622288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.509 qpair failed and we were unable to recover it. 00:35:16.509 [2024-11-02 11:47:16.622406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.509 [2024-11-02 11:47:16.622432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.509 qpair failed and we were unable to recover it. 00:35:16.509 [2024-11-02 11:47:16.622558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.509 [2024-11-02 11:47:16.622595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.509 qpair failed and we were unable to recover it. 00:35:16.509 [2024-11-02 11:47:16.622715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.509 [2024-11-02 11:47:16.622740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.509 qpair failed and we were unable to recover it. 00:35:16.509 [2024-11-02 11:47:16.622896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.509 [2024-11-02 11:47:16.622920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.509 qpair failed and we were unable to recover it. 00:35:16.509 [2024-11-02 11:47:16.623036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.509 [2024-11-02 11:47:16.623061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.509 qpair failed and we were unable to recover it. 
00:35:16.509 [2024-11-02 11:47:16.623180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.509 [2024-11-02 11:47:16.623205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.509 qpair failed and we were unable to recover it. 00:35:16.509 [2024-11-02 11:47:16.623373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.509 [2024-11-02 11:47:16.623415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.509 qpair failed and we were unable to recover it. 00:35:16.510 [2024-11-02 11:47:16.623551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.510 [2024-11-02 11:47:16.623589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.510 qpair failed and we were unable to recover it. 00:35:16.510 [2024-11-02 11:47:16.623711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.510 [2024-11-02 11:47:16.623738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.510 qpair failed and we were unable to recover it. 00:35:16.510 [2024-11-02 11:47:16.623865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.510 [2024-11-02 11:47:16.623891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.510 qpair failed and we were unable to recover it. 00:35:16.510 [2024-11-02 11:47:16.624082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.510 [2024-11-02 11:47:16.624109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.510 qpair failed and we were unable to recover it. 00:35:16.510 [2024-11-02 11:47:16.624232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.510 [2024-11-02 11:47:16.624268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.510 qpair failed and we were unable to recover it. 00:35:16.510 [2024-11-02 11:47:16.624396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.510 [2024-11-02 11:47:16.624427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.510 qpair failed and we were unable to recover it. 00:35:16.510 [2024-11-02 11:47:16.624591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.510 [2024-11-02 11:47:16.624618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.510 qpair failed and we were unable to recover it. 00:35:16.510 [2024-11-02 11:47:16.624745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.510 [2024-11-02 11:47:16.624772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.510 qpair failed and we were unable to recover it. 
00:35:16.510 [2024-11-02 11:47:16.624927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.510 [2024-11-02 11:47:16.624953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.510 qpair failed and we were unable to recover it. 00:35:16.510 [2024-11-02 11:47:16.625139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.510 [2024-11-02 11:47:16.625170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.510 qpair failed and we were unable to recover it. 00:35:16.510 [2024-11-02 11:47:16.625325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.510 [2024-11-02 11:47:16.625367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.510 qpair failed and we were unable to recover it. 00:35:16.510 [2024-11-02 11:47:16.625500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.510 [2024-11-02 11:47:16.625539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.510 qpair failed and we were unable to recover it. 00:35:16.510 [2024-11-02 11:47:16.625674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.510 [2024-11-02 11:47:16.625702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.510 qpair failed and we were unable to recover it. 00:35:16.510 [2024-11-02 11:47:16.625833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.510 [2024-11-02 11:47:16.625859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.510 qpair failed and we were unable to recover it. 00:35:16.510 [2024-11-02 11:47:16.625977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.510 [2024-11-02 11:47:16.626003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.510 qpair failed and we were unable to recover it. 00:35:16.510 [2024-11-02 11:47:16.626121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.510 [2024-11-02 11:47:16.626147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.510 qpair failed and we were unable to recover it. 00:35:16.510 [2024-11-02 11:47:16.626270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.510 [2024-11-02 11:47:16.626297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.510 qpair failed and we were unable to recover it. 00:35:16.510 [2024-11-02 11:47:16.626419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.510 [2024-11-02 11:47:16.626444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.510 qpair failed and we were unable to recover it. 
00:35:16.510 [2024-11-02 11:47:16.626575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.510 [2024-11-02 11:47:16.626600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.510 qpair failed and we were unable to recover it. 00:35:16.510 [2024-11-02 11:47:16.626748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.510 [2024-11-02 11:47:16.626774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.510 qpair failed and we were unable to recover it. 00:35:16.510 [2024-11-02 11:47:16.626900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.510 [2024-11-02 11:47:16.626925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.510 qpair failed and we were unable to recover it. 00:35:16.510 [2024-11-02 11:47:16.627108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.510 [2024-11-02 11:47:16.627134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.510 qpair failed and we were unable to recover it. 00:35:16.510 [2024-11-02 11:47:16.627252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.510 [2024-11-02 11:47:16.627285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.510 qpair failed and we were unable to recover it. 00:35:16.510 [2024-11-02 11:47:16.627430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.510 [2024-11-02 11:47:16.627455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.510 qpair failed and we were unable to recover it. 00:35:16.510 [2024-11-02 11:47:16.627589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.510 [2024-11-02 11:47:16.627614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.510 qpair failed and we were unable to recover it. 00:35:16.510 [2024-11-02 11:47:16.627742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.510 [2024-11-02 11:47:16.627768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.510 qpair failed and we were unable to recover it. 00:35:16.510 [2024-11-02 11:47:16.627902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.510 [2024-11-02 11:47:16.627927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.510 qpair failed and we were unable to recover it. 00:35:16.510 [2024-11-02 11:47:16.628075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.510 [2024-11-02 11:47:16.628100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.510 qpair failed and we were unable to recover it. 
00:35:16.510 [2024-11-02 11:47:16.628212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.510 [2024-11-02 11:47:16.628237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.510 qpair failed and we were unable to recover it. 00:35:16.510 [2024-11-02 11:47:16.628374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.510 [2024-11-02 11:47:16.628401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.510 qpair failed and we were unable to recover it. 00:35:16.510 [2024-11-02 11:47:16.628518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.510 [2024-11-02 11:47:16.628554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.510 qpair failed and we were unable to recover it. 00:35:16.510 [2024-11-02 11:47:16.628717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.510 [2024-11-02 11:47:16.628745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.510 qpair failed and we were unable to recover it. 00:35:16.510 [2024-11-02 11:47:16.628884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.510 [2024-11-02 11:47:16.628911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.510 qpair failed and we were unable to recover it. 00:35:16.510 [2024-11-02 11:47:16.629056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.510 [2024-11-02 11:47:16.629082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.510 qpair failed and we were unable to recover it. 00:35:16.510 [2024-11-02 11:47:16.629229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.510 [2024-11-02 11:47:16.629266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.510 qpair failed and we were unable to recover it. 00:35:16.510 [2024-11-02 11:47:16.629390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.510 [2024-11-02 11:47:16.629417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.510 qpair failed and we were unable to recover it. 00:35:16.510 [2024-11-02 11:47:16.629543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.510 [2024-11-02 11:47:16.629580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.510 qpair failed and we were unable to recover it. 00:35:16.510 [2024-11-02 11:47:16.629727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.510 [2024-11-02 11:47:16.629761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.510 qpair failed and we were unable to recover it. 
00:35:16.510 [2024-11-02 11:47:16.629934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.510 [2024-11-02 11:47:16.629961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.511 qpair failed and we were unable to recover it. 00:35:16.511 [2024-11-02 11:47:16.630101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.511 [2024-11-02 11:47:16.630127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.511 qpair failed and we were unable to recover it. 00:35:16.511 [2024-11-02 11:47:16.630319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.511 [2024-11-02 11:47:16.630345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.511 qpair failed and we were unable to recover it. 00:35:16.511 [2024-11-02 11:47:16.630494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.511 [2024-11-02 11:47:16.630520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.511 qpair failed and we were unable to recover it. 00:35:16.511 [2024-11-02 11:47:16.630655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.511 [2024-11-02 11:47:16.630680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.511 qpair failed and we were unable to recover it. 00:35:16.511 [2024-11-02 11:47:16.630814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.511 [2024-11-02 11:47:16.630839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.511 qpair failed and we were unable to recover it. 00:35:16.511 [2024-11-02 11:47:16.630967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.511 [2024-11-02 11:47:16.630993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.511 qpair failed and we were unable to recover it. 00:35:16.511 [2024-11-02 11:47:16.631111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.511 [2024-11-02 11:47:16.631145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.511 qpair failed and we were unable to recover it. 00:35:16.511 [2024-11-02 11:47:16.631285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.511 [2024-11-02 11:47:16.631313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.511 qpair failed and we were unable to recover it. 00:35:16.511 [2024-11-02 11:47:16.631459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.511 [2024-11-02 11:47:16.631485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.511 qpair failed and we were unable to recover it. 
00:35:16.511 [2024-11-02 11:47:16.631610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.511 [2024-11-02 11:47:16.631635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.511 qpair failed and we were unable to recover it. 00:35:16.511 [2024-11-02 11:47:16.631763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.511 [2024-11-02 11:47:16.631788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.511 qpair failed and we were unable to recover it. 00:35:16.511 [2024-11-02 11:47:16.631947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.511 [2024-11-02 11:47:16.631972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.511 qpair failed and we were unable to recover it. 00:35:16.511 [2024-11-02 11:47:16.632095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.511 [2024-11-02 11:47:16.632121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.511 qpair failed and we were unable to recover it. 00:35:16.511 [2024-11-02 11:47:16.632227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.511 [2024-11-02 11:47:16.632263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.511 qpair failed and we were unable to recover it. 00:35:16.511 [2024-11-02 11:47:16.632389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.511 [2024-11-02 11:47:16.632414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.511 qpair failed and we were unable to recover it. 00:35:16.511 [2024-11-02 11:47:16.632572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.511 [2024-11-02 11:47:16.632598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.511 qpair failed and we were unable to recover it. 00:35:16.511 [2024-11-02 11:47:16.632717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.511 [2024-11-02 11:47:16.632742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.511 qpair failed and we were unable to recover it. 00:35:16.511 [2024-11-02 11:47:16.632866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.511 [2024-11-02 11:47:16.632892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.511 qpair failed and we were unable to recover it. 00:35:16.511 [2024-11-02 11:47:16.633017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.511 [2024-11-02 11:47:16.633043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.511 qpair failed and we were unable to recover it. 
00:35:16.511 [2024-11-02 11:47:16.633164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.511 [2024-11-02 11:47:16.633189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.511 qpair failed and we were unable to recover it. 00:35:16.511 [2024-11-02 11:47:16.633313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.511 [2024-11-02 11:47:16.633340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.511 qpair failed and we were unable to recover it. 00:35:16.511 [2024-11-02 11:47:16.633485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.511 [2024-11-02 11:47:16.633511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.511 qpair failed and we were unable to recover it. 00:35:16.511 [2024-11-02 11:47:16.633696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.511 [2024-11-02 11:47:16.633722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.511 qpair failed and we were unable to recover it. 00:35:16.511 [2024-11-02 11:47:16.633843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.511 [2024-11-02 11:47:16.633871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.511 qpair failed and we were unable to recover it. 00:35:16.511 [2024-11-02 11:47:16.634012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.511 [2024-11-02 11:47:16.634036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.511 qpair failed and we were unable to recover it. 00:35:16.511 [2024-11-02 11:47:16.634156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.511 [2024-11-02 11:47:16.634181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.511 qpair failed and we were unable to recover it. 00:35:16.511 [2024-11-02 11:47:16.634309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.511 [2024-11-02 11:47:16.634336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.511 qpair failed and we were unable to recover it. 00:35:16.511 [2024-11-02 11:47:16.634449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.511 [2024-11-02 11:47:16.634474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.511 qpair failed and we were unable to recover it. 00:35:16.511 [2024-11-02 11:47:16.634602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.511 [2024-11-02 11:47:16.634629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.511 qpair failed and we were unable to recover it. 
00:35:16.511 [2024-11-02 11:47:16.634800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.511 [2024-11-02 11:47:16.634825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.511 qpair failed and we were unable to recover it. 00:35:16.511 [2024-11-02 11:47:16.634951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.511 [2024-11-02 11:47:16.634975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.511 qpair failed and we were unable to recover it. 00:35:16.511 [2024-11-02 11:47:16.635150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.511 [2024-11-02 11:47:16.635176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.511 qpair failed and we were unable to recover it. 00:35:16.511 [2024-11-02 11:47:16.635312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.511 [2024-11-02 11:47:16.635338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.511 qpair failed and we were unable to recover it. 00:35:16.511 [2024-11-02 11:47:16.635461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.511 [2024-11-02 11:47:16.635485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.511 qpair failed and we were unable to recover it. 00:35:16.511 [2024-11-02 11:47:16.635594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.511 [2024-11-02 11:47:16.635620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.511 qpair failed and we were unable to recover it. 00:35:16.511 [2024-11-02 11:47:16.635733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.511 [2024-11-02 11:47:16.635758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.511 qpair failed and we were unable to recover it. 00:35:16.511 [2024-11-02 11:47:16.635897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.511 [2024-11-02 11:47:16.635923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.511 qpair failed and we were unable to recover it. 00:35:16.511 [2024-11-02 11:47:16.636041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.511 [2024-11-02 11:47:16.636066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.511 qpair failed and we were unable to recover it. 00:35:16.511 [2024-11-02 11:47:16.636196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.512 [2024-11-02 11:47:16.636222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.512 qpair failed and we were unable to recover it. 
00:35:16.512 [2024-11-02 11:47:16.636376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.512 [2024-11-02 11:47:16.636417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.512 qpair failed and we were unable to recover it. 00:35:16.512 [2024-11-02 11:47:16.636537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.512 [2024-11-02 11:47:16.636572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.512 qpair failed and we were unable to recover it. 00:35:16.512 [2024-11-02 11:47:16.636713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.512 [2024-11-02 11:47:16.636740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.512 qpair failed and we were unable to recover it. 00:35:16.512 [2024-11-02 11:47:16.636857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.512 [2024-11-02 11:47:16.636884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.512 qpair failed and we were unable to recover it. 00:35:16.512 [2024-11-02 11:47:16.637060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.512 [2024-11-02 11:47:16.637088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.512 qpair failed and we were unable to recover it. 00:35:16.512 [2024-11-02 11:47:16.637216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.512 [2024-11-02 11:47:16.637244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.512 qpair failed and we were unable to recover it. 00:35:16.512 [2024-11-02 11:47:16.637401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.512 [2024-11-02 11:47:16.637428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.512 qpair failed and we were unable to recover it. 00:35:16.512 [2024-11-02 11:47:16.637551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.512 [2024-11-02 11:47:16.637577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.512 qpair failed and we were unable to recover it. 00:35:16.512 [2024-11-02 11:47:16.637697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.512 [2024-11-02 11:47:16.637724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.512 qpair failed and we were unable to recover it. 00:35:16.512 [2024-11-02 11:47:16.637838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.512 [2024-11-02 11:47:16.637865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.512 qpair failed and we were unable to recover it. 
00:35:16.512 [2024-11-02 11:47:16.638042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.512 [2024-11-02 11:47:16.638067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.512 qpair failed and we were unable to recover it. 00:35:16.512 [2024-11-02 11:47:16.638192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.512 [2024-11-02 11:47:16.638219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.512 qpair failed and we were unable to recover it. 00:35:16.512 [2024-11-02 11:47:16.638378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.512 [2024-11-02 11:47:16.638407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.512 qpair failed and we were unable to recover it. 00:35:16.512 [2024-11-02 11:47:16.638561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.512 [2024-11-02 11:47:16.638602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.512 qpair failed and we were unable to recover it. 00:35:16.512 [2024-11-02 11:47:16.638744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.512 [2024-11-02 11:47:16.638772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.512 qpair failed and we were unable to recover it. 00:35:16.512 [2024-11-02 11:47:16.638901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.512 [2024-11-02 11:47:16.638933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.512 qpair failed and we were unable to recover it. 00:35:16.512 [2024-11-02 11:47:16.639065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.512 [2024-11-02 11:47:16.639092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.512 qpair failed and we were unable to recover it. 00:35:16.512 [2024-11-02 11:47:16.639220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.512 [2024-11-02 11:47:16.639248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.512 qpair failed and we were unable to recover it. 00:35:16.512 [2024-11-02 11:47:16.639379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.512 [2024-11-02 11:47:16.639406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.512 qpair failed and we were unable to recover it. 00:35:16.512 [2024-11-02 11:47:16.639526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.512 [2024-11-02 11:47:16.639554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.512 qpair failed and we were unable to recover it. 
00:35:16.512 [2024-11-02 11:47:16.639709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.512 [2024-11-02 11:47:16.639736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.512 qpair failed and we were unable to recover it. 00:35:16.512 [2024-11-02 11:47:16.639868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.512 [2024-11-02 11:47:16.639894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.512 qpair failed and we were unable to recover it. 00:35:16.512 [2024-11-02 11:47:16.640010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.512 [2024-11-02 11:47:16.640037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.512 qpair failed and we were unable to recover it. 00:35:16.512 [2024-11-02 11:47:16.640161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.512 [2024-11-02 11:47:16.640188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.512 qpair failed and we were unable to recover it. 00:35:16.512 [2024-11-02 11:47:16.640326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.512 [2024-11-02 11:47:16.640355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.512 qpair failed and we were unable to recover it. 00:35:16.512 [2024-11-02 11:47:16.640503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.512 [2024-11-02 11:47:16.640531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.512 qpair failed and we were unable to recover it. 00:35:16.512 [2024-11-02 11:47:16.640665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.512 [2024-11-02 11:47:16.640698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.512 qpair failed and we were unable to recover it. 00:35:16.512 [2024-11-02 11:47:16.640865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.512 [2024-11-02 11:47:16.640892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.512 qpair failed and we were unable to recover it. 00:35:16.512 [2024-11-02 11:47:16.641008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.512 [2024-11-02 11:47:16.641045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.512 qpair failed and we were unable to recover it. 00:35:16.512 [2024-11-02 11:47:16.641169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.512 [2024-11-02 11:47:16.641195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.512 qpair failed and we were unable to recover it. 
00:35:16.512 [2024-11-02 11:47:16.641323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.512 [2024-11-02 11:47:16.641350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.512 qpair failed and we were unable to recover it. 00:35:16.512 [2024-11-02 11:47:16.641466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.512 [2024-11-02 11:47:16.641493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.512 qpair failed and we were unable to recover it. 00:35:16.512 [2024-11-02 11:47:16.641626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.512 [2024-11-02 11:47:16.641660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.512 qpair failed and we were unable to recover it. 00:35:16.512 [2024-11-02 11:47:16.641786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.512 [2024-11-02 11:47:16.641813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.512 qpair failed and we were unable to recover it. 00:35:16.512 [2024-11-02 11:47:16.641968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.512 [2024-11-02 11:47:16.641995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.512 qpair failed and we were unable to recover it. 00:35:16.512 [2024-11-02 11:47:16.642114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.512 [2024-11-02 11:47:16.642139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.512 qpair failed and we were unable to recover it. 00:35:16.512 [2024-11-02 11:47:16.642279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.512 [2024-11-02 11:47:16.642306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.512 qpair failed and we were unable to recover it. 00:35:16.512 [2024-11-02 11:47:16.642422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.512 [2024-11-02 11:47:16.642449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.512 qpair failed and we were unable to recover it. 00:35:16.513 [2024-11-02 11:47:16.642569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.513 [2024-11-02 11:47:16.642601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.513 qpair failed and we were unable to recover it. 00:35:16.513 [2024-11-02 11:47:16.642745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.513 [2024-11-02 11:47:16.642772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.513 qpair failed and we were unable to recover it. 
00:35:16.513 [2024-11-02 11:47:16.642893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.513 [2024-11-02 11:47:16.642919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.513 qpair failed and we were unable to recover it. 00:35:16.513 [2024-11-02 11:47:16.643068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.513 [2024-11-02 11:47:16.643095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.513 qpair failed and we were unable to recover it. 00:35:16.513 [2024-11-02 11:47:16.643216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.513 [2024-11-02 11:47:16.643243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.513 qpair failed and we were unable to recover it. 00:35:16.513 [2024-11-02 11:47:16.643377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.513 [2024-11-02 11:47:16.643404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.513 qpair failed and we were unable to recover it. 00:35:16.513 [2024-11-02 11:47:16.643519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.513 [2024-11-02 11:47:16.643554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.513 qpair failed and we were unable to recover it. 00:35:16.513 [2024-11-02 11:47:16.643690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.513 [2024-11-02 11:47:16.643717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.513 qpair failed and we were unable to recover it. 00:35:16.513 [2024-11-02 11:47:16.643847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.513 [2024-11-02 11:47:16.643874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.513 qpair failed and we were unable to recover it. 00:35:16.513 [2024-11-02 11:47:16.644022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.513 [2024-11-02 11:47:16.644048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.513 qpair failed and we were unable to recover it. 00:35:16.513 [2024-11-02 11:47:16.644195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.513 [2024-11-02 11:47:16.644222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.513 qpair failed and we were unable to recover it. 00:35:16.513 [2024-11-02 11:47:16.644349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.513 [2024-11-02 11:47:16.644376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.513 qpair failed and we were unable to recover it. 
00:35:16.513 [2024-11-02 11:47:16.644500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.513 [2024-11-02 11:47:16.644527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.513 qpair failed and we were unable to recover it. 00:35:16.513 [2024-11-02 11:47:16.644702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.513 [2024-11-02 11:47:16.644729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.513 qpair failed and we were unable to recover it. 00:35:16.513 [2024-11-02 11:47:16.644850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.513 [2024-11-02 11:47:16.644876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.513 qpair failed and we were unable to recover it. 00:35:16.513 [2024-11-02 11:47:16.645060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.513 [2024-11-02 11:47:16.645091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.513 qpair failed and we were unable to recover it. 00:35:16.513 [2024-11-02 11:47:16.645227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.513 [2024-11-02 11:47:16.645270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.513 qpair failed and we were unable to recover it. 00:35:16.513 [2024-11-02 11:47:16.645388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.513 [2024-11-02 11:47:16.645414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.513 qpair failed and we were unable to recover it. 00:35:16.513 [2024-11-02 11:47:16.645539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.513 [2024-11-02 11:47:16.645577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.513 qpair failed and we were unable to recover it. 00:35:16.513 [2024-11-02 11:47:16.645723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.513 [2024-11-02 11:47:16.645749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.513 qpair failed and we were unable to recover it. 00:35:16.513 [2024-11-02 11:47:16.645875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.513 [2024-11-02 11:47:16.645902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.513 qpair failed and we were unable to recover it. 00:35:16.513 [2024-11-02 11:47:16.646023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.513 [2024-11-02 11:47:16.646049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.513 qpair failed and we were unable to recover it. 
00:35:16.513 [2024-11-02 11:47:16.646167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.513 [2024-11-02 11:47:16.646204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.513 qpair failed and we were unable to recover it. 00:35:16.513 [2024-11-02 11:47:16.646364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.513 [2024-11-02 11:47:16.646391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.513 qpair failed and we were unable to recover it. 00:35:16.513 [2024-11-02 11:47:16.646534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.513 [2024-11-02 11:47:16.646561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.513 qpair failed and we were unable to recover it. 00:35:16.513 [2024-11-02 11:47:16.646687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.513 [2024-11-02 11:47:16.646713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.513 qpair failed and we were unable to recover it. 00:35:16.513 [2024-11-02 11:47:16.646829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.513 [2024-11-02 11:47:16.646857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.513 qpair failed and we were unable to recover it. 00:35:16.513 [2024-11-02 11:47:16.646974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.513 [2024-11-02 11:47:16.647001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.513 qpair failed and we were unable to recover it. 00:35:16.513 [2024-11-02 11:47:16.647168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.513 [2024-11-02 11:47:16.647200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.513 qpair failed and we were unable to recover it. 00:35:16.513 [2024-11-02 11:47:16.647347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.513 [2024-11-02 11:47:16.647375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.513 qpair failed and we were unable to recover it. 00:35:16.513 [2024-11-02 11:47:16.647500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.513 [2024-11-02 11:47:16.647526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.513 qpair failed and we were unable to recover it. 00:35:16.513 [2024-11-02 11:47:16.647681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.513 [2024-11-02 11:47:16.647707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.513 qpair failed and we were unable to recover it. 
00:35:16.513 [2024-11-02 11:47:16.647845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.513 [2024-11-02 11:47:16.647873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.513 qpair failed and we were unable to recover it. 00:35:16.513 [2024-11-02 11:47:16.647997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.513 [2024-11-02 11:47:16.648024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.513 qpair failed and we were unable to recover it. 00:35:16.513 [2024-11-02 11:47:16.648172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.513 [2024-11-02 11:47:16.648198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.513 qpair failed and we were unable to recover it. 00:35:16.513 [2024-11-02 11:47:16.648337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.513 [2024-11-02 11:47:16.648364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.513 qpair failed and we were unable to recover it. 00:35:16.513 [2024-11-02 11:47:16.648476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.513 [2024-11-02 11:47:16.648502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.513 qpair failed and we were unable to recover it. 00:35:16.513 [2024-11-02 11:47:16.648636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.513 [2024-11-02 11:47:16.648663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.513 qpair failed and we were unable to recover it. 00:35:16.513 [2024-11-02 11:47:16.648845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.514 [2024-11-02 11:47:16.648879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.514 qpair failed and we were unable to recover it. 00:35:16.514 [2024-11-02 11:47:16.649003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.514 [2024-11-02 11:47:16.649029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.514 qpair failed and we were unable to recover it. 00:35:16.514 [2024-11-02 11:47:16.649155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.514 [2024-11-02 11:47:16.649182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.514 qpair failed and we were unable to recover it. 00:35:16.514 [2024-11-02 11:47:16.649327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.514 [2024-11-02 11:47:16.649355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.514 qpair failed and we were unable to recover it. 
00:35:16.514 [2024-11-02 11:47:16.649507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.514 [2024-11-02 11:47:16.649533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.514 qpair failed and we were unable to recover it. 00:35:16.514 [2024-11-02 11:47:16.649721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.514 [2024-11-02 11:47:16.649748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.514 qpair failed and we were unable to recover it. 00:35:16.514 [2024-11-02 11:47:16.649856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.514 [2024-11-02 11:47:16.649882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.514 qpair failed and we were unable to recover it. 00:35:16.514 [2024-11-02 11:47:16.650007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.514 [2024-11-02 11:47:16.650033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.514 qpair failed and we were unable to recover it. 00:35:16.514 [2024-11-02 11:47:16.650145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.514 [2024-11-02 11:47:16.650172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.514 qpair failed and we were unable to recover it. 00:35:16.514 [2024-11-02 11:47:16.650349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.514 [2024-11-02 11:47:16.650390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.514 qpair failed and we were unable to recover it. 00:35:16.514 [2024-11-02 11:47:16.650570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.514 [2024-11-02 11:47:16.650602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.514 qpair failed and we were unable to recover it. 00:35:16.514 [2024-11-02 11:47:16.650730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.514 [2024-11-02 11:47:16.650756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.514 qpair failed and we were unable to recover it. 00:35:16.514 [2024-11-02 11:47:16.650882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.514 [2024-11-02 11:47:16.650914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.514 qpair failed and we were unable to recover it. 00:35:16.514 [2024-11-02 11:47:16.651056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.514 [2024-11-02 11:47:16.651082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.514 qpair failed and we were unable to recover it. 
00:35:16.514 [2024-11-02 11:47:16.651195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.514 [2024-11-02 11:47:16.651222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.514 qpair failed and we were unable to recover it. 00:35:16.514 [2024-11-02 11:47:16.651366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.514 [2024-11-02 11:47:16.651394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.514 qpair failed and we were unable to recover it. 00:35:16.514 [2024-11-02 11:47:16.651508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.514 [2024-11-02 11:47:16.651536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.514 qpair failed and we were unable to recover it. 00:35:16.514 [2024-11-02 11:47:16.651661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.514 [2024-11-02 11:47:16.651689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.514 qpair failed and we were unable to recover it. 00:35:16.514 [2024-11-02 11:47:16.651806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.514 [2024-11-02 11:47:16.651832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.514 qpair failed and we were unable to recover it. 00:35:16.514 [2024-11-02 11:47:16.651948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.514 [2024-11-02 11:47:16.651975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.514 qpair failed and we were unable to recover it. 00:35:16.514 [2024-11-02 11:47:16.652102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.514 [2024-11-02 11:47:16.652130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.514 qpair failed and we were unable to recover it. 00:35:16.514 [2024-11-02 11:47:16.652247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.514 [2024-11-02 11:47:16.652280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.514 qpair failed and we were unable to recover it. 00:35:16.514 [2024-11-02 11:47:16.652426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.514 [2024-11-02 11:47:16.652452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.514 qpair failed and we were unable to recover it. 00:35:16.514 [2024-11-02 11:47:16.652576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.514 [2024-11-02 11:47:16.652604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.514 qpair failed and we were unable to recover it. 
00:35:16.514 [2024-11-02 11:47:16.652723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.514 [2024-11-02 11:47:16.652749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.514 qpair failed and we were unable to recover it. 00:35:16.514 [2024-11-02 11:47:16.652906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.514 [2024-11-02 11:47:16.652932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.514 qpair failed and we were unable to recover it. 00:35:16.514 [2024-11-02 11:47:16.653085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.514 [2024-11-02 11:47:16.653112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.514 qpair failed and we were unable to recover it. 00:35:16.514 [2024-11-02 11:47:16.653270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.514 [2024-11-02 11:47:16.653297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.514 qpair failed and we were unable to recover it. 00:35:16.514 [2024-11-02 11:47:16.653443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.514 [2024-11-02 11:47:16.653472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.514 qpair failed and we were unable to recover it. 00:35:16.514 [2024-11-02 11:47:16.653655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.514 [2024-11-02 11:47:16.653682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.514 qpair failed and we were unable to recover it. 00:35:16.514 [2024-11-02 11:47:16.653803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.514 [2024-11-02 11:47:16.653835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.514 qpair failed and we were unable to recover it. 00:35:16.514 [2024-11-02 11:47:16.653964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.514 [2024-11-02 11:47:16.653992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.514 qpair failed and we were unable to recover it. 00:35:16.514 [2024-11-02 11:47:16.654115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.514 [2024-11-02 11:47:16.654143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.514 qpair failed and we were unable to recover it. 00:35:16.514 [2024-11-02 11:47:16.654271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.514 [2024-11-02 11:47:16.654298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.514 qpair failed and we were unable to recover it. 
00:35:16.514 [2024-11-02 11:47:16.654423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.515 [2024-11-02 11:47:16.654450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.515 qpair failed and we were unable to recover it. 00:35:16.515 [2024-11-02 11:47:16.654574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.515 [2024-11-02 11:47:16.654601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.515 qpair failed and we were unable to recover it. 00:35:16.515 [2024-11-02 11:47:16.654731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.515 [2024-11-02 11:47:16.654758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.515 qpair failed and we were unable to recover it. 00:35:16.515 [2024-11-02 11:47:16.654880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.515 [2024-11-02 11:47:16.654907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.515 qpair failed and we were unable to recover it. 00:35:16.515 [2024-11-02 11:47:16.655019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.515 [2024-11-02 11:47:16.655046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.515 qpair failed and we were unable to recover it. 00:35:16.515 [2024-11-02 11:47:16.655168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.515 [2024-11-02 11:47:16.655194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.515 qpair failed and we were unable to recover it. 00:35:16.515 [2024-11-02 11:47:16.655382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.515 [2024-11-02 11:47:16.655425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.515 qpair failed and we were unable to recover it. 00:35:16.515 [2024-11-02 11:47:16.655581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.515 [2024-11-02 11:47:16.655609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.515 qpair failed and we were unable to recover it. 00:35:16.515 [2024-11-02 11:47:16.655749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.515 [2024-11-02 11:47:16.655775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.515 qpair failed and we were unable to recover it. 00:35:16.515 [2024-11-02 11:47:16.655915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.515 [2024-11-02 11:47:16.655940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.515 qpair failed and we were unable to recover it. 
00:35:16.515 [2024-11-02 11:47:16.656077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.515 [2024-11-02 11:47:16.656103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.515 qpair failed and we were unable to recover it. 00:35:16.515 [2024-11-02 11:47:16.656227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.515 [2024-11-02 11:47:16.656270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.515 qpair failed and we were unable to recover it. 00:35:16.515 [2024-11-02 11:47:16.656387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.515 [2024-11-02 11:47:16.656412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.515 qpair failed and we were unable to recover it. 00:35:16.515 [2024-11-02 11:47:16.656525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.515 [2024-11-02 11:47:16.656550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.515 qpair failed and we were unable to recover it. 00:35:16.515 [2024-11-02 11:47:16.656662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.515 [2024-11-02 11:47:16.656688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.515 qpair failed and we were unable to recover it. 00:35:16.515 [2024-11-02 11:47:16.656805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.515 [2024-11-02 11:47:16.656830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.515 qpair failed and we were unable to recover it. 00:35:16.515 [2024-11-02 11:47:16.656962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.515 [2024-11-02 11:47:16.656989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.515 qpair failed and we were unable to recover it. 00:35:16.515 [2024-11-02 11:47:16.657111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.515 [2024-11-02 11:47:16.657138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.515 qpair failed and we were unable to recover it. 00:35:16.515 [2024-11-02 11:47:16.657273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.515 [2024-11-02 11:47:16.657299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.515 qpair failed and we were unable to recover it. 00:35:16.515 [2024-11-02 11:47:16.657480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.515 [2024-11-02 11:47:16.657505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.515 qpair failed and we were unable to recover it. 
00:35:16.515 [2024-11-02 11:47:16.657643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.515 [2024-11-02 11:47:16.657680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.515 qpair failed and we were unable to recover it. 00:35:16.515 [2024-11-02 11:47:16.657796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.515 [2024-11-02 11:47:16.657822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.515 qpair failed and we were unable to recover it. 00:35:16.515 [2024-11-02 11:47:16.657951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.515 [2024-11-02 11:47:16.657976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.515 qpair failed and we were unable to recover it. 00:35:16.515 [2024-11-02 11:47:16.658124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.515 [2024-11-02 11:47:16.658159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.515 qpair failed and we were unable to recover it. 00:35:16.515 [2024-11-02 11:47:16.658294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.515 [2024-11-02 11:47:16.658320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.515 qpair failed and we were unable to recover it. 00:35:16.515 [2024-11-02 11:47:16.658447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.515 [2024-11-02 11:47:16.658473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.515 qpair failed and we were unable to recover it. 00:35:16.515 [2024-11-02 11:47:16.658597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.515 [2024-11-02 11:47:16.658623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.515 qpair failed and we were unable to recover it. 00:35:16.515 [2024-11-02 11:47:16.658748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.515 [2024-11-02 11:47:16.658774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.515 qpair failed and we were unable to recover it. 00:35:16.515 [2024-11-02 11:47:16.658889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.515 [2024-11-02 11:47:16.658915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.515 qpair failed and we were unable to recover it. 00:35:16.515 [2024-11-02 11:47:16.659061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.515 [2024-11-02 11:47:16.659087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.515 qpair failed and we were unable to recover it. 
00:35:16.515 [2024-11-02 11:47:16.659203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.515 [2024-11-02 11:47:16.659228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.515 qpair failed and we were unable to recover it. 00:35:16.515 [2024-11-02 11:47:16.659371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.515 [2024-11-02 11:47:16.659398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.515 qpair failed and we were unable to recover it. 00:35:16.515 [2024-11-02 11:47:16.659517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.515 [2024-11-02 11:47:16.659543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.515 qpair failed and we were unable to recover it. 00:35:16.515 [2024-11-02 11:47:16.659676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.515 [2024-11-02 11:47:16.659701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.515 qpair failed and we were unable to recover it. 00:35:16.515 [2024-11-02 11:47:16.659818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.515 [2024-11-02 11:47:16.659844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.515 qpair failed and we were unable to recover it. 00:35:16.515 [2024-11-02 11:47:16.659968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.515 [2024-11-02 11:47:16.659994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.515 qpair failed and we were unable to recover it. 00:35:16.515 [2024-11-02 11:47:16.660175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.515 [2024-11-02 11:47:16.660199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.515 qpair failed and we were unable to recover it. 00:35:16.515 [2024-11-02 11:47:16.660349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.515 [2024-11-02 11:47:16.660374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.515 qpair failed and we were unable to recover it. 00:35:16.515 [2024-11-02 11:47:16.660488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.515 [2024-11-02 11:47:16.660514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.515 qpair failed and we were unable to recover it. 00:35:16.516 [2024-11-02 11:47:16.660647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.516 [2024-11-02 11:47:16.660672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.516 qpair failed and we were unable to recover it. 
00:35:16.516 [2024-11-02 11:47:16.660801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.516 [2024-11-02 11:47:16.660827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.516 qpair failed and we were unable to recover it. 00:35:16.516 [2024-11-02 11:47:16.661001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.516 [2024-11-02 11:47:16.661028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.516 qpair failed and we were unable to recover it. 00:35:16.516 [2024-11-02 11:47:16.661157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.516 [2024-11-02 11:47:16.661181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.516 qpair failed and we were unable to recover it. 00:35:16.516 [2024-11-02 11:47:16.661328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.516 [2024-11-02 11:47:16.661354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.516 qpair failed and we were unable to recover it. 00:35:16.516 [2024-11-02 11:47:16.661475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.516 [2024-11-02 11:47:16.661501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.516 qpair failed and we were unable to recover it. 00:35:16.516 [2024-11-02 11:47:16.661628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.516 [2024-11-02 11:47:16.661654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.516 qpair failed and we were unable to recover it. 00:35:16.516 [2024-11-02 11:47:16.661814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.516 [2024-11-02 11:47:16.661840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.516 qpair failed and we were unable to recover it. 00:35:16.516 [2024-11-02 11:47:16.661980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.516 [2024-11-02 11:47:16.662006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.516 qpair failed and we were unable to recover it. 00:35:16.516 [2024-11-02 11:47:16.662138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.516 [2024-11-02 11:47:16.662163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.516 qpair failed and we were unable to recover it. 00:35:16.516 [2024-11-02 11:47:16.662313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.516 [2024-11-02 11:47:16.662340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.516 qpair failed and we were unable to recover it. 
00:35:16.516 [2024-11-02 11:47:16.662480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.516 [2024-11-02 11:47:16.662505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.516 qpair failed and we were unable to recover it. 00:35:16.516 [2024-11-02 11:47:16.662690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.516 [2024-11-02 11:47:16.662715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.516 qpair failed and we were unable to recover it. 00:35:16.516 [2024-11-02 11:47:16.662854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.516 [2024-11-02 11:47:16.662880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.516 qpair failed and we were unable to recover it. 00:35:16.516 [2024-11-02 11:47:16.663012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.516 [2024-11-02 11:47:16.663037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.516 qpair failed and we were unable to recover it. 00:35:16.516 [2024-11-02 11:47:16.663173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.516 [2024-11-02 11:47:16.663198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.516 qpair failed and we were unable to recover it. 00:35:16.516 [2024-11-02 11:47:16.663331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.516 [2024-11-02 11:47:16.663357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.516 qpair failed and we were unable to recover it. 00:35:16.516 [2024-11-02 11:47:16.663487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.516 [2024-11-02 11:47:16.663512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.516 qpair failed and we were unable to recover it. 00:35:16.516 [2024-11-02 11:47:16.663633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.516 [2024-11-02 11:47:16.663658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.516 qpair failed and we were unable to recover it. 00:35:16.516 [2024-11-02 11:47:16.663769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.516 [2024-11-02 11:47:16.663793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.516 qpair failed and we were unable to recover it. 00:35:16.516 [2024-11-02 11:47:16.663959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.516 [2024-11-02 11:47:16.663985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.516 qpair failed and we were unable to recover it. 
00:35:16.516 [2024-11-02 11:47:16.664138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.516 [2024-11-02 11:47:16.664165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.516 qpair failed and we were unable to recover it. 00:35:16.516 [2024-11-02 11:47:16.664307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.516 [2024-11-02 11:47:16.664333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.516 qpair failed and we were unable to recover it. 00:35:16.516 [2024-11-02 11:47:16.664454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.516 [2024-11-02 11:47:16.664480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.516 qpair failed and we were unable to recover it. 00:35:16.516 [2024-11-02 11:47:16.664603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.516 [2024-11-02 11:47:16.664641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.516 qpair failed and we were unable to recover it. 00:35:16.516 [2024-11-02 11:47:16.664792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.516 [2024-11-02 11:47:16.664817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.516 qpair failed and we were unable to recover it. 00:35:16.516 [2024-11-02 11:47:16.664931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.516 [2024-11-02 11:47:16.664957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.516 qpair failed and we were unable to recover it. 00:35:16.516 [2024-11-02 11:47:16.665106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.516 [2024-11-02 11:47:16.665132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.516 qpair failed and we were unable to recover it. 00:35:16.516 [2024-11-02 11:47:16.665264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.516 [2024-11-02 11:47:16.665290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.516 qpair failed and we were unable to recover it. 00:35:16.516 [2024-11-02 11:47:16.665414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.516 [2024-11-02 11:47:16.665440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.516 qpair failed and we were unable to recover it. 00:35:16.516 [2024-11-02 11:47:16.665563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.516 [2024-11-02 11:47:16.665589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.516 qpair failed and we were unable to recover it. 
00:35:16.516 [2024-11-02 11:47:16.665714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.516 [2024-11-02 11:47:16.665739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.516 qpair failed and we were unable to recover it. 00:35:16.516 [2024-11-02 11:47:16.665861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.516 [2024-11-02 11:47:16.665886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.516 qpair failed and we were unable to recover it. 00:35:16.516 [2024-11-02 11:47:16.665997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.516 [2024-11-02 11:47:16.666036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.516 qpair failed and we were unable to recover it. 00:35:16.516 [2024-11-02 11:47:16.666182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.516 [2024-11-02 11:47:16.666207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.516 qpair failed and we were unable to recover it. 00:35:16.516 [2024-11-02 11:47:16.666347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.516 [2024-11-02 11:47:16.666374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.516 qpair failed and we were unable to recover it. 00:35:16.516 [2024-11-02 11:47:16.666552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.516 [2024-11-02 11:47:16.666582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.516 qpair failed and we were unable to recover it. 00:35:16.516 [2024-11-02 11:47:16.666699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.516 [2024-11-02 11:47:16.666724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.516 qpair failed and we were unable to recover it. 00:35:16.516 [2024-11-02 11:47:16.666858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.516 [2024-11-02 11:47:16.666883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.516 qpair failed and we were unable to recover it. 00:35:16.517 [2024-11-02 11:47:16.667017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.517 [2024-11-02 11:47:16.667042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.517 qpair failed and we were unable to recover it. 00:35:16.517 [2024-11-02 11:47:16.667167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.517 [2024-11-02 11:47:16.667192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.517 qpair failed and we were unable to recover it. 
00:35:16.517 [2024-11-02 11:47:16.667357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.517 [2024-11-02 11:47:16.667384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.517 qpair failed and we were unable to recover it. 00:35:16.517 [2024-11-02 11:47:16.667499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.517 [2024-11-02 11:47:16.667525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.517 qpair failed and we were unable to recover it. 00:35:16.517 [2024-11-02 11:47:16.667649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.517 [2024-11-02 11:47:16.667674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.517 qpair failed and we were unable to recover it. 00:35:16.517 [2024-11-02 11:47:16.667799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.517 [2024-11-02 11:47:16.667826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.517 qpair failed and we were unable to recover it. 00:35:16.517 [2024-11-02 11:47:16.667955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.517 [2024-11-02 11:47:16.667980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.517 qpair failed and we were unable to recover it. 00:35:16.517 [2024-11-02 11:47:16.668107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.517 [2024-11-02 11:47:16.668132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.517 qpair failed and we were unable to recover it. 00:35:16.517 [2024-11-02 11:47:16.668252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.517 [2024-11-02 11:47:16.668285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.517 qpair failed and we were unable to recover it. 00:35:16.517 [2024-11-02 11:47:16.668456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.517 [2024-11-02 11:47:16.668481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.517 qpair failed and we were unable to recover it. 00:35:16.517 [2024-11-02 11:47:16.668604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.517 [2024-11-02 11:47:16.668630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.517 qpair failed and we were unable to recover it. 00:35:16.517 [2024-11-02 11:47:16.668800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.517 [2024-11-02 11:47:16.668826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.517 qpair failed and we were unable to recover it. 
00:35:16.517 [2024-11-02 11:47:16.668961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.517 [2024-11-02 11:47:16.668985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.517 qpair failed and we were unable to recover it. 00:35:16.517 [2024-11-02 11:47:16.669097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.517 [2024-11-02 11:47:16.669127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.517 qpair failed and we were unable to recover it. 00:35:16.517 [2024-11-02 11:47:16.669247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.517 [2024-11-02 11:47:16.669280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.517 qpair failed and we were unable to recover it. 00:35:16.517 [2024-11-02 11:47:16.669400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.517 [2024-11-02 11:47:16.669426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.517 qpair failed and we were unable to recover it. 00:35:16.517 [2024-11-02 11:47:16.669558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.517 [2024-11-02 11:47:16.669593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.517 qpair failed and we were unable to recover it. 00:35:16.517 [2024-11-02 11:47:16.669714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.517 [2024-11-02 11:47:16.669750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.517 qpair failed and we were unable to recover it. 00:35:16.517 [2024-11-02 11:47:16.669902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.517 [2024-11-02 11:47:16.669928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.517 qpair failed and we were unable to recover it. 00:35:16.517 [2024-11-02 11:47:16.670051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.517 [2024-11-02 11:47:16.670082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.517 qpair failed and we were unable to recover it. 00:35:16.517 [2024-11-02 11:47:16.670234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.517 [2024-11-02 11:47:16.670265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.517 qpair failed and we were unable to recover it. 00:35:16.517 [2024-11-02 11:47:16.670384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.517 [2024-11-02 11:47:16.670410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.517 qpair failed and we were unable to recover it. 
00:35:16.517 [2024-11-02 11:47:16.670521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.517 [2024-11-02 11:47:16.670546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.517 qpair failed and we were unable to recover it. 00:35:16.517 [2024-11-02 11:47:16.670662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.517 [2024-11-02 11:47:16.670687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.517 qpair failed and we were unable to recover it. 00:35:16.517 [2024-11-02 11:47:16.670809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.517 [2024-11-02 11:47:16.670835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.517 qpair failed and we were unable to recover it. 00:35:16.517 [2024-11-02 11:47:16.670945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.517 [2024-11-02 11:47:16.670970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.517 qpair failed and we were unable to recover it. 00:35:16.517 [2024-11-02 11:47:16.671111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.517 [2024-11-02 11:47:16.671136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.517 qpair failed and we were unable to recover it. 00:35:16.517 [2024-11-02 11:47:16.671269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.517 [2024-11-02 11:47:16.671296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.517 qpair failed and we were unable to recover it. 00:35:16.517 [2024-11-02 11:47:16.671443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.517 [2024-11-02 11:47:16.671469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.517 qpair failed and we were unable to recover it. 00:35:16.517 [2024-11-02 11:47:16.671658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.517 [2024-11-02 11:47:16.671684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.517 qpair failed and we were unable to recover it. 00:35:16.517 [2024-11-02 11:47:16.671816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.517 [2024-11-02 11:47:16.671840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.517 qpair failed and we were unable to recover it. 00:35:16.517 [2024-11-02 11:47:16.671948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.517 [2024-11-02 11:47:16.671974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.517 qpair failed and we were unable to recover it. 
00:35:16.517 [2024-11-02 11:47:16.672088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.517 [2024-11-02 11:47:16.672112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.517 qpair failed and we were unable to recover it. 00:35:16.517 [2024-11-02 11:47:16.672244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.517 [2024-11-02 11:47:16.672278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.517 qpair failed and we were unable to recover it. 00:35:16.517 [2024-11-02 11:47:16.672400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.517 [2024-11-02 11:47:16.672425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.517 qpair failed and we were unable to recover it. 00:35:16.517 [2024-11-02 11:47:16.672573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.517 [2024-11-02 11:47:16.672598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.517 qpair failed and we were unable to recover it. 00:35:16.517 [2024-11-02 11:47:16.672715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.517 [2024-11-02 11:47:16.672740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.517 qpair failed and we were unable to recover it. 00:35:16.517 [2024-11-02 11:47:16.672857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.517 [2024-11-02 11:47:16.672883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.517 qpair failed and we were unable to recover it. 00:35:16.517 [2024-11-02 11:47:16.673007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.517 [2024-11-02 11:47:16.673034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.517 qpair failed and we were unable to recover it. 00:35:16.518 [2024-11-02 11:47:16.673183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.518 [2024-11-02 11:47:16.673209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.518 qpair failed and we were unable to recover it. 00:35:16.518 [2024-11-02 11:47:16.673337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.518 [2024-11-02 11:47:16.673366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.518 qpair failed and we were unable to recover it. 00:35:16.518 [2024-11-02 11:47:16.673497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.518 [2024-11-02 11:47:16.673522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.518 qpair failed and we were unable to recover it. 
00:35:16.518 [2024-11-02 11:47:16.673638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.518 [2024-11-02 11:47:16.673663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.518 qpair failed and we were unable to recover it. 00:35:16.518 [2024-11-02 11:47:16.673811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.518 [2024-11-02 11:47:16.673837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.518 qpair failed and we were unable to recover it. 00:35:16.518 [2024-11-02 11:47:16.673964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.518 [2024-11-02 11:47:16.673990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.518 qpair failed and we were unable to recover it. 00:35:16.518 [2024-11-02 11:47:16.674143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.518 [2024-11-02 11:47:16.674168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.518 qpair failed and we were unable to recover it. 00:35:16.518 [2024-11-02 11:47:16.674301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.518 [2024-11-02 11:47:16.674328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.518 qpair failed and we were unable to recover it. 00:35:16.518 [2024-11-02 11:47:16.674456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.518 [2024-11-02 11:47:16.674481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.518 qpair failed and we were unable to recover it. 00:35:16.518 [2024-11-02 11:47:16.674616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.518 [2024-11-02 11:47:16.674645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.518 qpair failed and we were unable to recover it. 00:35:16.518 [2024-11-02 11:47:16.674765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.518 [2024-11-02 11:47:16.674791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.518 qpair failed and we were unable to recover it. 00:35:16.518 [2024-11-02 11:47:16.674916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.518 [2024-11-02 11:47:16.674942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.518 qpair failed and we were unable to recover it. 00:35:16.518 [2024-11-02 11:47:16.675070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.518 [2024-11-02 11:47:16.675095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.518 qpair failed and we were unable to recover it. 
00:35:16.518 [2024-11-02 11:47:16.675210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.518 [2024-11-02 11:47:16.675237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.518 qpair failed and we were unable to recover it. 00:35:16.518 [2024-11-02 11:47:16.675362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.518 [2024-11-02 11:47:16.675387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.518 qpair failed and we were unable to recover it. 00:35:16.518 [2024-11-02 11:47:16.675537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.518 [2024-11-02 11:47:16.675563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.518 qpair failed and we were unable to recover it. 00:35:16.518 [2024-11-02 11:47:16.675718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.518 [2024-11-02 11:47:16.675743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.518 qpair failed and we were unable to recover it. 00:35:16.518 [2024-11-02 11:47:16.675916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.518 [2024-11-02 11:47:16.675943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.518 qpair failed and we were unable to recover it. 00:35:16.518 [2024-11-02 11:47:16.676059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.518 [2024-11-02 11:47:16.676084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.518 qpair failed and we were unable to recover it. 00:35:16.518 [2024-11-02 11:47:16.676200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.518 [2024-11-02 11:47:16.676226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.518 qpair failed and we were unable to recover it. 00:35:16.518 [2024-11-02 11:47:16.676353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.518 [2024-11-02 11:47:16.676380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.518 qpair failed and we were unable to recover it. 00:35:16.518 [2024-11-02 11:47:16.676555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.518 [2024-11-02 11:47:16.676589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.518 qpair failed and we were unable to recover it. 00:35:16.518 [2024-11-02 11:47:16.676739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.518 [2024-11-02 11:47:16.676764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.518 qpair failed and we were unable to recover it. 
00:35:16.518 [2024-11-02 11:47:16.676881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.518 [2024-11-02 11:47:16.676908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.518 qpair failed and we were unable to recover it. 00:35:16.518 [2024-11-02 11:47:16.677029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.518 [2024-11-02 11:47:16.677053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.518 qpair failed and we were unable to recover it. 00:35:16.518 [2024-11-02 11:47:16.677182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.518 [2024-11-02 11:47:16.677206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.518 qpair failed and we were unable to recover it. 00:35:16.518 [2024-11-02 11:47:16.677347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.518 [2024-11-02 11:47:16.677373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.518 qpair failed and we were unable to recover it. 00:35:16.518 [2024-11-02 11:47:16.677484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.518 [2024-11-02 11:47:16.677509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.518 qpair failed and we were unable to recover it. 00:35:16.518 [2024-11-02 11:47:16.677632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.518 [2024-11-02 11:47:16.677662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.518 qpair failed and we were unable to recover it. 00:35:16.518 [2024-11-02 11:47:16.677783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.518 [2024-11-02 11:47:16.677807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.518 qpair failed and we were unable to recover it. 00:35:16.518 [2024-11-02 11:47:16.677933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.518 [2024-11-02 11:47:16.677957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.518 qpair failed and we were unable to recover it. 00:35:16.518 [2024-11-02 11:47:16.678081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.518 [2024-11-02 11:47:16.678106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.518 qpair failed and we were unable to recover it. 00:35:16.518 [2024-11-02 11:47:16.678230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.518 [2024-11-02 11:47:16.678254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.518 qpair failed and we were unable to recover it. 
00:35:16.518 [2024-11-02 11:47:16.678389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.518 [2024-11-02 11:47:16.678414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.519 qpair failed and we were unable to recover it. 00:35:16.519 [2024-11-02 11:47:16.678591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.519 [2024-11-02 11:47:16.678624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.519 qpair failed and we were unable to recover it. 00:35:16.519 [2024-11-02 11:47:16.678772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.519 [2024-11-02 11:47:16.678797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.519 qpair failed and we were unable to recover it. 00:35:16.519 [2024-11-02 11:47:16.678923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.519 [2024-11-02 11:47:16.678949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.519 qpair failed and we were unable to recover it. 00:35:16.519 [2024-11-02 11:47:16.679073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.519 [2024-11-02 11:47:16.679098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.519 qpair failed and we were unable to recover it. 00:35:16.519 [2024-11-02 11:47:16.679241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.519 [2024-11-02 11:47:16.679299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.519 qpair failed and we were unable to recover it. 00:35:16.519 [2024-11-02 11:47:16.679413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.519 [2024-11-02 11:47:16.679439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.519 qpair failed and we were unable to recover it. 00:35:16.519 [2024-11-02 11:47:16.679556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.519 [2024-11-02 11:47:16.679585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.519 qpair failed and we were unable to recover it. 00:35:16.519 [2024-11-02 11:47:16.679732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.519 [2024-11-02 11:47:16.679757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.519 qpair failed and we were unable to recover it. 00:35:16.519 [2024-11-02 11:47:16.679874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.519 [2024-11-02 11:47:16.679898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.519 qpair failed and we were unable to recover it. 
00:35:16.519 [2024-11-02 11:47:16.680017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.519 [2024-11-02 11:47:16.680041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.519 qpair failed and we were unable to recover it. 00:35:16.519 [2024-11-02 11:47:16.680183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.519 [2024-11-02 11:47:16.680209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.519 qpair failed and we were unable to recover it. 00:35:16.519 [2024-11-02 11:47:16.680348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.519 [2024-11-02 11:47:16.680375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.519 qpair failed and we were unable to recover it. 00:35:16.519 [2024-11-02 11:47:16.680499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.519 [2024-11-02 11:47:16.680525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.519 qpair failed and we were unable to recover it. 00:35:16.519 [2024-11-02 11:47:16.680647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.519 [2024-11-02 11:47:16.680672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.519 qpair failed and we were unable to recover it. 00:35:16.519 [2024-11-02 11:47:16.680784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.519 [2024-11-02 11:47:16.680808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.519 qpair failed and we were unable to recover it. 00:35:16.519 [2024-11-02 11:47:16.680986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.519 [2024-11-02 11:47:16.681012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.519 qpair failed and we were unable to recover it. 00:35:16.519 [2024-11-02 11:47:16.681138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.519 [2024-11-02 11:47:16.681163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.519 qpair failed and we were unable to recover it. 00:35:16.519 [2024-11-02 11:47:16.681321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.519 [2024-11-02 11:47:16.681348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.519 qpair failed and we were unable to recover it. 00:35:16.519 [2024-11-02 11:47:16.681467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.519 [2024-11-02 11:47:16.681493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.519 qpair failed and we were unable to recover it. 
00:35:16.519 [2024-11-02 11:47:16.681629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.519 [2024-11-02 11:47:16.681654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.519 qpair failed and we were unable to recover it. 00:35:16.519 [2024-11-02 11:47:16.681777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.519 [2024-11-02 11:47:16.681803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.519 qpair failed and we were unable to recover it. 00:35:16.519 [2024-11-02 11:47:16.681922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.519 [2024-11-02 11:47:16.681947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.519 qpair failed and we were unable to recover it. 00:35:16.519 [2024-11-02 11:47:16.682084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.519 [2024-11-02 11:47:16.682109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.519 qpair failed and we were unable to recover it. 00:35:16.519 [2024-11-02 11:47:16.682226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.519 [2024-11-02 11:47:16.682252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.519 qpair failed and we were unable to recover it. 00:35:16.519 [2024-11-02 11:47:16.682376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.519 [2024-11-02 11:47:16.682401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.519 qpair failed and we were unable to recover it. 00:35:16.519 [2024-11-02 11:47:16.682518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.519 [2024-11-02 11:47:16.682545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.519 qpair failed and we were unable to recover it. 00:35:16.519 [2024-11-02 11:47:16.682694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.519 [2024-11-02 11:47:16.682719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.519 qpair failed and we were unable to recover it. 00:35:16.519 [2024-11-02 11:47:16.682841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.519 [2024-11-02 11:47:16.682866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.519 qpair failed and we were unable to recover it. 00:35:16.519 [2024-11-02 11:47:16.682990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.519 [2024-11-02 11:47:16.683016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.519 qpair failed and we were unable to recover it. 
00:35:16.525 [2024-11-02 11:47:16.715481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.525 [2024-11-02 11:47:16.715515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.525 qpair failed and we were unable to recover it. 00:35:16.525 [2024-11-02 11:47:16.715691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.525 [2024-11-02 11:47:16.715726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.525 qpair failed and we were unable to recover it. 00:35:16.525 [2024-11-02 11:47:16.715863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.525 [2024-11-02 11:47:16.715898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.525 qpair failed and we were unable to recover it. 00:35:16.525 [2024-11-02 11:47:16.716071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.525 [2024-11-02 11:47:16.716105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.525 qpair failed and we were unable to recover it. 00:35:16.525 [2024-11-02 11:47:16.716242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.525 [2024-11-02 11:47:16.716282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.525 qpair failed and we were unable to recover it. 00:35:16.525 [2024-11-02 11:47:16.716437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.525 [2024-11-02 11:47:16.716472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddc690 with addr=10.0.0.2, port=4420 00:35:16.525 qpair failed and we were unable to recover it. 00:35:16.525 [2024-11-02 11:47:16.716669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.525 [2024-11-02 11:47:16.716710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.525 qpair failed and we were unable to recover it. 00:35:16.525 [2024-11-02 11:47:16.716861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.525 [2024-11-02 11:47:16.716889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.525 qpair failed and we were unable to recover it. 00:35:16.525 [2024-11-02 11:47:16.717011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.525 [2024-11-02 11:47:16.717038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.525 qpair failed and we were unable to recover it. 00:35:16.525 [2024-11-02 11:47:16.717164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.525 [2024-11-02 11:47:16.717190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.525 qpair failed and we were unable to recover it. 
00:35:16.525 [2024-11-02 11:47:16.717336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.525 [2024-11-02 11:47:16.717364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.525 qpair failed and we were unable to recover it. 00:35:16.525 [2024-11-02 11:47:16.717486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.525 [2024-11-02 11:47:16.717513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.525 qpair failed and we were unable to recover it. 00:35:16.525 [2024-11-02 11:47:16.717657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.525 [2024-11-02 11:47:16.717683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.525 qpair failed and we were unable to recover it. 00:35:16.525 [2024-11-02 11:47:16.717800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.525 [2024-11-02 11:47:16.717826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.525 qpair failed and we were unable to recover it. 00:35:16.525 [2024-11-02 11:47:16.717996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.525 [2024-11-02 11:47:16.718022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.525 qpair failed and we were unable to recover it. 00:35:16.525 [2024-11-02 11:47:16.718167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.525 [2024-11-02 11:47:16.718193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.525 qpair failed and we were unable to recover it. 00:35:16.525 [2024-11-02 11:47:16.718352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.525 [2024-11-02 11:47:16.718380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.525 qpair failed and we were unable to recover it. 00:35:16.525 [2024-11-02 11:47:16.718506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.525 [2024-11-02 11:47:16.718533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.525 qpair failed and we were unable to recover it. 00:35:16.525 [2024-11-02 11:47:16.718651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.525 [2024-11-02 11:47:16.718677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.525 qpair failed and we were unable to recover it. 00:35:16.525 [2024-11-02 11:47:16.718822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.525 [2024-11-02 11:47:16.718849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.525 qpair failed and we were unable to recover it. 
00:35:16.525 [2024-11-02 11:47:16.718979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.525 [2024-11-02 11:47:16.719005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.525 qpair failed and we were unable to recover it. 00:35:16.525 [2024-11-02 11:47:16.719158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.525 [2024-11-02 11:47:16.719184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.525 qpair failed and we were unable to recover it. 00:35:16.525 [2024-11-02 11:47:16.719314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.525 [2024-11-02 11:47:16.719341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.525 qpair failed and we were unable to recover it. 00:35:16.525 [2024-11-02 11:47:16.719499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.525 [2024-11-02 11:47:16.719526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.525 qpair failed and we were unable to recover it. 00:35:16.525 [2024-11-02 11:47:16.719652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.525 [2024-11-02 11:47:16.719678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.525 qpair failed and we were unable to recover it. 00:35:16.525 [2024-11-02 11:47:16.719802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.525 [2024-11-02 11:47:16.719830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.525 qpair failed and we were unable to recover it. 00:35:16.525 [2024-11-02 11:47:16.719981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.525 [2024-11-02 11:47:16.720009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.525 qpair failed and we were unable to recover it. 00:35:16.525 [2024-11-02 11:47:16.720157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.525 [2024-11-02 11:47:16.720183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.525 qpair failed and we were unable to recover it. 00:35:16.525 [2024-11-02 11:47:16.720311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.525 [2024-11-02 11:47:16.720338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.525 qpair failed and we were unable to recover it. 00:35:16.525 [2024-11-02 11:47:16.720451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.525 [2024-11-02 11:47:16.720478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.525 qpair failed and we were unable to recover it. 
00:35:16.525 [2024-11-02 11:47:16.720626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.525 [2024-11-02 11:47:16.720653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.525 qpair failed and we were unable to recover it. 00:35:16.525 [2024-11-02 11:47:16.720804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.525 [2024-11-02 11:47:16.720831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.525 qpair failed and we were unable to recover it. 00:35:16.525 [2024-11-02 11:47:16.720954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.525 [2024-11-02 11:47:16.720980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.525 qpair failed and we were unable to recover it. 00:35:16.525 [2024-11-02 11:47:16.721123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.525 [2024-11-02 11:47:16.721150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.525 qpair failed and we were unable to recover it. 00:35:16.525 [2024-11-02 11:47:16.721296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.525 [2024-11-02 11:47:16.721324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.525 qpair failed and we were unable to recover it. 00:35:16.525 [2024-11-02 11:47:16.721448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.525 [2024-11-02 11:47:16.721473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.525 qpair failed and we were unable to recover it. 00:35:16.525 [2024-11-02 11:47:16.721596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.525 [2024-11-02 11:47:16.721628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.525 qpair failed and we were unable to recover it. 00:35:16.525 [2024-11-02 11:47:16.721757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.526 [2024-11-02 11:47:16.721784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.526 qpair failed and we were unable to recover it. 00:35:16.526 [2024-11-02 11:47:16.721930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.526 [2024-11-02 11:47:16.721955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.526 qpair failed and we were unable to recover it. 00:35:16.526 [2024-11-02 11:47:16.722103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.526 [2024-11-02 11:47:16.722130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.526 qpair failed and we were unable to recover it. 
00:35:16.526 [2024-11-02 11:47:16.722248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.526 [2024-11-02 11:47:16.722281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.526 qpair failed and we were unable to recover it. 00:35:16.526 [2024-11-02 11:47:16.722410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.526 [2024-11-02 11:47:16.722436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.526 qpair failed and we were unable to recover it. 00:35:16.526 [2024-11-02 11:47:16.722566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.526 [2024-11-02 11:47:16.722592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.526 qpair failed and we were unable to recover it. 00:35:16.526 [2024-11-02 11:47:16.722735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.526 [2024-11-02 11:47:16.722762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.526 qpair failed and we were unable to recover it. 00:35:16.526 [2024-11-02 11:47:16.722888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.526 [2024-11-02 11:47:16.722914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.526 qpair failed and we were unable to recover it. 00:35:16.526 [2024-11-02 11:47:16.723058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.526 [2024-11-02 11:47:16.723084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.526 qpair failed and we were unable to recover it. 00:35:16.526 [2024-11-02 11:47:16.723203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.526 [2024-11-02 11:47:16.723230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.526 qpair failed and we were unable to recover it. 00:35:16.526 [2024-11-02 11:47:16.723383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.526 [2024-11-02 11:47:16.723411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.526 qpair failed and we were unable to recover it. 00:35:16.526 [2024-11-02 11:47:16.723536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.526 [2024-11-02 11:47:16.723562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.526 qpair failed and we were unable to recover it. 00:35:16.526 [2024-11-02 11:47:16.723690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.526 [2024-11-02 11:47:16.723717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.526 qpair failed and we were unable to recover it. 
00:35:16.526 [2024-11-02 11:47:16.723874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.526 [2024-11-02 11:47:16.723901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.526 qpair failed and we were unable to recover it. 00:35:16.526 [2024-11-02 11:47:16.724011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.526 [2024-11-02 11:47:16.724038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.526 qpair failed and we were unable to recover it. 00:35:16.526 [2024-11-02 11:47:16.724210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.526 [2024-11-02 11:47:16.724236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.526 qpair failed and we were unable to recover it. 00:35:16.526 [2024-11-02 11:47:16.724373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.526 [2024-11-02 11:47:16.724401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.526 qpair failed and we were unable to recover it. 00:35:16.526 [2024-11-02 11:47:16.724520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.526 [2024-11-02 11:47:16.724547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.526 qpair failed and we were unable to recover it. 00:35:16.526 [2024-11-02 11:47:16.724677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.526 [2024-11-02 11:47:16.724703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.526 qpair failed and we were unable to recover it. 00:35:16.526 [2024-11-02 11:47:16.724858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.526 [2024-11-02 11:47:16.724884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.526 qpair failed and we were unable to recover it. 00:35:16.526 [2024-11-02 11:47:16.725007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.526 [2024-11-02 11:47:16.725033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.526 qpair failed and we were unable to recover it. 00:35:16.526 [2024-11-02 11:47:16.725164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.526 [2024-11-02 11:47:16.725190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.526 qpair failed and we were unable to recover it. 00:35:16.526 [2024-11-02 11:47:16.725371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.526 [2024-11-02 11:47:16.725398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.526 qpair failed and we were unable to recover it. 
00:35:16.526 [2024-11-02 11:47:16.725523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.526 [2024-11-02 11:47:16.725549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.526 qpair failed and we were unable to recover it. 00:35:16.526 [2024-11-02 11:47:16.725695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.526 [2024-11-02 11:47:16.725721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.526 qpair failed and we were unable to recover it. 00:35:16.526 [2024-11-02 11:47:16.725876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.526 [2024-11-02 11:47:16.725904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.526 qpair failed and we were unable to recover it. 00:35:16.526 [2024-11-02 11:47:16.726042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.526 [2024-11-02 11:47:16.726069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.526 qpair failed and we were unable to recover it. 00:35:16.526 [2024-11-02 11:47:16.726211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.526 [2024-11-02 11:47:16.726237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.526 qpair failed and we were unable to recover it. 00:35:16.526 [2024-11-02 11:47:16.726356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.526 [2024-11-02 11:47:16.726383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.526 qpair failed and we were unable to recover it. 00:35:16.526 [2024-11-02 11:47:16.726509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.526 [2024-11-02 11:47:16.726535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.526 qpair failed and we were unable to recover it. 00:35:16.526 [2024-11-02 11:47:16.726650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.526 [2024-11-02 11:47:16.726676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.526 qpair failed and we were unable to recover it. 00:35:16.526 [2024-11-02 11:47:16.726788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.526 [2024-11-02 11:47:16.726814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.526 qpair failed and we were unable to recover it. 00:35:16.526 [2024-11-02 11:47:16.726930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.526 [2024-11-02 11:47:16.726956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.526 qpair failed and we were unable to recover it. 
00:35:16.526 [2024-11-02 11:47:16.727127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.526 [2024-11-02 11:47:16.727153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.526 qpair failed and we were unable to recover it. 00:35:16.526 [2024-11-02 11:47:16.727266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.526 [2024-11-02 11:47:16.727293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.526 qpair failed and we were unable to recover it. 00:35:16.526 [2024-11-02 11:47:16.727415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.526 [2024-11-02 11:47:16.727441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.526 qpair failed and we were unable to recover it. 00:35:16.526 [2024-11-02 11:47:16.727582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.526 [2024-11-02 11:47:16.727608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.526 qpair failed and we were unable to recover it. 00:35:16.526 [2024-11-02 11:47:16.727758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.526 [2024-11-02 11:47:16.727785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.526 qpair failed and we were unable to recover it. 00:35:16.526 [2024-11-02 11:47:16.727987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.527 [2024-11-02 11:47:16.728012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.527 qpair failed and we were unable to recover it. 00:35:16.527 [2024-11-02 11:47:16.728159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.527 [2024-11-02 11:47:16.728191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.527 qpair failed and we were unable to recover it. 00:35:16.527 [2024-11-02 11:47:16.728308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.527 [2024-11-02 11:47:16.728336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.527 qpair failed and we were unable to recover it. 00:35:16.527 [2024-11-02 11:47:16.728470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.527 [2024-11-02 11:47:16.728496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.527 qpair failed and we were unable to recover it. 00:35:16.527 [2024-11-02 11:47:16.728647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.527 [2024-11-02 11:47:16.728675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.527 qpair failed and we were unable to recover it. 
00:35:16.527 [2024-11-02 11:47:16.728803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.527 [2024-11-02 11:47:16.728829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.527 qpair failed and we were unable to recover it. 00:35:16.527 [2024-11-02 11:47:16.728972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.527 [2024-11-02 11:47:16.728998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.527 qpair failed and we were unable to recover it. 00:35:16.527 [2024-11-02 11:47:16.729114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.527 [2024-11-02 11:47:16.729141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.527 qpair failed and we were unable to recover it. 00:35:16.527 11:47:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:16.527 [2024-11-02 11:47:16.729298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.527 [2024-11-02 11:47:16.729326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.527 11:47:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@866 -- # return 0 00:35:16.527 qpair failed and we were unable to recover it. 00:35:16.527 [2024-11-02 11:47:16.729439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.527 [2024-11-02 11:47:16.729465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.527 qpair failed and we were unable to recover it. 00:35:16.527 11:47:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:16.527 11:47:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:16.527 [2024-11-02 11:47:16.729622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.527 [2024-11-02 11:47:16.729649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.527 qpair failed and we were unable to recover it. 00:35:16.527 11:47:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:16.527 [2024-11-02 11:47:16.729768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.527 [2024-11-02 11:47:16.729794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.527 qpair failed and we were unable to recover it. 00:35:16.527 [2024-11-02 11:47:16.729905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.527 [2024-11-02 11:47:16.729935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.527 qpair failed and we were unable to recover it. 
00:35:16.527 [2024-11-02 11:47:16.730071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.527 [2024-11-02 11:47:16.730097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.527 qpair failed and we were unable to recover it. 00:35:16.527 [2024-11-02 11:47:16.730219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.527 [2024-11-02 11:47:16.730246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.527 qpair failed and we were unable to recover it. 00:35:16.527 [2024-11-02 11:47:16.730369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.527 [2024-11-02 11:47:16.730395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.527 qpair failed and we were unable to recover it. 00:35:16.527 [2024-11-02 11:47:16.730512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.527 [2024-11-02 11:47:16.730538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.527 qpair failed and we were unable to recover it. 00:35:16.527 [2024-11-02 11:47:16.730657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.527 [2024-11-02 11:47:16.730683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.527 qpair failed and we were unable to recover it. 00:35:16.527 [2024-11-02 11:47:16.730803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.527 [2024-11-02 11:47:16.730829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.527 qpair failed and we were unable to recover it. 00:35:16.527 [2024-11-02 11:47:16.730942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.527 [2024-11-02 11:47:16.730968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.527 qpair failed and we were unable to recover it. 00:35:16.527 [2024-11-02 11:47:16.731115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.527 [2024-11-02 11:47:16.731141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.527 qpair failed and we were unable to recover it. 00:35:16.527 [2024-11-02 11:47:16.731299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.527 [2024-11-02 11:47:16.731326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.527 qpair failed and we were unable to recover it. 00:35:16.527 [2024-11-02 11:47:16.731442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.527 [2024-11-02 11:47:16.731470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.527 qpair failed and we were unable to recover it. 
00:35:16.527 [2024-11-02 11:47:16.731620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.527 [2024-11-02 11:47:16.731646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.527 qpair failed and we were unable to recover it. 00:35:16.527 [2024-11-02 11:47:16.731780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.527 [2024-11-02 11:47:16.731808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.527 qpair failed and we were unable to recover it. 00:35:16.527 [2024-11-02 11:47:16.731943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.527 [2024-11-02 11:47:16.731971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.527 qpair failed and we were unable to recover it. 00:35:16.527 [2024-11-02 11:47:16.732130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.527 [2024-11-02 11:47:16.732167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.527 qpair failed and we were unable to recover it. 00:35:16.527 [2024-11-02 11:47:16.732296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.527 [2024-11-02 11:47:16.732323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.527 qpair failed and we were unable to recover it. 00:35:16.527 [2024-11-02 11:47:16.732445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.527 [2024-11-02 11:47:16.732472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.527 qpair failed and we were unable to recover it. 00:35:16.527 [2024-11-02 11:47:16.732624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.527 [2024-11-02 11:47:16.732650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.527 qpair failed and we were unable to recover it. 00:35:16.527 [2024-11-02 11:47:16.732766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.527 [2024-11-02 11:47:16.732793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.527 qpair failed and we were unable to recover it. 00:35:16.527 [2024-11-02 11:47:16.732916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.527 [2024-11-02 11:47:16.732945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.527 qpair failed and we were unable to recover it. 00:35:16.528 [2024-11-02 11:47:16.733091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.528 [2024-11-02 11:47:16.733118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.528 qpair failed and we were unable to recover it. 
00:35:16.528 [2024-11-02 11:47:16.733272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.528 [2024-11-02 11:47:16.733300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.528 qpair failed and we were unable to recover it. 00:35:16.528 [2024-11-02 11:47:16.733425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.528 [2024-11-02 11:47:16.733451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.528 qpair failed and we were unable to recover it. 00:35:16.528 [2024-11-02 11:47:16.733570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.528 [2024-11-02 11:47:16.733597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.528 qpair failed and we were unable to recover it. 00:35:16.528 [2024-11-02 11:47:16.733735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.528 [2024-11-02 11:47:16.733761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.528 qpair failed and we were unable to recover it. 00:35:16.528 [2024-11-02 11:47:16.733877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.528 [2024-11-02 11:47:16.733913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.528 qpair failed and we were unable to recover it. 00:35:16.528 [2024-11-02 11:47:16.734066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.528 [2024-11-02 11:47:16.734093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.528 qpair failed and we were unable to recover it. 00:35:16.528 [2024-11-02 11:47:16.734232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.528 [2024-11-02 11:47:16.734283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.528 qpair failed and we were unable to recover it. 00:35:16.528 [2024-11-02 11:47:16.734405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.528 [2024-11-02 11:47:16.734432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.528 qpair failed and we were unable to recover it. 00:35:16.528 [2024-11-02 11:47:16.734575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.528 [2024-11-02 11:47:16.734601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.528 qpair failed and we were unable to recover it. 00:35:16.528 [2024-11-02 11:47:16.734729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.528 [2024-11-02 11:47:16.734757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.528 qpair failed and we were unable to recover it. 
00:35:16.528 [2024-11-02 11:47:16.734911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.528 [2024-11-02 11:47:16.734938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.528 qpair failed and we were unable to recover it. 00:35:16.528 [2024-11-02 11:47:16.735072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.528 [2024-11-02 11:47:16.735098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.528 qpair failed and we were unable to recover it. 00:35:16.528 [2024-11-02 11:47:16.735215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.528 [2024-11-02 11:47:16.735246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.528 qpair failed and we were unable to recover it. 00:35:16.528 [2024-11-02 11:47:16.735393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.528 [2024-11-02 11:47:16.735420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.528 qpair failed and we were unable to recover it. 00:35:16.528 [2024-11-02 11:47:16.735566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.528 [2024-11-02 11:47:16.735593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.528 qpair failed and we were unable to recover it. 00:35:16.528 [2024-11-02 11:47:16.735736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.528 [2024-11-02 11:47:16.735762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.528 qpair failed and we were unable to recover it. 00:35:16.528 [2024-11-02 11:47:16.735908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.528 [2024-11-02 11:47:16.735934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.528 qpair failed and we were unable to recover it. 00:35:16.528 [2024-11-02 11:47:16.736079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.528 [2024-11-02 11:47:16.736105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.528 qpair failed and we were unable to recover it. 00:35:16.528 [2024-11-02 11:47:16.736236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.528 [2024-11-02 11:47:16.736271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.528 qpair failed and we were unable to recover it. 00:35:16.528 [2024-11-02 11:47:16.736390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.528 [2024-11-02 11:47:16.736416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.528 qpair failed and we were unable to recover it. 
00:35:16.528 [2024-11-02 11:47:16.736537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.528 [2024-11-02 11:47:16.736563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.528 qpair failed and we were unable to recover it. 00:35:16.528 [2024-11-02 11:47:16.736683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.528 [2024-11-02 11:47:16.736711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.528 qpair failed and we were unable to recover it. 00:35:16.528 [2024-11-02 11:47:16.736837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.528 [2024-11-02 11:47:16.736863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.528 qpair failed and we were unable to recover it. 00:35:16.528 [2024-11-02 11:47:16.737028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.528 [2024-11-02 11:47:16.737054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.528 qpair failed and we were unable to recover it. 00:35:16.528 [2024-11-02 11:47:16.737171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.528 [2024-11-02 11:47:16.737198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.528 qpair failed and we were unable to recover it. 00:35:16.528 [2024-11-02 11:47:16.737318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.528 [2024-11-02 11:47:16.737345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.528 qpair failed and we were unable to recover it. 00:35:16.528 [2024-11-02 11:47:16.737458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.528 [2024-11-02 11:47:16.737485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.528 qpair failed and we were unable to recover it. 00:35:16.528 [2024-11-02 11:47:16.737613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.528 [2024-11-02 11:47:16.737640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.528 qpair failed and we were unable to recover it. 00:35:16.528 [2024-11-02 11:47:16.737754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.528 [2024-11-02 11:47:16.737780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.528 qpair failed and we were unable to recover it. 00:35:16.528 [2024-11-02 11:47:16.737916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.528 [2024-11-02 11:47:16.737942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.528 qpair failed and we were unable to recover it. 
00:35:16.528 [2024-11-02 11:47:16.738055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.528 [2024-11-02 11:47:16.738090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.528 qpair failed and we were unable to recover it. 00:35:16.528 [2024-11-02 11:47:16.738253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.528 [2024-11-02 11:47:16.738285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.528 qpair failed and we were unable to recover it. 00:35:16.528 [2024-11-02 11:47:16.738397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.528 [2024-11-02 11:47:16.738423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.528 qpair failed and we were unable to recover it. 00:35:16.528 [2024-11-02 11:47:16.738549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.528 [2024-11-02 11:47:16.738576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.528 qpair failed and we were unable to recover it. 00:35:16.528 [2024-11-02 11:47:16.738699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.528 [2024-11-02 11:47:16.738725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.528 qpair failed and we were unable to recover it. 00:35:16.528 [2024-11-02 11:47:16.738857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.528 [2024-11-02 11:47:16.738884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.528 qpair failed and we were unable to recover it. 00:35:16.528 [2024-11-02 11:47:16.738995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.528 [2024-11-02 11:47:16.739023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.528 qpair failed and we were unable to recover it. 00:35:16.528 [2024-11-02 11:47:16.739137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.529 [2024-11-02 11:47:16.739164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.529 qpair failed and we were unable to recover it. 00:35:16.529 [2024-11-02 11:47:16.739318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.529 [2024-11-02 11:47:16.739345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.529 qpair failed and we were unable to recover it. 00:35:16.529 [2024-11-02 11:47:16.739496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.529 [2024-11-02 11:47:16.739523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.529 qpair failed and we were unable to recover it. 
00:35:16.529 [2024-11-02 11:47:16.740742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:16.529 [2024-11-02 11:47:16.740795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420
00:35:16.529 qpair failed and we were unable to recover it.
00:35:16.530 11:47:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
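The trap line above registers cleanup for the tc2 test case: on SIGINT, SIGTERM, or normal exit it dumps the app's shared-memory state (process_shm --id $NVMF_APP_SHM_ID, with "|| :" so a failure there does not abort teardown) and then runs nvmftestfini to shut the target down. A minimal sketch of the same cleanup-on-exit pattern follows; cleanup() is a hypothetical stand-in for those two helpers.

#!/usr/bin/env bash
# Sketch of the cleanup-on-exit pattern from the trap line above.
# cleanup() stands in for `process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini`.
set -euo pipefail

scratch="$(mktemp)"

cleanup() {
    # mirror the `|| :` idiom: a failed diagnostic step must not stop teardown
    rm -f "$scratch" || :
    echo "teardown complete"
}
trap 'cleanup' SIGINT SIGTERM EXIT

echo "test body runs here" > "$scratch"
# Ctrl-C, kill, or falling off the end of the script all route through cleanup().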
00:35:16.530 11:47:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:35:16.530 11:47:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:16.530 11:47:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
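host/target_disconnect.sh@19 creates the backing device for the test: rpc_cmd is the autotest wrapper around SPDK's JSON-RPC client, and bdev_malloc_create 64 512 -b Malloc0 asks the running target app for a 64 MiB RAM-backed bdev with 512-byte blocks named Malloc0 (the xtrace_disable / set +x lines simply mute bash command tracing around the helper). The sketch below issues the same call with scripts/rpc.py directly; the socket path and the example subsystem NQN are assumptions, not values taken from this log.

# Same bdev creation issued directly over SPDK's JSON-RPC socket (sketch).
# /var/tmp/spdk.sock is the default app socket and an assumption here.
./scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 64 512 -b Malloc0
# 64 = size in MiB, 512 = block size in bytes, -b = name of the new bdev

# A typical next step is to expose it through an NVMe-oF subsystem; the NQN
# below is only an illustrative placeholder.
./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0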
00:35:16.530 [2024-11-02 11:47:16.748873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:16.530 [2024-11-02 11:47:16.748899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420
00:35:16.530 qpair failed and we were unable to recover it.
00:35:16.534 [2024-11-02 11:47:16.769963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.534 [2024-11-02 11:47:16.769989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.534 qpair failed and we were unable to recover it. 00:35:16.534 [2024-11-02 11:47:16.770109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.534 [2024-11-02 11:47:16.770134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.534 qpair failed and we were unable to recover it. 00:35:16.534 [2024-11-02 11:47:16.770260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.534 [2024-11-02 11:47:16.770288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.534 qpair failed and we were unable to recover it. 00:35:16.534 [2024-11-02 11:47:16.770405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.534 [2024-11-02 11:47:16.770431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.534 qpair failed and we were unable to recover it. 00:35:16.534 [2024-11-02 11:47:16.770568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.534 [2024-11-02 11:47:16.770597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.534 qpair failed and we were unable to recover it. 00:35:16.534 [2024-11-02 11:47:16.770751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.534 [2024-11-02 11:47:16.770777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.534 qpair failed and we were unable to recover it. 00:35:16.534 [2024-11-02 11:47:16.770920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.534 [2024-11-02 11:47:16.770946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.534 qpair failed and we were unable to recover it. 00:35:16.534 [2024-11-02 11:47:16.771092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.534 [2024-11-02 11:47:16.771117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.534 qpair failed and we were unable to recover it. 00:35:16.534 [2024-11-02 11:47:16.771239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.534 [2024-11-02 11:47:16.771270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.534 qpair failed and we were unable to recover it. 00:35:16.534 [2024-11-02 11:47:16.771387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.534 [2024-11-02 11:47:16.771413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.534 qpair failed and we were unable to recover it. 
00:35:16.534 [2024-11-02 11:47:16.771534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.534 [2024-11-02 11:47:16.771560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.534 qpair failed and we were unable to recover it. 00:35:16.534 [2024-11-02 11:47:16.771678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.534 [2024-11-02 11:47:16.771703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.534 qpair failed and we were unable to recover it. 00:35:16.534 [2024-11-02 11:47:16.771876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.534 [2024-11-02 11:47:16.771902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.534 qpair failed and we were unable to recover it. 00:35:16.534 [2024-11-02 11:47:16.772043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.534 [2024-11-02 11:47:16.772068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.534 qpair failed and we were unable to recover it. 00:35:16.534 [2024-11-02 11:47:16.772184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.534 [2024-11-02 11:47:16.772210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.534 qpair failed and we were unable to recover it. 00:35:16.534 [2024-11-02 11:47:16.772343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.534 [2024-11-02 11:47:16.772370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.534 qpair failed and we were unable to recover it. 00:35:16.534 [2024-11-02 11:47:16.772511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.534 [2024-11-02 11:47:16.772536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.534 qpair failed and we were unable to recover it. 00:35:16.534 [2024-11-02 11:47:16.772665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.534 [2024-11-02 11:47:16.772699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.534 qpair failed and we were unable to recover it. 00:35:16.534 [2024-11-02 11:47:16.772844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.534 [2024-11-02 11:47:16.772871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.534 qpair failed and we were unable to recover it. 00:35:16.534 [2024-11-02 11:47:16.772993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.534 [2024-11-02 11:47:16.773019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.534 qpair failed and we were unable to recover it. 
00:35:16.534 [2024-11-02 11:47:16.773139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.534 [2024-11-02 11:47:16.773166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.534 qpair failed and we were unable to recover it. 00:35:16.534 [2024-11-02 11:47:16.773310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.534 [2024-11-02 11:47:16.773338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.534 qpair failed and we were unable to recover it. 00:35:16.534 [2024-11-02 11:47:16.773530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.534 [2024-11-02 11:47:16.773555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.534 qpair failed and we were unable to recover it. 00:35:16.534 [2024-11-02 11:47:16.773702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.534 [2024-11-02 11:47:16.773730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.534 qpair failed and we were unable to recover it. 00:35:16.534 [2024-11-02 11:47:16.773843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.534 [2024-11-02 11:47:16.773869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.534 qpair failed and we were unable to recover it. 00:35:16.534 [2024-11-02 11:47:16.773987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.534 [2024-11-02 11:47:16.774015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.534 qpair failed and we were unable to recover it. 00:35:16.534 [2024-11-02 11:47:16.774134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.534 [2024-11-02 11:47:16.774161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.534 qpair failed and we were unable to recover it. 00:35:16.534 [2024-11-02 11:47:16.774281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.534 [2024-11-02 11:47:16.774309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.534 qpair failed and we were unable to recover it. 00:35:16.534 [2024-11-02 11:47:16.774426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.534 [2024-11-02 11:47:16.774452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.534 qpair failed and we were unable to recover it. 00:35:16.534 [2024-11-02 11:47:16.774569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.534 [2024-11-02 11:47:16.774595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.534 qpair failed and we were unable to recover it. 
00:35:16.534 [2024-11-02 11:47:16.774775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.534 [2024-11-02 11:47:16.774801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.534 qpair failed and we were unable to recover it. 00:35:16.534 [2024-11-02 11:47:16.774935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.534 [2024-11-02 11:47:16.774962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.534 qpair failed and we were unable to recover it. 00:35:16.534 [2024-11-02 11:47:16.775085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.534 [2024-11-02 11:47:16.775111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.534 qpair failed and we were unable to recover it. 00:35:16.534 [2024-11-02 11:47:16.775226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.534 [2024-11-02 11:47:16.775269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.534 qpair failed and we were unable to recover it. 00:35:16.534 [2024-11-02 11:47:16.775389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.535 [2024-11-02 11:47:16.775417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.535 qpair failed and we were unable to recover it. 00:35:16.535 [2024-11-02 11:47:16.775598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.535 [2024-11-02 11:47:16.775623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.535 qpair failed and we were unable to recover it. 00:35:16.535 [2024-11-02 11:47:16.775773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.535 [2024-11-02 11:47:16.775799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.535 qpair failed and we were unable to recover it. 00:35:16.535 [2024-11-02 11:47:16.775922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.535 [2024-11-02 11:47:16.775948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.535 qpair failed and we were unable to recover it. 00:35:16.535 [2024-11-02 11:47:16.776090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.535 [2024-11-02 11:47:16.776116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.535 qpair failed and we were unable to recover it. 00:35:16.535 [2024-11-02 11:47:16.776237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.535 [2024-11-02 11:47:16.776275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.535 qpair failed and we were unable to recover it. 
00:35:16.535 [2024-11-02 11:47:16.776393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.535 [2024-11-02 11:47:16.776418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.535 qpair failed and we were unable to recover it. 00:35:16.535 [2024-11-02 11:47:16.776536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.535 [2024-11-02 11:47:16.776573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.535 qpair failed and we were unable to recover it. 00:35:16.535 [2024-11-02 11:47:16.776725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.535 [2024-11-02 11:47:16.776751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.535 qpair failed and we were unable to recover it. 00:35:16.535 [2024-11-02 11:47:16.776872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.535 [2024-11-02 11:47:16.776897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.535 qpair failed and we were unable to recover it. 00:35:16.535 [2024-11-02 11:47:16.777041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.535 [2024-11-02 11:47:16.777067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.535 qpair failed and we were unable to recover it. 00:35:16.535 [2024-11-02 11:47:16.777183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.535 [2024-11-02 11:47:16.777211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.535 qpair failed and we were unable to recover it. 00:35:16.535 [2024-11-02 11:47:16.777349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.535 [2024-11-02 11:47:16.777375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.535 qpair failed and we were unable to recover it. 00:35:16.535 [2024-11-02 11:47:16.777526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.535 [2024-11-02 11:47:16.777551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.535 qpair failed and we were unable to recover it. 00:35:16.535 [2024-11-02 11:47:16.777687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.535 [2024-11-02 11:47:16.777713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.535 qpair failed and we were unable to recover it. 00:35:16.535 [2024-11-02 11:47:16.777891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.535 [2024-11-02 11:47:16.777923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.535 qpair failed and we were unable to recover it. 
00:35:16.535 [2024-11-02 11:47:16.778044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.535 [2024-11-02 11:47:16.778069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.535 qpair failed and we were unable to recover it. 00:35:16.535 [2024-11-02 11:47:16.778215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.535 [2024-11-02 11:47:16.778249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.535 qpair failed and we were unable to recover it. 00:35:16.535 [2024-11-02 11:47:16.778383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.535 [2024-11-02 11:47:16.778409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.535 qpair failed and we were unable to recover it. 00:35:16.535 [2024-11-02 11:47:16.778527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.535 [2024-11-02 11:47:16.778558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.535 qpair failed and we were unable to recover it. 00:35:16.535 [2024-11-02 11:47:16.778716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.535 [2024-11-02 11:47:16.778741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.535 qpair failed and we were unable to recover it. 00:35:16.535 [2024-11-02 11:47:16.778895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.535 [2024-11-02 11:47:16.778920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.535 qpair failed and we were unable to recover it. 00:35:16.535 [2024-11-02 11:47:16.779039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.535 [2024-11-02 11:47:16.779076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.535 qpair failed and we were unable to recover it. 00:35:16.535 [2024-11-02 11:47:16.779199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.535 [2024-11-02 11:47:16.779230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.535 qpair failed and we were unable to recover it. 00:35:16.535 [2024-11-02 11:47:16.779367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.535 [2024-11-02 11:47:16.779394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.535 qpair failed and we were unable to recover it. 00:35:16.535 [2024-11-02 11:47:16.779547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.535 [2024-11-02 11:47:16.779574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.535 qpair failed and we were unable to recover it. 
00:35:16.535 [2024-11-02 11:47:16.779727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.535 [2024-11-02 11:47:16.779753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.535 qpair failed and we were unable to recover it. 00:35:16.535 [2024-11-02 11:47:16.779895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.535 [2024-11-02 11:47:16.779921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.535 qpair failed and we were unable to recover it. 00:35:16.535 [2024-11-02 11:47:16.780057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.535 [2024-11-02 11:47:16.780082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.535 qpair failed and we were unable to recover it. 00:35:16.535 [2024-11-02 11:47:16.780209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.535 [2024-11-02 11:47:16.780236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.535 qpair failed and we were unable to recover it. 00:35:16.535 [2024-11-02 11:47:16.780383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.535 [2024-11-02 11:47:16.780409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.535 qpair failed and we were unable to recover it. 00:35:16.535 [2024-11-02 11:47:16.780533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.535 [2024-11-02 11:47:16.780559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.535 qpair failed and we were unable to recover it. 00:35:16.535 [2024-11-02 11:47:16.780684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.535 [2024-11-02 11:47:16.780710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.535 qpair failed and we were unable to recover it. 00:35:16.535 [2024-11-02 11:47:16.780848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.535 [2024-11-02 11:47:16.780873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.535 qpair failed and we were unable to recover it. 00:35:16.535 [2024-11-02 11:47:16.781000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.535 [2024-11-02 11:47:16.781026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.535 qpair failed and we were unable to recover it. 00:35:16.535 [2024-11-02 11:47:16.781170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.535 [2024-11-02 11:47:16.781195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.535 qpair failed and we were unable to recover it. 
00:35:16.535 [2024-11-02 11:47:16.781324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.535 [2024-11-02 11:47:16.781351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.535 qpair failed and we were unable to recover it. 00:35:16.535 [2024-11-02 11:47:16.781479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.535 [2024-11-02 11:47:16.781504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.535 qpair failed and we were unable to recover it. 00:35:16.535 [2024-11-02 11:47:16.781626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.535 [2024-11-02 11:47:16.781651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.536 qpair failed and we were unable to recover it. 00:35:16.536 [2024-11-02 11:47:16.781789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.536 [2024-11-02 11:47:16.781815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.536 qpair failed and we were unable to recover it. 00:35:16.536 [2024-11-02 11:47:16.781958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.536 [2024-11-02 11:47:16.781983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.536 qpair failed and we were unable to recover it. 00:35:16.536 [2024-11-02 11:47:16.782109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.536 [2024-11-02 11:47:16.782134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.536 qpair failed and we were unable to recover it. 00:35:16.536 [2024-11-02 11:47:16.782290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.536 [2024-11-02 11:47:16.782317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.536 qpair failed and we were unable to recover it. 00:35:16.536 [2024-11-02 11:47:16.782429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.536 [2024-11-02 11:47:16.782455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.536 qpair failed and we were unable to recover it. 00:35:16.536 [2024-11-02 11:47:16.782596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.536 [2024-11-02 11:47:16.782622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.536 qpair failed and we were unable to recover it. 00:35:16.536 [2024-11-02 11:47:16.782762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.536 [2024-11-02 11:47:16.782787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.536 qpair failed and we were unable to recover it. 
00:35:16.536 [2024-11-02 11:47:16.782926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.536 [2024-11-02 11:47:16.782968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.536 qpair failed and we were unable to recover it. 00:35:16.536 [2024-11-02 11:47:16.783109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.536 [2024-11-02 11:47:16.783137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.536 qpair failed and we were unable to recover it. 00:35:16.536 [2024-11-02 11:47:16.783291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.536 [2024-11-02 11:47:16.783319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.536 qpair failed and we were unable to recover it. 00:35:16.536 [2024-11-02 11:47:16.783473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.536 [2024-11-02 11:47:16.783499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.536 qpair failed and we were unable to recover it. 00:35:16.536 [2024-11-02 11:47:16.783668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.536 [2024-11-02 11:47:16.783696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.536 qpair failed and we were unable to recover it. 00:35:16.536 [2024-11-02 11:47:16.783812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.536 [2024-11-02 11:47:16.783838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.536 qpair failed and we were unable to recover it. 00:35:16.536 [2024-11-02 11:47:16.783960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.536 [2024-11-02 11:47:16.783986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.536 qpair failed and we were unable to recover it. 00:35:16.536 [2024-11-02 11:47:16.784101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.536 [2024-11-02 11:47:16.784127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.536 qpair failed and we were unable to recover it. 00:35:16.536 [2024-11-02 11:47:16.784273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.536 [2024-11-02 11:47:16.784299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.536 qpair failed and we were unable to recover it. 00:35:16.536 [2024-11-02 11:47:16.784424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.536 [2024-11-02 11:47:16.784451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.536 qpair failed and we were unable to recover it. 
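For reference, errno = 111 in the repeated connect() failures above is ECONNREFUSED on Linux: nothing was accepting TCP connections at 10.0.0.2 port 4420 while the initiator kept retrying. The mapping can be confirmed with a shell one-liner (illustrative only, not part of the test run):

    python3 -c 'import errno, os; print(errno.ECONNREFUSED, errno.errorcode[111], os.strerror(111))'
    # prints: 111 ECONNREFUSED Connection refused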
[... the connect() failed (errno = 111) / sock connection error (tqpair=0x7f9cc4000b90, addr=10.0.0.2, port=4420) / "qpair failed and we were unable to recover it." sequence keeps repeating here, interleaved with the test's own output: ...]
00:35:16.536 Malloc0
00:35:16.536 11:47:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:16.536 11:47:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:35:16.536 11:47:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:16.536 11:47:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
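The rpc_cmd call traced above issues SPDK's nvmf_create_transport JSON-RPC; outside the test harness the same request is normally sent with scripts/rpc.py. A minimal sketch is shown below (only the transport type is taken from the trace; the traced call also passes -o, which is omitted here rather than guessed at):

    ./scripts/rpc.py nvmf_create_transport -t tcp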
[... more connect() failed (errno = 111) / sock connection error (tqpair=0x7f9cc4000b90, addr=10.0.0.2, port=4420) / "qpair failed and we were unable to recover it." repetitions ...]
00:35:16.537 [2024-11-02 11:47:16.789340] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
[... the same qpair failure sequence continues ...]
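The "TCP Transport Init" notice, together with the Malloc0 bdev seen a little earlier, matches the usual target-side bring-up in SPDK's nvmf tests: create a malloc bdev, expose it through a subsystem, then listen on the TCP address the host is dialing. A hedged sketch of that sequence (the NQN, serial number, and bdev size are illustrative assumptions, not values read from this log):

    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420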
[... the connect() failed (errno = 111) / sock connection error (tqpair=0x7f9cc4000b90, addr=10.0.0.2, port=4420) / "qpair failed and we were unable to recover it." sequence keeps repeating through 11:47:16.796; only the timestamps differ ...]
00:35:16.538 [2024-11-02 11:47:16.796532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.538 [2024-11-02 11:47:16.796558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.538 qpair failed and we were unable to recover it. 00:35:16.538 [2024-11-02 11:47:16.796673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.538 [2024-11-02 11:47:16.796698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.538 qpair failed and we were unable to recover it. 00:35:16.538 [2024-11-02 11:47:16.796822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.538 [2024-11-02 11:47:16.796848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.538 qpair failed and we were unable to recover it. 00:35:16.538 [2024-11-02 11:47:16.797002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.538 [2024-11-02 11:47:16.797027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.538 qpair failed and we were unable to recover it. 00:35:16.538 [2024-11-02 11:47:16.797170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.538 [2024-11-02 11:47:16.797195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.538 qpair failed and we were unable to recover it. 00:35:16.538 [2024-11-02 11:47:16.797332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.538 [2024-11-02 11:47:16.797359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.538 qpair failed and we were unable to recover it. 00:35:16.538 [2024-11-02 11:47:16.797485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.538 [2024-11-02 11:47:16.797511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.538 qpair failed and we were unable to recover it. 00:35:16.538 11:47:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:16.538 [2024-11-02 11:47:16.797641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.538 [2024-11-02 11:47:16.797669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.538 qpair failed and we were unable to recover it. 
00:35:16.538 11:47:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:16.538 [2024-11-02 11:47:16.797812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.538 [2024-11-02 11:47:16.797843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.538 11:47:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:16.538 qpair failed and we were unable to recover it. 00:35:16.538 11:47:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:16.538 [2024-11-02 11:47:16.797990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.538 [2024-11-02 11:47:16.798015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.538 qpair failed and we were unable to recover it. 00:35:16.538 [2024-11-02 11:47:16.798136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.538 [2024-11-02 11:47:16.798163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.538 qpair failed and we were unable to recover it. 00:35:16.538 [2024-11-02 11:47:16.798302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.538 [2024-11-02 11:47:16.798328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.538 qpair failed and we were unable to recover it. 00:35:16.538 [2024-11-02 11:47:16.798451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.539 [2024-11-02 11:47:16.798476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.539 qpair failed and we were unable to recover it. 00:35:16.539 [2024-11-02 11:47:16.798610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.539 [2024-11-02 11:47:16.798636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.539 qpair failed and we were unable to recover it. 00:35:16.539 [2024-11-02 11:47:16.798751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.539 [2024-11-02 11:47:16.798777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.539 qpair failed and we were unable to recover it. 00:35:16.539 [2024-11-02 11:47:16.798906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.539 [2024-11-02 11:47:16.798932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.539 qpair failed and we were unable to recover it. 
00:35:16.539 [2024-11-02 11:47:16.799083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.539 [2024-11-02 11:47:16.799108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.539 qpair failed and we were unable to recover it. 00:35:16.539 [2024-11-02 11:47:16.799238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.539 [2024-11-02 11:47:16.799268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.539 qpair failed and we were unable to recover it. 00:35:16.539 [2024-11-02 11:47:16.799385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.539 [2024-11-02 11:47:16.799412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.539 qpair failed and we were unable to recover it. 00:35:16.539 [2024-11-02 11:47:16.799529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.539 [2024-11-02 11:47:16.799554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.539 qpair failed and we were unable to recover it. 00:35:16.539 [2024-11-02 11:47:16.799663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.539 [2024-11-02 11:47:16.799689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.539 qpair failed and we were unable to recover it. 00:35:16.539 [2024-11-02 11:47:16.799843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.539 [2024-11-02 11:47:16.799868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.539 qpair failed and we were unable to recover it. 00:35:16.539 [2024-11-02 11:47:16.800020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.539 [2024-11-02 11:47:16.800046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.539 qpair failed and we were unable to recover it. 00:35:16.539 [2024-11-02 11:47:16.800165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.539 [2024-11-02 11:47:16.800190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.539 qpair failed and we were unable to recover it. 00:35:16.539 [2024-11-02 11:47:16.800319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.539 [2024-11-02 11:47:16.800344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.539 qpair failed and we were unable to recover it. 00:35:16.539 [2024-11-02 11:47:16.800452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.539 [2024-11-02 11:47:16.800478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.539 qpair failed and we were unable to recover it. 
00:35:16.539 [2024-11-02 11:47:16.800626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.539 [2024-11-02 11:47:16.800651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.539 qpair failed and we were unable to recover it. 00:35:16.539 [2024-11-02 11:47:16.800768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.539 [2024-11-02 11:47:16.800793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.539 qpair failed and we were unable to recover it. 00:35:16.539 [2024-11-02 11:47:16.800911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.539 [2024-11-02 11:47:16.800937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.539 qpair failed and we were unable to recover it. 00:35:16.539 [2024-11-02 11:47:16.801047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.539 [2024-11-02 11:47:16.801072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.539 qpair failed and we were unable to recover it. 00:35:16.539 [2024-11-02 11:47:16.801229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.539 [2024-11-02 11:47:16.801276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.539 qpair failed and we were unable to recover it. 00:35:16.539 [2024-11-02 11:47:16.801420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.539 [2024-11-02 11:47:16.801448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.539 qpair failed and we were unable to recover it. 00:35:16.539 [2024-11-02 11:47:16.801608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.539 [2024-11-02 11:47:16.801637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.539 qpair failed and we were unable to recover it. 00:35:16.539 [2024-11-02 11:47:16.801785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.539 [2024-11-02 11:47:16.801811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.539 qpair failed and we were unable to recover it. 00:35:16.539 [2024-11-02 11:47:16.801933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.539 [2024-11-02 11:47:16.801960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.539 qpair failed and we were unable to recover it. 00:35:16.539 [2024-11-02 11:47:16.802090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.539 [2024-11-02 11:47:16.802115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.539 qpair failed and we were unable to recover it. 
00:35:16.539 [2024-11-02 11:47:16.802235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.539 [2024-11-02 11:47:16.802268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.539 qpair failed and we were unable to recover it. 00:35:16.539 [2024-11-02 11:47:16.802396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.539 [2024-11-02 11:47:16.802422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.539 qpair failed and we were unable to recover it. 00:35:16.539 [2024-11-02 11:47:16.802533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.539 [2024-11-02 11:47:16.802558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.539 qpair failed and we were unable to recover it. 00:35:16.539 [2024-11-02 11:47:16.802663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.539 [2024-11-02 11:47:16.802690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.539 qpair failed and we were unable to recover it. 00:35:16.539 [2024-11-02 11:47:16.802835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.539 [2024-11-02 11:47:16.802860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.539 qpair failed and we were unable to recover it. 00:35:16.539 [2024-11-02 11:47:16.802968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.539 [2024-11-02 11:47:16.802994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.539 qpair failed and we were unable to recover it. 00:35:16.539 [2024-11-02 11:47:16.803148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.539 [2024-11-02 11:47:16.803177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.539 qpair failed and we were unable to recover it. 00:35:16.539 [2024-11-02 11:47:16.803307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.539 [2024-11-02 11:47:16.803335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.539 qpair failed and we were unable to recover it. 00:35:16.539 [2024-11-02 11:47:16.803463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.539 [2024-11-02 11:47:16.803490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.539 qpair failed and we were unable to recover it. 00:35:16.539 [2024-11-02 11:47:16.803645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.539 [2024-11-02 11:47:16.803671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.539 qpair failed and we were unable to recover it. 
00:35:16.539 [2024-11-02 11:47:16.803788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.539 [2024-11-02 11:47:16.803814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.539 qpair failed and we were unable to recover it. 00:35:16.539 [2024-11-02 11:47:16.803929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.539 [2024-11-02 11:47:16.803955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.539 qpair failed and we were unable to recover it. 00:35:16.539 [2024-11-02 11:47:16.804079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.539 [2024-11-02 11:47:16.804106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.539 qpair failed and we were unable to recover it. 00:35:16.539 [2024-11-02 11:47:16.804234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.539 [2024-11-02 11:47:16.804268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.539 qpair failed and we were unable to recover it. 00:35:16.539 [2024-11-02 11:47:16.804391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.539 [2024-11-02 11:47:16.804417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.539 qpair failed and we were unable to recover it. 00:35:16.540 [2024-11-02 11:47:16.804540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.540 [2024-11-02 11:47:16.804567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.540 qpair failed and we were unable to recover it. 00:35:16.540 [2024-11-02 11:47:16.804677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.540 [2024-11-02 11:47:16.804704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.540 qpair failed and we were unable to recover it. 00:35:16.540 [2024-11-02 11:47:16.804880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.540 [2024-11-02 11:47:16.804906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.540 qpair failed and we were unable to recover it. 00:35:16.540 [2024-11-02 11:47:16.805033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.540 [2024-11-02 11:47:16.805059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.540 qpair failed and we were unable to recover it. 00:35:16.540 [2024-11-02 11:47:16.805212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.540 [2024-11-02 11:47:16.805240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.540 qpair failed and we were unable to recover it. 
00:35:16.540 [2024-11-02 11:47:16.805377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.540 [2024-11-02 11:47:16.805404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.540 qpair failed and we were unable to recover it. 00:35:16.540 [2024-11-02 11:47:16.805524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.540 [2024-11-02 11:47:16.805551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.540 qpair failed and we were unable to recover it. 00:35:16.540 11:47:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:16.540 [2024-11-02 11:47:16.805692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.540 11:47:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:16.540 [2024-11-02 11:47:16.805719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.540 qpair failed and we were unable to recover it. 00:35:16.540 11:47:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:16.540 [2024-11-02 11:47:16.805865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.540 [2024-11-02 11:47:16.805896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.540 qpair failed and we were unable to recover it. 00:35:16.540 11:47:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:16.540 [2024-11-02 11:47:16.806038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.540 [2024-11-02 11:47:16.806064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.540 qpair failed and we were unable to recover it. 00:35:16.540 [2024-11-02 11:47:16.806204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.540 [2024-11-02 11:47:16.806230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.540 qpair failed and we were unable to recover it. 00:35:16.540 [2024-11-02 11:47:16.806363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.540 [2024-11-02 11:47:16.806391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.540 qpair failed and we were unable to recover it. 00:35:16.540 [2024-11-02 11:47:16.806512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.540 [2024-11-02 11:47:16.806547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.540 qpair failed and we were unable to recover it. 
00:35:16.540 [2024-11-02 11:47:16.806667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.540 [2024-11-02 11:47:16.806693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.540 qpair failed and we were unable to recover it. 00:35:16.540 [2024-11-02 11:47:16.806847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.540 [2024-11-02 11:47:16.806873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.540 qpair failed and we were unable to recover it. 00:35:16.540 [2024-11-02 11:47:16.806985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.540 [2024-11-02 11:47:16.807011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.540 qpair failed and we were unable to recover it. 00:35:16.540 [2024-11-02 11:47:16.807128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.540 [2024-11-02 11:47:16.807155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.540 qpair failed and we were unable to recover it. 00:35:16.540 [2024-11-02 11:47:16.807275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.540 [2024-11-02 11:47:16.807302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.540 qpair failed and we were unable to recover it. 00:35:16.540 [2024-11-02 11:47:16.807419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.540 [2024-11-02 11:47:16.807446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.540 qpair failed and we were unable to recover it. 00:35:16.540 [2024-11-02 11:47:16.807568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.540 [2024-11-02 11:47:16.807594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.540 qpair failed and we were unable to recover it. 00:35:16.540 [2024-11-02 11:47:16.807741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.540 [2024-11-02 11:47:16.807768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.540 qpair failed and we were unable to recover it. 00:35:16.540 [2024-11-02 11:47:16.807912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.540 [2024-11-02 11:47:16.807943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.540 qpair failed and we were unable to recover it. 00:35:16.540 [2024-11-02 11:47:16.808064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.540 [2024-11-02 11:47:16.808091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.540 qpair failed and we were unable to recover it. 
00:35:16.540 [2024-11-02 11:47:16.808238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.540 [2024-11-02 11:47:16.808269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.540 qpair failed and we were unable to recover it. 00:35:16.540 [2024-11-02 11:47:16.808395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.540 [2024-11-02 11:47:16.808421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.540 qpair failed and we were unable to recover it. 00:35:16.540 [2024-11-02 11:47:16.808534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.540 [2024-11-02 11:47:16.808560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.540 qpair failed and we were unable to recover it. 00:35:16.540 [2024-11-02 11:47:16.808673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.540 [2024-11-02 11:47:16.808699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.540 qpair failed and we were unable to recover it. 00:35:16.540 [2024-11-02 11:47:16.808811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.540 [2024-11-02 11:47:16.808838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.540 qpair failed and we were unable to recover it. 00:35:16.540 [2024-11-02 11:47:16.808985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.540 [2024-11-02 11:47:16.809012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.540 qpair failed and we were unable to recover it. 00:35:16.540 [2024-11-02 11:47:16.809130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.540 [2024-11-02 11:47:16.809156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.540 qpair failed and we were unable to recover it. 00:35:16.540 [2024-11-02 11:47:16.809276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.540 [2024-11-02 11:47:16.809303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.540 qpair failed and we were unable to recover it. 00:35:16.540 [2024-11-02 11:47:16.809433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.540 [2024-11-02 11:47:16.809460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.540 qpair failed and we were unable to recover it. 00:35:16.540 [2024-11-02 11:47:16.809576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.540 [2024-11-02 11:47:16.809603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.540 qpair failed and we were unable to recover it. 
00:35:16.540 [2024-11-02 11:47:16.809717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.540 [2024-11-02 11:47:16.809743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.540 qpair failed and we were unable to recover it. 00:35:16.540 [2024-11-02 11:47:16.809877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.540 [2024-11-02 11:47:16.809903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.540 qpair failed and we were unable to recover it. 00:35:16.540 [2024-11-02 11:47:16.810061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.540 [2024-11-02 11:47:16.810090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.540 qpair failed and we were unable to recover it. 00:35:16.540 [2024-11-02 11:47:16.810242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.540 [2024-11-02 11:47:16.810273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.541 qpair failed and we were unable to recover it. 00:35:16.541 [2024-11-02 11:47:16.810390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.541 [2024-11-02 11:47:16.810416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.541 qpair failed and we were unable to recover it. 00:35:16.541 [2024-11-02 11:47:16.810541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.541 [2024-11-02 11:47:16.810567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.541 qpair failed and we were unable to recover it. 00:35:16.541 [2024-11-02 11:47:16.810678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.541 [2024-11-02 11:47:16.810703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.541 qpair failed and we were unable to recover it. 00:35:16.541 [2024-11-02 11:47:16.810832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.541 [2024-11-02 11:47:16.810858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.541 qpair failed and we were unable to recover it. 00:35:16.541 [2024-11-02 11:47:16.811012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.541 [2024-11-02 11:47:16.811037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.541 qpair failed and we were unable to recover it. 00:35:16.541 [2024-11-02 11:47:16.811191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.541 [2024-11-02 11:47:16.811216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.541 qpair failed and we were unable to recover it. 
00:35:16.541 [2024-11-02 11:47:16.811339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.541 [2024-11-02 11:47:16.811366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.541 qpair failed and we were unable to recover it. 00:35:16.541 [2024-11-02 11:47:16.811490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.541 [2024-11-02 11:47:16.811515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.541 qpair failed and we were unable to recover it. 00:35:16.541 [2024-11-02 11:47:16.811649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.541 [2024-11-02 11:47:16.811674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.541 qpair failed and we were unable to recover it. 00:35:16.541 [2024-11-02 11:47:16.811805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.541 [2024-11-02 11:47:16.811830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.541 qpair failed and we were unable to recover it. 00:35:16.541 [2024-11-02 11:47:16.811952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.541 [2024-11-02 11:47:16.811977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.541 qpair failed and we were unable to recover it. 00:35:16.541 [2024-11-02 11:47:16.812019] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dea630 (9): Bad file descriptor 00:35:16.541 [2024-11-02 11:47:16.812216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.541 [2024-11-02 11:47:16.812275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:16.541 qpair failed and we were unable to recover it. 00:35:16.541 [2024-11-02 11:47:16.812415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.541 [2024-11-02 11:47:16.812442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:16.541 qpair failed and we were unable to recover it. 00:35:16.541 [2024-11-02 11:47:16.812556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.541 [2024-11-02 11:47:16.812584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.541 qpair failed and we were unable to recover it. 00:35:16.541 [2024-11-02 11:47:16.812694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.541 [2024-11-02 11:47:16.812721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.541 qpair failed and we were unable to recover it. 
00:35:16.541 [2024-11-02 11:47:16.812837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.541 [2024-11-02 11:47:16.812863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.541 qpair failed and we were unable to recover it. 00:35:16.541 [2024-11-02 11:47:16.812989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.541 [2024-11-02 11:47:16.813015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.541 qpair failed and we were unable to recover it. 00:35:16.541 [2024-11-02 11:47:16.813135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.541 [2024-11-02 11:47:16.813162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.541 qpair failed and we were unable to recover it. 00:35:16.541 [2024-11-02 11:47:16.813287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.541 [2024-11-02 11:47:16.813315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.541 qpair failed and we were unable to recover it. 00:35:16.541 [2024-11-02 11:47:16.813437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.541 [2024-11-02 11:47:16.813464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.541 qpair failed and we were unable to recover it. 00:35:16.541 11:47:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:16.541 [2024-11-02 11:47:16.813608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.541 [2024-11-02 11:47:16.813635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.541 qpair failed and we were unable to recover it. 00:35:16.541 11:47:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:16.541 [2024-11-02 11:47:16.813757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.541 [2024-11-02 11:47:16.813784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.541 qpair failed and we were unable to recover it. 00:35:16.541 11:47:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:16.541 [2024-11-02 11:47:16.813897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.541 [2024-11-02 11:47:16.813928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.541 qpair failed and we were unable to recover it. 
00:35:16.541 11:47:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:16.541 [2024-11-02 11:47:16.814038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.541 [2024-11-02 11:47:16.814064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.541 qpair failed and we were unable to recover it. 00:35:16.541 [2024-11-02 11:47:16.814230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.541 [2024-11-02 11:47:16.814262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.541 qpair failed and we were unable to recover it. 00:35:16.541 [2024-11-02 11:47:16.814383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.541 [2024-11-02 11:47:16.814409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.541 qpair failed and we were unable to recover it. 00:35:16.541 [2024-11-02 11:47:16.814527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.541 [2024-11-02 11:47:16.814559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.541 qpair failed and we were unable to recover it. 00:35:16.541 [2024-11-02 11:47:16.814681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.541 [2024-11-02 11:47:16.814707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.541 qpair failed and we were unable to recover it. 00:35:16.541 [2024-11-02 11:47:16.814854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.541 [2024-11-02 11:47:16.814880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.541 qpair failed and we were unable to recover it. 00:35:16.541 [2024-11-02 11:47:16.814991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.541 [2024-11-02 11:47:16.815017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.541 qpair failed and we were unable to recover it. 00:35:16.541 [2024-11-02 11:47:16.815139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.541 [2024-11-02 11:47:16.815165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.541 qpair failed and we were unable to recover it. 00:35:16.542 [2024-11-02 11:47:16.815292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.542 [2024-11-02 11:47:16.815319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.542 qpair failed and we were unable to recover it. 
00:35:16.542 [2024-11-02 11:47:16.815463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.542 [2024-11-02 11:47:16.815489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.542 qpair failed and we were unable to recover it. 00:35:16.542 [2024-11-02 11:47:16.815606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.542 [2024-11-02 11:47:16.815631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.542 qpair failed and we were unable to recover it. 00:35:16.542 [2024-11-02 11:47:16.815757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.542 [2024-11-02 11:47:16.815784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.542 qpair failed and we were unable to recover it. 00:35:16.542 [2024-11-02 11:47:16.815936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.542 [2024-11-02 11:47:16.815967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.542 qpair failed and we were unable to recover it. 00:35:16.542 [2024-11-02 11:47:16.816113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.542 [2024-11-02 11:47:16.816139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc0000b90 with addr=10.0.0.2, port=4420 00:35:16.542 qpair failed and we were unable to recover it. 00:35:16.542 [2024-11-02 11:47:16.816269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.542 [2024-11-02 11:47:16.816306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ccc000b90 with addr=10.0.0.2, port=4420 00:35:16.542 qpair failed and we were unable to recover it. 00:35:16.542 [2024-11-02 11:47:16.816441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.542 [2024-11-02 11:47:16.816470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.542 qpair failed and we were unable to recover it. 00:35:16.542 [2024-11-02 11:47:16.816603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.542 [2024-11-02 11:47:16.816629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.542 qpair failed and we were unable to recover it. 00:35:16.542 [2024-11-02 11:47:16.816772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.542 [2024-11-02 11:47:16.816799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.542 qpair failed and we were unable to recover it. 00:35:16.542 [2024-11-02 11:47:16.816944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.542 [2024-11-02 11:47:16.816970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.542 qpair failed and we were unable to recover it. 
00:35:16.542 [2024-11-02 11:47:16.817109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.542 [2024-11-02 11:47:16.817134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.542 qpair failed and we were unable to recover it. 00:35:16.542 [2024-11-02 11:47:16.817250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.542 [2024-11-02 11:47:16.817283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.542 qpair failed and we were unable to recover it. 00:35:16.542 [2024-11-02 11:47:16.817395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.542 [2024-11-02 11:47:16.817422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cc4000b90 with addr=10.0.0.2, port=4420 00:35:16.542 qpair failed and we were unable to recover it. 00:35:16.542 [2024-11-02 11:47:16.817576] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:16.542 [2024-11-02 11:47:16.820235] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:16.542 [2024-11-02 11:47:16.820439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:16.542 [2024-11-02 11:47:16.820473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:16.542 [2024-11-02 11:47:16.820489] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:16.542 [2024-11-02 11:47:16.820503] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:16.542 [2024-11-02 11:47:16.820547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:16.542 qpair failed and we were unable to recover it. 
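(Annotation, not part of the captured log: the rpc_cmd traces interleaved above set up the NVMe/TCP target that has just logged "Listening on 10.0.0.2 port 4420". A stand-alone sketch of that same sequence, assuming rpc_cmd is the harness's wrapper around scripts/rpc.py and that the Malloc0 bdev and the TCP transport already exist in the running target:)

    # create the subsystem, allow any host (-a), set its serial number (-s)
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    # expose the Malloc0 bdev as a namespace of that subsystem
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # start the NVMe/TCP listener the notice above refers to
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420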
00:35:16.542 11:47:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:16.542 11:47:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:16.542 11:47:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:16.542 11:47:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:16.542 11:47:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:16.542 11:47:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3981713 00:35:16.542 [2024-11-02 11:47:16.829997] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:16.542 [2024-11-02 11:47:16.830122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:16.542 [2024-11-02 11:47:16.830150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:16.542 [2024-11-02 11:47:16.830165] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:16.542 [2024-11-02 11:47:16.830178] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:16.542 [2024-11-02 11:47:16.830208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:16.542 qpair failed and we were unable to recover it. 00:35:16.542 [2024-11-02 11:47:16.840052] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:16.542 [2024-11-02 11:47:16.840176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:16.542 [2024-11-02 11:47:16.840204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:16.542 [2024-11-02 11:47:16.840221] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:16.542 [2024-11-02 11:47:16.840236] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:16.542 [2024-11-02 11:47:16.840275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:16.542 qpair failed and we were unable to recover it. 
00:35:16.542 [2024-11-02 11:47:16.850099] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:16.542 [2024-11-02 11:47:16.850232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:16.542 [2024-11-02 11:47:16.850268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:16.542 [2024-11-02 11:47:16.850285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:16.542 [2024-11-02 11:47:16.850299] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:16.542 [2024-11-02 11:47:16.850329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:16.542 qpair failed and we were unable to recover it. 00:35:16.542 [2024-11-02 11:47:16.859986] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:16.542 [2024-11-02 11:47:16.860110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:16.542 [2024-11-02 11:47:16.860138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:16.542 [2024-11-02 11:47:16.860158] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:16.542 [2024-11-02 11:47:16.860173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:16.542 [2024-11-02 11:47:16.860201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:16.542 qpair failed and we were unable to recover it. 00:35:16.542 [2024-11-02 11:47:16.869997] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:16.542 [2024-11-02 11:47:16.870115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:16.542 [2024-11-02 11:47:16.870141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:16.542 [2024-11-02 11:47:16.870155] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:16.542 [2024-11-02 11:47:16.870169] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:16.542 [2024-11-02 11:47:16.870198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:16.542 qpair failed and we were unable to recover it. 
00:35:16.804 [2024-11-02 11:47:16.880021] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:16.804 [2024-11-02 11:47:16.880138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:16.804 [2024-11-02 11:47:16.880165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:16.804 [2024-11-02 11:47:16.880180] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:16.804 [2024-11-02 11:47:16.880194] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:16.804 [2024-11-02 11:47:16.880223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:16.804 qpair failed and we were unable to recover it. 00:35:16.804 [2024-11-02 11:47:16.890065] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:16.804 [2024-11-02 11:47:16.890193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:16.804 [2024-11-02 11:47:16.890220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:16.804 [2024-11-02 11:47:16.890235] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:16.804 [2024-11-02 11:47:16.890248] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:16.804 [2024-11-02 11:47:16.890287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:16.804 qpair failed and we were unable to recover it. 00:35:16.804 [2024-11-02 11:47:16.900224] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:16.804 [2024-11-02 11:47:16.900355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:16.804 [2024-11-02 11:47:16.900381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:16.804 [2024-11-02 11:47:16.900396] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:16.804 [2024-11-02 11:47:16.900409] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:16.804 [2024-11-02 11:47:16.900438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:16.804 qpair failed and we were unable to recover it. 
00:35:16.804 [2024-11-02 11:47:16.910228] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:16.804 [2024-11-02 11:47:16.910362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:16.805 [2024-11-02 11:47:16.910389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:16.805 [2024-11-02 11:47:16.910404] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:16.805 [2024-11-02 11:47:16.910417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:16.805 [2024-11-02 11:47:16.910446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:16.805 qpair failed and we were unable to recover it. 00:35:16.805 [2024-11-02 11:47:16.920182] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:16.805 [2024-11-02 11:47:16.920313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:16.805 [2024-11-02 11:47:16.920340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:16.805 [2024-11-02 11:47:16.920354] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:16.805 [2024-11-02 11:47:16.920370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:16.805 [2024-11-02 11:47:16.920399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:16.805 qpair failed and we were unable to recover it. 00:35:16.805 [2024-11-02 11:47:16.930194] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:16.805 [2024-11-02 11:47:16.930320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:16.805 [2024-11-02 11:47:16.930347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:16.805 [2024-11-02 11:47:16.930361] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:16.805 [2024-11-02 11:47:16.930374] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:16.805 [2024-11-02 11:47:16.930402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:16.805 qpair failed and we were unable to recover it. 
00:35:16.805 [2024-11-02 11:47:16.940222] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:16.805 [2024-11-02 11:47:16.940394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:16.805 [2024-11-02 11:47:16.940420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:16.805 [2024-11-02 11:47:16.940435] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:16.805 [2024-11-02 11:47:16.940447] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:16.805 [2024-11-02 11:47:16.940475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:16.805 qpair failed and we were unable to recover it. 00:35:16.805 [2024-11-02 11:47:16.950274] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:16.805 [2024-11-02 11:47:16.950417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:16.805 [2024-11-02 11:47:16.950443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:16.805 [2024-11-02 11:47:16.950457] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:16.805 [2024-11-02 11:47:16.950470] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:16.805 [2024-11-02 11:47:16.950499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:16.805 qpair failed and we were unable to recover it. 00:35:16.805 [2024-11-02 11:47:16.960282] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:16.805 [2024-11-02 11:47:16.960409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:16.805 [2024-11-02 11:47:16.960436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:16.805 [2024-11-02 11:47:16.960450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:16.805 [2024-11-02 11:47:16.960463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:16.805 [2024-11-02 11:47:16.960492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:16.805 qpair failed and we were unable to recover it. 
00:35:16.805 [2024-11-02 11:47:16.970305] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:16.805 [2024-11-02 11:47:16.970429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:16.805 [2024-11-02 11:47:16.970456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:16.805 [2024-11-02 11:47:16.970470] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:16.805 [2024-11-02 11:47:16.970482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:16.805 [2024-11-02 11:47:16.970511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:16.805 qpair failed and we were unable to recover it. 00:35:16.805 [2024-11-02 11:47:16.980315] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:16.805 [2024-11-02 11:47:16.980434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:16.805 [2024-11-02 11:47:16.980461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:16.805 [2024-11-02 11:47:16.980475] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:16.805 [2024-11-02 11:47:16.980489] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:16.805 [2024-11-02 11:47:16.980517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:16.805 qpair failed and we were unable to recover it. 00:35:16.805 [2024-11-02 11:47:16.990355] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:16.805 [2024-11-02 11:47:16.990488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:16.805 [2024-11-02 11:47:16.990515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:16.805 [2024-11-02 11:47:16.990538] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:16.805 [2024-11-02 11:47:16.990552] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:16.805 [2024-11-02 11:47:16.990580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:16.805 qpair failed and we were unable to recover it. 
00:35:16.805 [2024-11-02 11:47:17.000385] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:16.805 [2024-11-02 11:47:17.000506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:16.805 [2024-11-02 11:47:17.000532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:16.805 [2024-11-02 11:47:17.000546] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:16.805 [2024-11-02 11:47:17.000559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:16.805 [2024-11-02 11:47:17.000587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:16.805 qpair failed and we were unable to recover it. 00:35:16.805 [2024-11-02 11:47:17.010439] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:16.805 [2024-11-02 11:47:17.010572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:16.805 [2024-11-02 11:47:17.010598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:16.805 [2024-11-02 11:47:17.010613] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:16.805 [2024-11-02 11:47:17.010626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:16.805 [2024-11-02 11:47:17.010655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:16.805 qpair failed and we were unable to recover it. 00:35:16.805 [2024-11-02 11:47:17.020427] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:16.805 [2024-11-02 11:47:17.020546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:16.805 [2024-11-02 11:47:17.020571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:16.805 [2024-11-02 11:47:17.020585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:16.805 [2024-11-02 11:47:17.020598] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:16.805 [2024-11-02 11:47:17.020626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:16.805 qpair failed and we were unable to recover it. 
00:35:16.805 [2024-11-02 11:47:17.030464] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:16.805 [2024-11-02 11:47:17.030588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:16.805 [2024-11-02 11:47:17.030614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:16.805 [2024-11-02 11:47:17.030628] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:16.805 [2024-11-02 11:47:17.030641] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:16.805 [2024-11-02 11:47:17.030670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:16.805 qpair failed and we were unable to recover it. 00:35:16.805 [2024-11-02 11:47:17.040509] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:16.805 [2024-11-02 11:47:17.040624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:16.805 [2024-11-02 11:47:17.040650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:16.805 [2024-11-02 11:47:17.040664] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:16.806 [2024-11-02 11:47:17.040677] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:16.806 [2024-11-02 11:47:17.040705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:16.806 qpair failed and we were unable to recover it. 00:35:16.806 [2024-11-02 11:47:17.050528] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:16.806 [2024-11-02 11:47:17.050653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:16.806 [2024-11-02 11:47:17.050679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:16.806 [2024-11-02 11:47:17.050693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:16.806 [2024-11-02 11:47:17.050706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:16.806 [2024-11-02 11:47:17.050734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:16.806 qpair failed and we were unable to recover it. 
00:35:16.806 [2024-11-02 11:47:17.060539] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:16.806 [2024-11-02 11:47:17.060659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:16.806 [2024-11-02 11:47:17.060685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:16.806 [2024-11-02 11:47:17.060699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:16.806 [2024-11-02 11:47:17.060712] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:16.806 [2024-11-02 11:47:17.060740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:16.806 qpair failed and we were unable to recover it. 00:35:16.806 [2024-11-02 11:47:17.070605] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:16.806 [2024-11-02 11:47:17.070727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:16.806 [2024-11-02 11:47:17.070753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:16.806 [2024-11-02 11:47:17.070767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:16.806 [2024-11-02 11:47:17.070781] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:16.806 [2024-11-02 11:47:17.070809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:16.806 qpair failed and we were unable to recover it. 00:35:16.806 [2024-11-02 11:47:17.080636] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:16.806 [2024-11-02 11:47:17.080759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:16.806 [2024-11-02 11:47:17.080785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:16.806 [2024-11-02 11:47:17.080799] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:16.806 [2024-11-02 11:47:17.080813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:16.806 [2024-11-02 11:47:17.080843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:16.806 qpair failed and we were unable to recover it. 
00:35:16.806 [2024-11-02 11:47:17.090669] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:16.806 [2024-11-02 11:47:17.090790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:16.806 [2024-11-02 11:47:17.090816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:16.806 [2024-11-02 11:47:17.090830] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:16.806 [2024-11-02 11:47:17.090843] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:16.806 [2024-11-02 11:47:17.090871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:16.806 qpair failed and we were unable to recover it. 00:35:16.806 [2024-11-02 11:47:17.100715] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:16.806 [2024-11-02 11:47:17.100832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:16.806 [2024-11-02 11:47:17.100857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:16.806 [2024-11-02 11:47:17.100871] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:16.806 [2024-11-02 11:47:17.100884] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:16.806 [2024-11-02 11:47:17.100913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:16.806 qpair failed and we were unable to recover it. 00:35:16.806 [2024-11-02 11:47:17.110748] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:16.806 [2024-11-02 11:47:17.110873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:16.806 [2024-11-02 11:47:17.110898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:16.806 [2024-11-02 11:47:17.110912] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:16.806 [2024-11-02 11:47:17.110925] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:16.806 [2024-11-02 11:47:17.110955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:16.806 qpair failed and we were unable to recover it. 
00:35:16.806 [2024-11-02 11:47:17.120714] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:16.806 [2024-11-02 11:47:17.120830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:16.806 [2024-11-02 11:47:17.120855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:16.806 [2024-11-02 11:47:17.120875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:16.806 [2024-11-02 11:47:17.120888] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:16.806 [2024-11-02 11:47:17.120918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:16.806 qpair failed and we were unable to recover it. 00:35:16.806 [2024-11-02 11:47:17.130882] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:16.806 [2024-11-02 11:47:17.131011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:16.806 [2024-11-02 11:47:17.131037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:16.806 [2024-11-02 11:47:17.131054] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:16.806 [2024-11-02 11:47:17.131068] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:16.806 [2024-11-02 11:47:17.131096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:16.806 qpair failed and we were unable to recover it. 00:35:16.806 [2024-11-02 11:47:17.140768] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:16.806 [2024-11-02 11:47:17.140888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:16.806 [2024-11-02 11:47:17.140914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:16.806 [2024-11-02 11:47:17.140928] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:16.806 [2024-11-02 11:47:17.140942] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:16.806 [2024-11-02 11:47:17.140971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:16.806 qpair failed and we were unable to recover it. 
00:35:16.806 [2024-11-02 11:47:17.150813] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:16.806 [2024-11-02 11:47:17.150926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:16.806 [2024-11-02 11:47:17.150951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:16.806 [2024-11-02 11:47:17.150966] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:16.806 [2024-11-02 11:47:17.150979] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:16.806 [2024-11-02 11:47:17.151007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:16.806 qpair failed and we were unable to recover it. 00:35:16.806 [2024-11-02 11:47:17.160837] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:16.806 [2024-11-02 11:47:17.160958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:16.806 [2024-11-02 11:47:17.160984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:16.806 [2024-11-02 11:47:17.160998] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:16.806 [2024-11-02 11:47:17.161011] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:16.806 [2024-11-02 11:47:17.161040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:16.806 qpair failed and we were unable to recover it. 00:35:16.806 [2024-11-02 11:47:17.170872] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:16.806 [2024-11-02 11:47:17.170998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:16.806 [2024-11-02 11:47:17.171023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:16.806 [2024-11-02 11:47:17.171038] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:16.806 [2024-11-02 11:47:17.171051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:16.806 [2024-11-02 11:47:17.171078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:16.806 qpair failed and we were unable to recover it. 
00:35:16.807 [2024-11-02 11:47:17.180921] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:16.807 [2024-11-02 11:47:17.181059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:16.807 [2024-11-02 11:47:17.181084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:16.807 [2024-11-02 11:47:17.181098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:16.807 [2024-11-02 11:47:17.181111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:16.807 [2024-11-02 11:47:17.181139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:16.807 qpair failed and we were unable to recover it. 00:35:16.807 [2024-11-02 11:47:17.190940] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:16.807 [2024-11-02 11:47:17.191069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:16.807 [2024-11-02 11:47:17.191095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:16.807 [2024-11-02 11:47:17.191110] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:16.807 [2024-11-02 11:47:17.191123] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:16.807 [2024-11-02 11:47:17.191151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:16.807 qpair failed and we were unable to recover it. 00:35:16.807 [2024-11-02 11:47:17.200968] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:16.807 [2024-11-02 11:47:17.201107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:16.807 [2024-11-02 11:47:17.201133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:16.807 [2024-11-02 11:47:17.201147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:16.807 [2024-11-02 11:47:17.201160] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:16.807 [2024-11-02 11:47:17.201189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:16.807 qpair failed and we were unable to recover it. 
00:35:17.067 [2024-11-02 11:47:17.211004] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.067 [2024-11-02 11:47:17.211139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.067 [2024-11-02 11:47:17.211165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.067 [2024-11-02 11:47:17.211180] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.067 [2024-11-02 11:47:17.211193] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.067 [2024-11-02 11:47:17.211222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.067 qpair failed and we were unable to recover it. 00:35:17.067 [2024-11-02 11:47:17.220998] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.067 [2024-11-02 11:47:17.221121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.067 [2024-11-02 11:47:17.221146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.067 [2024-11-02 11:47:17.221161] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.067 [2024-11-02 11:47:17.221174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.067 [2024-11-02 11:47:17.221204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.067 qpair failed and we were unable to recover it. 00:35:17.067 [2024-11-02 11:47:17.231043] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.067 [2024-11-02 11:47:17.231164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.067 [2024-11-02 11:47:17.231190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.067 [2024-11-02 11:47:17.231205] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.067 [2024-11-02 11:47:17.231217] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.067 [2024-11-02 11:47:17.231245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.067 qpair failed and we were unable to recover it. 
00:35:17.067 [2024-11-02 11:47:17.241095] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.067 [2024-11-02 11:47:17.241262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.067 [2024-11-02 11:47:17.241288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.067 [2024-11-02 11:47:17.241302] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.067 [2024-11-02 11:47:17.241315] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.067 [2024-11-02 11:47:17.241344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.067 qpair failed and we were unable to recover it. 00:35:17.067 [2024-11-02 11:47:17.251098] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.067 [2024-11-02 11:47:17.251231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.067 [2024-11-02 11:47:17.251265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.067 [2024-11-02 11:47:17.251288] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.067 [2024-11-02 11:47:17.251302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.067 [2024-11-02 11:47:17.251331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.067 qpair failed and we were unable to recover it. 00:35:17.067 [2024-11-02 11:47:17.261116] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.067 [2024-11-02 11:47:17.261252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.067 [2024-11-02 11:47:17.261284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.067 [2024-11-02 11:47:17.261299] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.067 [2024-11-02 11:47:17.261312] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.067 [2024-11-02 11:47:17.261351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.067 qpair failed and we were unable to recover it. 
00:35:17.067 [2024-11-02 11:47:17.271146] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.067 [2024-11-02 11:47:17.271281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.067 [2024-11-02 11:47:17.271307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.067 [2024-11-02 11:47:17.271321] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.067 [2024-11-02 11:47:17.271335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.067 [2024-11-02 11:47:17.271364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.067 qpair failed and we were unable to recover it. 00:35:17.067 [2024-11-02 11:47:17.281173] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.067 [2024-11-02 11:47:17.281301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.067 [2024-11-02 11:47:17.281328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.067 [2024-11-02 11:47:17.281342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.067 [2024-11-02 11:47:17.281357] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.067 [2024-11-02 11:47:17.281387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.067 qpair failed and we were unable to recover it. 00:35:17.067 [2024-11-02 11:47:17.291213] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.067 [2024-11-02 11:47:17.291347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.067 [2024-11-02 11:47:17.291373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.067 [2024-11-02 11:47:17.291387] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.067 [2024-11-02 11:47:17.291400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.067 [2024-11-02 11:47:17.291434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.068 qpair failed and we were unable to recover it. 
00:35:17.068 [2024-11-02 11:47:17.301243] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.068 [2024-11-02 11:47:17.301373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.068 [2024-11-02 11:47:17.301399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.068 [2024-11-02 11:47:17.301413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.068 [2024-11-02 11:47:17.301426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.068 [2024-11-02 11:47:17.301455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.068 qpair failed and we were unable to recover it. 00:35:17.068 [2024-11-02 11:47:17.311248] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.068 [2024-11-02 11:47:17.311379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.068 [2024-11-02 11:47:17.311405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.068 [2024-11-02 11:47:17.311419] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.068 [2024-11-02 11:47:17.311434] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.068 [2024-11-02 11:47:17.311464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.068 qpair failed and we were unable to recover it. 00:35:17.068 [2024-11-02 11:47:17.321275] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.068 [2024-11-02 11:47:17.321391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.068 [2024-11-02 11:47:17.321417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.068 [2024-11-02 11:47:17.321431] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.068 [2024-11-02 11:47:17.321444] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.068 [2024-11-02 11:47:17.321475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.068 qpair failed and we were unable to recover it. 
00:35:17.068 [2024-11-02 11:47:17.331320] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.068 [2024-11-02 11:47:17.331458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.068 [2024-11-02 11:47:17.331484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.068 [2024-11-02 11:47:17.331498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.068 [2024-11-02 11:47:17.331512] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.068 [2024-11-02 11:47:17.331540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.068 qpair failed and we were unable to recover it. 00:35:17.068 [2024-11-02 11:47:17.341360] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.068 [2024-11-02 11:47:17.341487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.068 [2024-11-02 11:47:17.341512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.068 [2024-11-02 11:47:17.341527] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.068 [2024-11-02 11:47:17.341540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.068 [2024-11-02 11:47:17.341569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.068 qpair failed and we were unable to recover it. 00:35:17.068 [2024-11-02 11:47:17.351383] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.068 [2024-11-02 11:47:17.351504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.068 [2024-11-02 11:47:17.351529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.068 [2024-11-02 11:47:17.351543] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.068 [2024-11-02 11:47:17.351556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.068 [2024-11-02 11:47:17.351587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.068 qpair failed and we were unable to recover it. 
00:35:17.068 [2024-11-02 11:47:17.361375] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.068 [2024-11-02 11:47:17.361491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.068 [2024-11-02 11:47:17.361516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.068 [2024-11-02 11:47:17.361530] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.068 [2024-11-02 11:47:17.361543] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.068 [2024-11-02 11:47:17.361572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.068 qpair failed and we were unable to recover it. 00:35:17.068 [2024-11-02 11:47:17.371443] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.068 [2024-11-02 11:47:17.371570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.068 [2024-11-02 11:47:17.371595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.068 [2024-11-02 11:47:17.371609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.068 [2024-11-02 11:47:17.371622] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.068 [2024-11-02 11:47:17.371650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.068 qpair failed and we were unable to recover it. 00:35:17.068 [2024-11-02 11:47:17.381474] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.068 [2024-11-02 11:47:17.381586] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.068 [2024-11-02 11:47:17.381612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.068 [2024-11-02 11:47:17.381632] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.068 [2024-11-02 11:47:17.381646] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.068 [2024-11-02 11:47:17.381674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.068 qpair failed and we were unable to recover it. 
00:35:17.068 [2024-11-02 11:47:17.391500] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.068 [2024-11-02 11:47:17.391627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.068 [2024-11-02 11:47:17.391653] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.068 [2024-11-02 11:47:17.391668] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.068 [2024-11-02 11:47:17.391684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.068 [2024-11-02 11:47:17.391714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.068 qpair failed and we were unable to recover it. 00:35:17.068 [2024-11-02 11:47:17.401485] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.068 [2024-11-02 11:47:17.401605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.068 [2024-11-02 11:47:17.401630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.068 [2024-11-02 11:47:17.401645] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.068 [2024-11-02 11:47:17.401658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.068 [2024-11-02 11:47:17.401688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.068 qpair failed and we were unable to recover it. 00:35:17.068 [2024-11-02 11:47:17.411554] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.068 [2024-11-02 11:47:17.411685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.068 [2024-11-02 11:47:17.411711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.068 [2024-11-02 11:47:17.411725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.068 [2024-11-02 11:47:17.411738] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.068 [2024-11-02 11:47:17.411766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.068 qpair failed and we were unable to recover it. 
00:35:17.068 [2024-11-02 11:47:17.421561] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.068 [2024-11-02 11:47:17.421680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.068 [2024-11-02 11:47:17.421706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.068 [2024-11-02 11:47:17.421720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.068 [2024-11-02 11:47:17.421733] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.068 [2024-11-02 11:47:17.421767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.068 qpair failed and we were unable to recover it. 00:35:17.068 [2024-11-02 11:47:17.431620] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.069 [2024-11-02 11:47:17.431740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.069 [2024-11-02 11:47:17.431765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.069 [2024-11-02 11:47:17.431779] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.069 [2024-11-02 11:47:17.431791] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.069 [2024-11-02 11:47:17.431818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.069 qpair failed and we were unable to recover it. 00:35:17.069 [2024-11-02 11:47:17.441642] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.069 [2024-11-02 11:47:17.441757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.069 [2024-11-02 11:47:17.441783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.069 [2024-11-02 11:47:17.441797] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.069 [2024-11-02 11:47:17.441810] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.069 [2024-11-02 11:47:17.441839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.069 qpair failed and we were unable to recover it. 
00:35:17.069 [2024-11-02 11:47:17.451673] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.069 [2024-11-02 11:47:17.451805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.069 [2024-11-02 11:47:17.451831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.069 [2024-11-02 11:47:17.451846] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.069 [2024-11-02 11:47:17.451862] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.069 [2024-11-02 11:47:17.451891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.069 qpair failed and we were unable to recover it. 00:35:17.069 [2024-11-02 11:47:17.461703] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.069 [2024-11-02 11:47:17.461827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.069 [2024-11-02 11:47:17.461853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.069 [2024-11-02 11:47:17.461867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.069 [2024-11-02 11:47:17.461881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.069 [2024-11-02 11:47:17.461909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.069 qpair failed and we were unable to recover it. 00:35:17.329 [2024-11-02 11:47:17.471693] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.329 [2024-11-02 11:47:17.471815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.329 [2024-11-02 11:47:17.471841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.329 [2024-11-02 11:47:17.471855] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.329 [2024-11-02 11:47:17.471867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.329 [2024-11-02 11:47:17.471895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.329 qpair failed and we were unable to recover it. 
00:35:17.329 [2024-11-02 11:47:17.481714] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.329 [2024-11-02 11:47:17.481827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.329 [2024-11-02 11:47:17.481854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.329 [2024-11-02 11:47:17.481869] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.329 [2024-11-02 11:47:17.481882] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.329 [2024-11-02 11:47:17.481912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.329 qpair failed and we were unable to recover it. 00:35:17.329 [2024-11-02 11:47:17.491795] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.329 [2024-11-02 11:47:17.491920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.329 [2024-11-02 11:47:17.491947] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.329 [2024-11-02 11:47:17.491961] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.329 [2024-11-02 11:47:17.491974] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.329 [2024-11-02 11:47:17.492002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.329 qpair failed and we were unable to recover it. 00:35:17.329 [2024-11-02 11:47:17.501805] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.329 [2024-11-02 11:47:17.501920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.329 [2024-11-02 11:47:17.501945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.329 [2024-11-02 11:47:17.501959] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.329 [2024-11-02 11:47:17.501973] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.329 [2024-11-02 11:47:17.502001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.329 qpair failed and we were unable to recover it. 
00:35:17.329 [2024-11-02 11:47:17.511866] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.329 [2024-11-02 11:47:17.512006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.329 [2024-11-02 11:47:17.512033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.329 [2024-11-02 11:47:17.512055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.329 [2024-11-02 11:47:17.512072] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.329 [2024-11-02 11:47:17.512103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.329 qpair failed and we were unable to recover it. 00:35:17.329 [2024-11-02 11:47:17.521831] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.329 [2024-11-02 11:47:17.521960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.329 [2024-11-02 11:47:17.521987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.329 [2024-11-02 11:47:17.522001] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.329 [2024-11-02 11:47:17.522014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.329 [2024-11-02 11:47:17.522043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.329 qpair failed and we were unable to recover it. 00:35:17.329 [2024-11-02 11:47:17.531910] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.329 [2024-11-02 11:47:17.532037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.329 [2024-11-02 11:47:17.532063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.329 [2024-11-02 11:47:17.532078] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.329 [2024-11-02 11:47:17.532090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.329 [2024-11-02 11:47:17.532118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.329 qpair failed and we were unable to recover it. 
00:35:17.329 [2024-11-02 11:47:17.541928] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.329 [2024-11-02 11:47:17.542045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.329 [2024-11-02 11:47:17.542070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.329 [2024-11-02 11:47:17.542085] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.329 [2024-11-02 11:47:17.542098] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.329 [2024-11-02 11:47:17.542126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.329 qpair failed and we were unable to recover it. 00:35:17.329 [2024-11-02 11:47:17.551947] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.329 [2024-11-02 11:47:17.552068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.329 [2024-11-02 11:47:17.552093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.329 [2024-11-02 11:47:17.552108] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.329 [2024-11-02 11:47:17.552121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.329 [2024-11-02 11:47:17.552155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.329 qpair failed and we were unable to recover it. 00:35:17.329 [2024-11-02 11:47:17.561994] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.329 [2024-11-02 11:47:17.562111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.329 [2024-11-02 11:47:17.562137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.329 [2024-11-02 11:47:17.562152] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.329 [2024-11-02 11:47:17.562165] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.329 [2024-11-02 11:47:17.562193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.329 qpair failed and we were unable to recover it. 
00:35:17.329 [2024-11-02 11:47:17.572013] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.329 [2024-11-02 11:47:17.572142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.329 [2024-11-02 11:47:17.572168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.329 [2024-11-02 11:47:17.572181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.329 [2024-11-02 11:47:17.572195] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.329 [2024-11-02 11:47:17.572222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.329 qpair failed and we were unable to recover it. 00:35:17.329 [2024-11-02 11:47:17.582071] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.329 [2024-11-02 11:47:17.582214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.329 [2024-11-02 11:47:17.582240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.329 [2024-11-02 11:47:17.582261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.329 [2024-11-02 11:47:17.582278] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.329 [2024-11-02 11:47:17.582308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.329 qpair failed and we were unable to recover it. 00:35:17.329 [2024-11-02 11:47:17.592101] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.329 [2024-11-02 11:47:17.592224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.329 [2024-11-02 11:47:17.592249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.330 [2024-11-02 11:47:17.592272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.330 [2024-11-02 11:47:17.592286] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.330 [2024-11-02 11:47:17.592314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.330 qpair failed and we were unable to recover it. 
00:35:17.330 [2024-11-02 11:47:17.602089] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.330 [2024-11-02 11:47:17.602228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.330 [2024-11-02 11:47:17.602261] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.330 [2024-11-02 11:47:17.602280] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.330 [2024-11-02 11:47:17.602294] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.330 [2024-11-02 11:47:17.602322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.330 qpair failed and we were unable to recover it. 00:35:17.330 [2024-11-02 11:47:17.612162] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.330 [2024-11-02 11:47:17.612295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.330 [2024-11-02 11:47:17.612321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.330 [2024-11-02 11:47:17.612335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.330 [2024-11-02 11:47:17.612348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.330 [2024-11-02 11:47:17.612376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.330 qpair failed and we were unable to recover it. 00:35:17.330 [2024-11-02 11:47:17.622314] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.330 [2024-11-02 11:47:17.622443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.330 [2024-11-02 11:47:17.622467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.330 [2024-11-02 11:47:17.622482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.330 [2024-11-02 11:47:17.622495] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.330 [2024-11-02 11:47:17.622523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.330 qpair failed and we were unable to recover it. 
00:35:17.330 [2024-11-02 11:47:17.632241] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.330 [2024-11-02 11:47:17.632368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.330 [2024-11-02 11:47:17.632394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.330 [2024-11-02 11:47:17.632408] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.330 [2024-11-02 11:47:17.632421] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.330 [2024-11-02 11:47:17.632449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.330 qpair failed and we were unable to recover it. 00:35:17.330 [2024-11-02 11:47:17.642241] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.330 [2024-11-02 11:47:17.642368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.330 [2024-11-02 11:47:17.642393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.330 [2024-11-02 11:47:17.642413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.330 [2024-11-02 11:47:17.642426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.330 [2024-11-02 11:47:17.642454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.330 qpair failed and we were unable to recover it. 00:35:17.330 [2024-11-02 11:47:17.652310] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.330 [2024-11-02 11:47:17.652434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.330 [2024-11-02 11:47:17.652460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.330 [2024-11-02 11:47:17.652478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.330 [2024-11-02 11:47:17.652491] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.330 [2024-11-02 11:47:17.652520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.330 qpair failed and we were unable to recover it. 
00:35:17.330 [2024-11-02 11:47:17.662265] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.330 [2024-11-02 11:47:17.662383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.330 [2024-11-02 11:47:17.662409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.330 [2024-11-02 11:47:17.662423] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.330 [2024-11-02 11:47:17.662436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.330 [2024-11-02 11:47:17.662464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.330 qpair failed and we were unable to recover it. 00:35:17.330 [2024-11-02 11:47:17.672322] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.330 [2024-11-02 11:47:17.672465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.330 [2024-11-02 11:47:17.672490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.330 [2024-11-02 11:47:17.672504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.330 [2024-11-02 11:47:17.672517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.330 [2024-11-02 11:47:17.672545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.330 qpair failed and we were unable to recover it. 00:35:17.330 [2024-11-02 11:47:17.682321] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.330 [2024-11-02 11:47:17.682447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.330 [2024-11-02 11:47:17.682473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.330 [2024-11-02 11:47:17.682487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.330 [2024-11-02 11:47:17.682503] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.330 [2024-11-02 11:47:17.682538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.330 qpair failed and we were unable to recover it. 
00:35:17.330 [2024-11-02 11:47:17.692395] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.330 [2024-11-02 11:47:17.692557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.330 [2024-11-02 11:47:17.692583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.330 [2024-11-02 11:47:17.692597] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.330 [2024-11-02 11:47:17.692610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.330 [2024-11-02 11:47:17.692638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.330 qpair failed and we were unable to recover it. 00:35:17.330 [2024-11-02 11:47:17.702408] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.330 [2024-11-02 11:47:17.702534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.330 [2024-11-02 11:47:17.702560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.330 [2024-11-02 11:47:17.702575] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.330 [2024-11-02 11:47:17.702588] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.330 [2024-11-02 11:47:17.702616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.330 qpair failed and we were unable to recover it. 00:35:17.330 [2024-11-02 11:47:17.712395] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.330 [2024-11-02 11:47:17.712518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.330 [2024-11-02 11:47:17.712544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.330 [2024-11-02 11:47:17.712558] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.330 [2024-11-02 11:47:17.712572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.330 [2024-11-02 11:47:17.712602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.330 qpair failed and we were unable to recover it. 
00:35:17.330 [2024-11-02 11:47:17.722431] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.330 [2024-11-02 11:47:17.722552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.330 [2024-11-02 11:47:17.722578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.330 [2024-11-02 11:47:17.722591] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.330 [2024-11-02 11:47:17.722604] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.331 [2024-11-02 11:47:17.722632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.331 qpair failed and we were unable to recover it. 00:35:17.591 [2024-11-02 11:47:17.732474] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.591 [2024-11-02 11:47:17.732614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.591 [2024-11-02 11:47:17.732641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.591 [2024-11-02 11:47:17.732656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.591 [2024-11-02 11:47:17.732670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.591 [2024-11-02 11:47:17.732698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.591 qpair failed and we were unable to recover it. 00:35:17.591 [2024-11-02 11:47:17.742490] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.591 [2024-11-02 11:47:17.742616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.591 [2024-11-02 11:47:17.742642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.591 [2024-11-02 11:47:17.742657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.591 [2024-11-02 11:47:17.742670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.591 [2024-11-02 11:47:17.742698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.591 qpair failed and we were unable to recover it. 
00:35:17.591 [2024-11-02 11:47:17.752541] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.591 [2024-11-02 11:47:17.752661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.591 [2024-11-02 11:47:17.752688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.591 [2024-11-02 11:47:17.752702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.591 [2024-11-02 11:47:17.752715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.591 [2024-11-02 11:47:17.752743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.591 qpair failed and we were unable to recover it. 00:35:17.591 [2024-11-02 11:47:17.762535] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.591 [2024-11-02 11:47:17.762657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.591 [2024-11-02 11:47:17.762684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.591 [2024-11-02 11:47:17.762698] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.591 [2024-11-02 11:47:17.762711] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.591 [2024-11-02 11:47:17.762739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.591 qpair failed and we were unable to recover it. 00:35:17.591 [2024-11-02 11:47:17.772623] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.591 [2024-11-02 11:47:17.772758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.591 [2024-11-02 11:47:17.772784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.591 [2024-11-02 11:47:17.772804] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.591 [2024-11-02 11:47:17.772818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.591 [2024-11-02 11:47:17.772846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.591 qpair failed and we were unable to recover it. 
00:35:17.591 [2024-11-02 11:47:17.782644] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.591 [2024-11-02 11:47:17.782795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.591 [2024-11-02 11:47:17.782821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.591 [2024-11-02 11:47:17.782835] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.591 [2024-11-02 11:47:17.782848] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.591 [2024-11-02 11:47:17.782876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.591 qpair failed and we were unable to recover it. 00:35:17.591 [2024-11-02 11:47:17.792699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.591 [2024-11-02 11:47:17.792861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.591 [2024-11-02 11:47:17.792887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.591 [2024-11-02 11:47:17.792901] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.591 [2024-11-02 11:47:17.792915] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.591 [2024-11-02 11:47:17.792943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.591 qpair failed and we were unable to recover it. 00:35:17.591 [2024-11-02 11:47:17.802690] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.591 [2024-11-02 11:47:17.802812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.591 [2024-11-02 11:47:17.802838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.591 [2024-11-02 11:47:17.802856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.591 [2024-11-02 11:47:17.802869] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.591 [2024-11-02 11:47:17.802897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.591 qpair failed and we were unable to recover it. 
00:35:17.591 [2024-11-02 11:47:17.812733] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.591 [2024-11-02 11:47:17.812865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.591 [2024-11-02 11:47:17.812891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.591 [2024-11-02 11:47:17.812905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.591 [2024-11-02 11:47:17.812918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.591 [2024-11-02 11:47:17.812952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.591 qpair failed and we were unable to recover it. 00:35:17.591 [2024-11-02 11:47:17.822766] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.591 [2024-11-02 11:47:17.822895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.591 [2024-11-02 11:47:17.822921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.591 [2024-11-02 11:47:17.822936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.591 [2024-11-02 11:47:17.822949] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.591 [2024-11-02 11:47:17.822977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.591 qpair failed and we were unable to recover it. 00:35:17.591 [2024-11-02 11:47:17.832805] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.591 [2024-11-02 11:47:17.832960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.591 [2024-11-02 11:47:17.832986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.591 [2024-11-02 11:47:17.833001] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.591 [2024-11-02 11:47:17.833014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.591 [2024-11-02 11:47:17.833041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.591 qpair failed and we were unable to recover it. 
00:35:17.591 [2024-11-02 11:47:17.842775] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.591 [2024-11-02 11:47:17.842933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.591 [2024-11-02 11:47:17.842958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.591 [2024-11-02 11:47:17.842973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.591 [2024-11-02 11:47:17.842987] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.591 [2024-11-02 11:47:17.843017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.591 qpair failed and we were unable to recover it. 00:35:17.591 [2024-11-02 11:47:17.852813] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.591 [2024-11-02 11:47:17.852936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.591 [2024-11-02 11:47:17.852961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.591 [2024-11-02 11:47:17.852976] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.591 [2024-11-02 11:47:17.852989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.592 [2024-11-02 11:47:17.853017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.592 qpair failed and we were unable to recover it. 00:35:17.592 [2024-11-02 11:47:17.862882] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.592 [2024-11-02 11:47:17.863037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.592 [2024-11-02 11:47:17.863062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.592 [2024-11-02 11:47:17.863077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.592 [2024-11-02 11:47:17.863090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.592 [2024-11-02 11:47:17.863118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.592 qpair failed and we were unable to recover it. 
00:35:17.592 [2024-11-02 11:47:17.872871] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.592 [2024-11-02 11:47:17.872989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.592 [2024-11-02 11:47:17.873022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.592 [2024-11-02 11:47:17.873036] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.592 [2024-11-02 11:47:17.873049] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.592 [2024-11-02 11:47:17.873080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.592 qpair failed and we were unable to recover it. 00:35:17.592 [2024-11-02 11:47:17.882895] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.592 [2024-11-02 11:47:17.883033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.592 [2024-11-02 11:47:17.883058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.592 [2024-11-02 11:47:17.883073] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.592 [2024-11-02 11:47:17.883086] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.592 [2024-11-02 11:47:17.883114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.592 qpair failed and we were unable to recover it. 00:35:17.592 [2024-11-02 11:47:17.892982] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.592 [2024-11-02 11:47:17.893110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.592 [2024-11-02 11:47:17.893135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.592 [2024-11-02 11:47:17.893149] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.592 [2024-11-02 11:47:17.893162] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.592 [2024-11-02 11:47:17.893190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.592 qpair failed and we were unable to recover it. 
00:35:17.592 [2024-11-02 11:47:17.902987] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.592 [2024-11-02 11:47:17.903119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.592 [2024-11-02 11:47:17.903144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.592 [2024-11-02 11:47:17.903164] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.592 [2024-11-02 11:47:17.903179] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.592 [2024-11-02 11:47:17.903207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.592 qpair failed and we were unable to recover it. 00:35:17.592 [2024-11-02 11:47:17.913062] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.592 [2024-11-02 11:47:17.913210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.592 [2024-11-02 11:47:17.913235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.592 [2024-11-02 11:47:17.913248] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.592 [2024-11-02 11:47:17.913269] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.592 [2024-11-02 11:47:17.913301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.592 qpair failed and we were unable to recover it. 00:35:17.592 [2024-11-02 11:47:17.923004] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.592 [2024-11-02 11:47:17.923127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.592 [2024-11-02 11:47:17.923152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.592 [2024-11-02 11:47:17.923165] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.592 [2024-11-02 11:47:17.923178] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.592 [2024-11-02 11:47:17.923206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.592 qpair failed and we were unable to recover it. 
00:35:17.592 [2024-11-02 11:47:17.933053] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.592 [2024-11-02 11:47:17.933175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.592 [2024-11-02 11:47:17.933200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.592 [2024-11-02 11:47:17.933215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.592 [2024-11-02 11:47:17.933228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.592 [2024-11-02 11:47:17.933263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.592 qpair failed and we were unable to recover it. 00:35:17.592 [2024-11-02 11:47:17.943084] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.592 [2024-11-02 11:47:17.943211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.592 [2024-11-02 11:47:17.943237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.592 [2024-11-02 11:47:17.943251] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.592 [2024-11-02 11:47:17.943275] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.592 [2024-11-02 11:47:17.943314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.592 qpair failed and we were unable to recover it. 00:35:17.592 [2024-11-02 11:47:17.953109] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.592 [2024-11-02 11:47:17.953295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.592 [2024-11-02 11:47:17.953321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.592 [2024-11-02 11:47:17.953336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.592 [2024-11-02 11:47:17.953349] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.592 [2024-11-02 11:47:17.953377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.592 qpair failed and we were unable to recover it. 
00:35:17.592 [2024-11-02 11:47:17.963148] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.592 [2024-11-02 11:47:17.963277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.592 [2024-11-02 11:47:17.963304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.592 [2024-11-02 11:47:17.963318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.592 [2024-11-02 11:47:17.963331] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.592 [2024-11-02 11:47:17.963359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.592 qpair failed and we were unable to recover it. 00:35:17.592 [2024-11-02 11:47:17.973154] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.592 [2024-11-02 11:47:17.973288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.592 [2024-11-02 11:47:17.973314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.592 [2024-11-02 11:47:17.973328] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.592 [2024-11-02 11:47:17.973341] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.592 [2024-11-02 11:47:17.973371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.592 qpair failed and we were unable to recover it. 00:35:17.592 [2024-11-02 11:47:17.983169] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.592 [2024-11-02 11:47:17.983297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.592 [2024-11-02 11:47:17.983322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.592 [2024-11-02 11:47:17.983337] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.592 [2024-11-02 11:47:17.983350] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.592 [2024-11-02 11:47:17.983378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.592 qpair failed and we were unable to recover it. 
00:35:17.852 [2024-11-02 11:47:17.993240] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.852 [2024-11-02 11:47:17.993370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.852 [2024-11-02 11:47:17.993397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.852 [2024-11-02 11:47:17.993412] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.852 [2024-11-02 11:47:17.993425] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.852 [2024-11-02 11:47:17.993454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.852 qpair failed and we were unable to recover it. 00:35:17.852 [2024-11-02 11:47:18.003266] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.852 [2024-11-02 11:47:18.003395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.852 [2024-11-02 11:47:18.003421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.852 [2024-11-02 11:47:18.003440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.852 [2024-11-02 11:47:18.003454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.852 [2024-11-02 11:47:18.003484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.852 qpair failed and we were unable to recover it. 00:35:17.852 [2024-11-02 11:47:18.013346] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.852 [2024-11-02 11:47:18.013498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.852 [2024-11-02 11:47:18.013524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.852 [2024-11-02 11:47:18.013538] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.852 [2024-11-02 11:47:18.013551] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.852 [2024-11-02 11:47:18.013580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.852 qpair failed and we were unable to recover it. 
00:35:17.852 [2024-11-02 11:47:18.023305] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.852 [2024-11-02 11:47:18.023427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.852 [2024-11-02 11:47:18.023453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.852 [2024-11-02 11:47:18.023467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.852 [2024-11-02 11:47:18.023480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.852 [2024-11-02 11:47:18.023509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.852 qpair failed and we were unable to recover it. 00:35:17.852 [2024-11-02 11:47:18.033351] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.852 [2024-11-02 11:47:18.033475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.852 [2024-11-02 11:47:18.033502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.852 [2024-11-02 11:47:18.033526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.852 [2024-11-02 11:47:18.033542] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.852 [2024-11-02 11:47:18.033571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.852 qpair failed and we were unable to recover it. 00:35:17.852 [2024-11-02 11:47:18.043379] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.852 [2024-11-02 11:47:18.043511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.852 [2024-11-02 11:47:18.043538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.852 [2024-11-02 11:47:18.043552] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.852 [2024-11-02 11:47:18.043565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.852 [2024-11-02 11:47:18.043593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.852 qpair failed and we were unable to recover it. 
00:35:17.852 [2024-11-02 11:47:18.053418] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.852 [2024-11-02 11:47:18.053542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.852 [2024-11-02 11:47:18.053567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.852 [2024-11-02 11:47:18.053581] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.852 [2024-11-02 11:47:18.053594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.852 [2024-11-02 11:47:18.053623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.852 qpair failed and we were unable to recover it. 00:35:17.852 [2024-11-02 11:47:18.063429] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.852 [2024-11-02 11:47:18.063551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.852 [2024-11-02 11:47:18.063576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.853 [2024-11-02 11:47:18.063590] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.853 [2024-11-02 11:47:18.063603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.853 [2024-11-02 11:47:18.063632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.853 qpair failed and we were unable to recover it. 00:35:17.853 [2024-11-02 11:47:18.073495] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.853 [2024-11-02 11:47:18.073615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.853 [2024-11-02 11:47:18.073641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.853 [2024-11-02 11:47:18.073656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.853 [2024-11-02 11:47:18.073669] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.853 [2024-11-02 11:47:18.073702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.853 qpair failed and we were unable to recover it. 
00:35:17.853 [2024-11-02 11:47:18.083521] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.853 [2024-11-02 11:47:18.083641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.853 [2024-11-02 11:47:18.083667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.853 [2024-11-02 11:47:18.083681] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.853 [2024-11-02 11:47:18.083694] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.853 [2024-11-02 11:47:18.083723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.853 qpair failed and we were unable to recover it. 00:35:17.853 [2024-11-02 11:47:18.093550] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.853 [2024-11-02 11:47:18.093680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.853 [2024-11-02 11:47:18.093706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.853 [2024-11-02 11:47:18.093727] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.853 [2024-11-02 11:47:18.093742] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.853 [2024-11-02 11:47:18.093771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.853 qpair failed and we were unable to recover it. 00:35:17.853 [2024-11-02 11:47:18.103548] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.853 [2024-11-02 11:47:18.103684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.853 [2024-11-02 11:47:18.103711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.853 [2024-11-02 11:47:18.103725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.853 [2024-11-02 11:47:18.103739] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.853 [2024-11-02 11:47:18.103766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.853 qpair failed and we were unable to recover it. 
00:35:17.853 [2024-11-02 11:47:18.113572] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.853 [2024-11-02 11:47:18.113695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.853 [2024-11-02 11:47:18.113720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.853 [2024-11-02 11:47:18.113735] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.853 [2024-11-02 11:47:18.113748] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.853 [2024-11-02 11:47:18.113777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.853 qpair failed and we were unable to recover it. 00:35:17.853 [2024-11-02 11:47:18.123632] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.853 [2024-11-02 11:47:18.123761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.853 [2024-11-02 11:47:18.123786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.853 [2024-11-02 11:47:18.123801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.853 [2024-11-02 11:47:18.123814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.853 [2024-11-02 11:47:18.123842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.853 qpair failed and we were unable to recover it. 00:35:17.853 [2024-11-02 11:47:18.133647] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.853 [2024-11-02 11:47:18.133808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.853 [2024-11-02 11:47:18.133833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.853 [2024-11-02 11:47:18.133847] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.853 [2024-11-02 11:47:18.133860] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.853 [2024-11-02 11:47:18.133888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.853 qpair failed and we were unable to recover it. 
00:35:17.853 [2024-11-02 11:47:18.143649] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.853 [2024-11-02 11:47:18.143764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.853 [2024-11-02 11:47:18.143789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.853 [2024-11-02 11:47:18.143803] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.853 [2024-11-02 11:47:18.143816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.853 [2024-11-02 11:47:18.143845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.853 qpair failed and we were unable to recover it. 00:35:17.853 [2024-11-02 11:47:18.153684] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.853 [2024-11-02 11:47:18.153806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.853 [2024-11-02 11:47:18.153832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.853 [2024-11-02 11:47:18.153846] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.853 [2024-11-02 11:47:18.153860] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.853 [2024-11-02 11:47:18.153888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.853 qpair failed and we were unable to recover it. 00:35:17.853 [2024-11-02 11:47:18.163716] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.853 [2024-11-02 11:47:18.163832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.853 [2024-11-02 11:47:18.163858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.853 [2024-11-02 11:47:18.163879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.853 [2024-11-02 11:47:18.163893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.853 [2024-11-02 11:47:18.163923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.853 qpair failed and we were unable to recover it. 
00:35:17.853 [2024-11-02 11:47:18.173760] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.853 [2024-11-02 11:47:18.173894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.853 [2024-11-02 11:47:18.173920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.853 [2024-11-02 11:47:18.173935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.853 [2024-11-02 11:47:18.173948] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.853 [2024-11-02 11:47:18.173977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.853 qpair failed and we were unable to recover it. 00:35:17.853 [2024-11-02 11:47:18.183811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.853 [2024-11-02 11:47:18.183931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.853 [2024-11-02 11:47:18.183956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.853 [2024-11-02 11:47:18.183971] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.853 [2024-11-02 11:47:18.183983] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.853 [2024-11-02 11:47:18.184011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.853 qpair failed and we were unable to recover it. 00:35:17.853 [2024-11-02 11:47:18.193809] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.853 [2024-11-02 11:47:18.193930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.854 [2024-11-02 11:47:18.193955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.854 [2024-11-02 11:47:18.193970] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.854 [2024-11-02 11:47:18.193983] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.854 [2024-11-02 11:47:18.194010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.854 qpair failed and we were unable to recover it. 
00:35:17.854 [2024-11-02 11:47:18.203872] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.854 [2024-11-02 11:47:18.203990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.854 [2024-11-02 11:47:18.204015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.854 [2024-11-02 11:47:18.204029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.854 [2024-11-02 11:47:18.204042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.854 [2024-11-02 11:47:18.204076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.854 qpair failed and we were unable to recover it. 00:35:17.854 [2024-11-02 11:47:18.213889] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.854 [2024-11-02 11:47:18.214026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.854 [2024-11-02 11:47:18.214053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.854 [2024-11-02 11:47:18.214074] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.854 [2024-11-02 11:47:18.214088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.854 [2024-11-02 11:47:18.214118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.854 qpair failed and we were unable to recover it. 00:35:17.854 [2024-11-02 11:47:18.223882] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.854 [2024-11-02 11:47:18.224009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.854 [2024-11-02 11:47:18.224035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.854 [2024-11-02 11:47:18.224049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.854 [2024-11-02 11:47:18.224061] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.854 [2024-11-02 11:47:18.224090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.854 qpair failed and we were unable to recover it. 
00:35:17.854 [2024-11-02 11:47:18.233923] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.854 [2024-11-02 11:47:18.234036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.854 [2024-11-02 11:47:18.234061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.854 [2024-11-02 11:47:18.234075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.854 [2024-11-02 11:47:18.234088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.854 [2024-11-02 11:47:18.234116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.854 qpair failed and we were unable to recover it. 00:35:17.854 [2024-11-02 11:47:18.243977] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.854 [2024-11-02 11:47:18.244094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.854 [2024-11-02 11:47:18.244120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.854 [2024-11-02 11:47:18.244134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.854 [2024-11-02 11:47:18.244147] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:17.854 [2024-11-02 11:47:18.244175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:17.854 qpair failed and we were unable to recover it. 00:35:18.113 [2024-11-02 11:47:18.254095] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.113 [2024-11-02 11:47:18.254275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.113 [2024-11-02 11:47:18.254302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.113 [2024-11-02 11:47:18.254317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.113 [2024-11-02 11:47:18.254330] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.113 [2024-11-02 11:47:18.254359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.113 qpair failed and we were unable to recover it. 
00:35:18.113 [2024-11-02 11:47:18.264096] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.113 [2024-11-02 11:47:18.264206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.113 [2024-11-02 11:47:18.264233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.113 [2024-11-02 11:47:18.264247] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.113 [2024-11-02 11:47:18.264267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.113 [2024-11-02 11:47:18.264297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.113 qpair failed and we were unable to recover it. 00:35:18.113 [2024-11-02 11:47:18.274047] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.114 [2024-11-02 11:47:18.274219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.114 [2024-11-02 11:47:18.274244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.114 [2024-11-02 11:47:18.274266] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.114 [2024-11-02 11:47:18.274281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.114 [2024-11-02 11:47:18.274310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.114 qpair failed and we were unable to recover it. 00:35:18.114 [2024-11-02 11:47:18.284090] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.114 [2024-11-02 11:47:18.284210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.114 [2024-11-02 11:47:18.284237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.114 [2024-11-02 11:47:18.284251] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.114 [2024-11-02 11:47:18.284273] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.114 [2024-11-02 11:47:18.284302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.114 qpair failed and we were unable to recover it. 
00:35:18.114 [2024-11-02 11:47:18.294130] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.114 [2024-11-02 11:47:18.294280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.114 [2024-11-02 11:47:18.294306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.114 [2024-11-02 11:47:18.294326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.114 [2024-11-02 11:47:18.294340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.114 [2024-11-02 11:47:18.294370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.114 qpair failed and we were unable to recover it. 00:35:18.114 [2024-11-02 11:47:18.304121] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.114 [2024-11-02 11:47:18.304238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.114 [2024-11-02 11:47:18.304270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.114 [2024-11-02 11:47:18.304285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.114 [2024-11-02 11:47:18.304299] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.114 [2024-11-02 11:47:18.304327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.114 qpair failed and we were unable to recover it. 00:35:18.114 [2024-11-02 11:47:18.314148] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.114 [2024-11-02 11:47:18.314272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.114 [2024-11-02 11:47:18.314298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.114 [2024-11-02 11:47:18.314312] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.114 [2024-11-02 11:47:18.314325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.114 [2024-11-02 11:47:18.314352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.114 qpair failed and we were unable to recover it. 
00:35:18.114 [2024-11-02 11:47:18.324199] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.114 [2024-11-02 11:47:18.324372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.114 [2024-11-02 11:47:18.324397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.114 [2024-11-02 11:47:18.324411] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.114 [2024-11-02 11:47:18.324425] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.114 [2024-11-02 11:47:18.324453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.114 qpair failed and we were unable to recover it. 00:35:18.114 [2024-11-02 11:47:18.334212] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.114 [2024-11-02 11:47:18.334389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.114 [2024-11-02 11:47:18.334414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.114 [2024-11-02 11:47:18.334428] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.114 [2024-11-02 11:47:18.334440] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.114 [2024-11-02 11:47:18.334474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.114 qpair failed and we were unable to recover it. 00:35:18.114 [2024-11-02 11:47:18.344254] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.114 [2024-11-02 11:47:18.344383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.114 [2024-11-02 11:47:18.344408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.114 [2024-11-02 11:47:18.344422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.114 [2024-11-02 11:47:18.344436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.114 [2024-11-02 11:47:18.344464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.114 qpair failed and we were unable to recover it. 
00:35:18.114 [2024-11-02 11:47:18.354267] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.114 [2024-11-02 11:47:18.354387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.114 [2024-11-02 11:47:18.354413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.114 [2024-11-02 11:47:18.354427] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.114 [2024-11-02 11:47:18.354439] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.114 [2024-11-02 11:47:18.354467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.114 qpair failed and we were unable to recover it. 00:35:18.114 [2024-11-02 11:47:18.364285] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.114 [2024-11-02 11:47:18.364404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.114 [2024-11-02 11:47:18.364430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.114 [2024-11-02 11:47:18.364445] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.114 [2024-11-02 11:47:18.364458] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.114 [2024-11-02 11:47:18.364486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.114 qpair failed and we were unable to recover it. 00:35:18.114 [2024-11-02 11:47:18.374374] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.114 [2024-11-02 11:47:18.374524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.114 [2024-11-02 11:47:18.374550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.114 [2024-11-02 11:47:18.374564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.114 [2024-11-02 11:47:18.374577] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.114 [2024-11-02 11:47:18.374605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.114 qpair failed and we were unable to recover it. 
00:35:18.114 [2024-11-02 11:47:18.384384] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.114 [2024-11-02 11:47:18.384543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.114 [2024-11-02 11:47:18.384569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.114 [2024-11-02 11:47:18.384583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.114 [2024-11-02 11:47:18.384596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.114 [2024-11-02 11:47:18.384624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.114 qpair failed and we were unable to recover it. 00:35:18.114 [2024-11-02 11:47:18.394422] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.114 [2024-11-02 11:47:18.394543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.114 [2024-11-02 11:47:18.394569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.114 [2024-11-02 11:47:18.394584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.114 [2024-11-02 11:47:18.394597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.114 [2024-11-02 11:47:18.394625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.114 qpair failed and we were unable to recover it. 00:35:18.114 [2024-11-02 11:47:18.404387] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.114 [2024-11-02 11:47:18.404509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.114 [2024-11-02 11:47:18.404534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.115 [2024-11-02 11:47:18.404548] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.115 [2024-11-02 11:47:18.404561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.115 [2024-11-02 11:47:18.404589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.115 qpair failed and we were unable to recover it. 
00:35:18.115 [2024-11-02 11:47:18.414439] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.115 [2024-11-02 11:47:18.414562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.115 [2024-11-02 11:47:18.414587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.115 [2024-11-02 11:47:18.414601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.115 [2024-11-02 11:47:18.414614] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.115 [2024-11-02 11:47:18.414641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.115 qpair failed and we were unable to recover it. 00:35:18.115 [2024-11-02 11:47:18.424485] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.115 [2024-11-02 11:47:18.424627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.115 [2024-11-02 11:47:18.424657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.115 [2024-11-02 11:47:18.424672] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.115 [2024-11-02 11:47:18.424684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.115 [2024-11-02 11:47:18.424712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.115 qpair failed and we were unable to recover it. 00:35:18.115 [2024-11-02 11:47:18.434494] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.115 [2024-11-02 11:47:18.434621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.115 [2024-11-02 11:47:18.434648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.115 [2024-11-02 11:47:18.434666] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.115 [2024-11-02 11:47:18.434679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.115 [2024-11-02 11:47:18.434708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.115 qpair failed and we were unable to recover it. 
00:35:18.115 [2024-11-02 11:47:18.444525] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.115 [2024-11-02 11:47:18.444646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.115 [2024-11-02 11:47:18.444675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.115 [2024-11-02 11:47:18.444689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.115 [2024-11-02 11:47:18.444703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.115 [2024-11-02 11:47:18.444731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.115 qpair failed and we were unable to recover it. 00:35:18.115 [2024-11-02 11:47:18.454577] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.115 [2024-11-02 11:47:18.454753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.115 [2024-11-02 11:47:18.454779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.115 [2024-11-02 11:47:18.454793] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.115 [2024-11-02 11:47:18.454807] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.115 [2024-11-02 11:47:18.454835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.115 qpair failed and we were unable to recover it. 00:35:18.115 [2024-11-02 11:47:18.464554] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.115 [2024-11-02 11:47:18.464672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.115 [2024-11-02 11:47:18.464697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.115 [2024-11-02 11:47:18.464711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.115 [2024-11-02 11:47:18.464723] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.115 [2024-11-02 11:47:18.464757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.115 qpair failed and we were unable to recover it. 
00:35:18.115 [2024-11-02 11:47:18.474616] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.115 [2024-11-02 11:47:18.474746] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.115 [2024-11-02 11:47:18.474771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.115 [2024-11-02 11:47:18.474784] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.115 [2024-11-02 11:47:18.474796] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.115 [2024-11-02 11:47:18.474824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.115 qpair failed and we were unable to recover it. 00:35:18.115 [2024-11-02 11:47:18.484632] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.115 [2024-11-02 11:47:18.484752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.115 [2024-11-02 11:47:18.484777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.115 [2024-11-02 11:47:18.484791] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.115 [2024-11-02 11:47:18.484805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.115 [2024-11-02 11:47:18.484833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.115 qpair failed and we were unable to recover it. 00:35:18.115 [2024-11-02 11:47:18.494697] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.115 [2024-11-02 11:47:18.494821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.115 [2024-11-02 11:47:18.494846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.115 [2024-11-02 11:47:18.494860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.115 [2024-11-02 11:47:18.494873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.115 [2024-11-02 11:47:18.494902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.115 qpair failed and we were unable to recover it. 
00:35:18.115 [2024-11-02 11:47:18.504684] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.115 [2024-11-02 11:47:18.504850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.115 [2024-11-02 11:47:18.504876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.115 [2024-11-02 11:47:18.504894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.115 [2024-11-02 11:47:18.504907] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.115 [2024-11-02 11:47:18.504935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.115 qpair failed and we were unable to recover it. 00:35:18.374 [2024-11-02 11:47:18.514696] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.374 [2024-11-02 11:47:18.514814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.374 [2024-11-02 11:47:18.514840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.374 [2024-11-02 11:47:18.514854] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.374 [2024-11-02 11:47:18.514868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.374 [2024-11-02 11:47:18.514897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.374 qpair failed and we were unable to recover it. 00:35:18.374 [2024-11-02 11:47:18.524758] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.374 [2024-11-02 11:47:18.524891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.374 [2024-11-02 11:47:18.524917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.374 [2024-11-02 11:47:18.524932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.374 [2024-11-02 11:47:18.524945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.374 [2024-11-02 11:47:18.524975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.374 qpair failed and we were unable to recover it. 
00:35:18.374 [2024-11-02 11:47:18.534810] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.374 [2024-11-02 11:47:18.534937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.374 [2024-11-02 11:47:18.534963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.374 [2024-11-02 11:47:18.534978] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.374 [2024-11-02 11:47:18.534991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.374 [2024-11-02 11:47:18.535020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.374 qpair failed and we were unable to recover it. 00:35:18.374 [2024-11-02 11:47:18.544819] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.374 [2024-11-02 11:47:18.544941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.374 [2024-11-02 11:47:18.544966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.374 [2024-11-02 11:47:18.544981] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.374 [2024-11-02 11:47:18.544993] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.374 [2024-11-02 11:47:18.545022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.375 qpair failed and we were unable to recover it. 00:35:18.375 [2024-11-02 11:47:18.554838] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.375 [2024-11-02 11:47:18.554979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.375 [2024-11-02 11:47:18.555010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.375 [2024-11-02 11:47:18.555026] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.375 [2024-11-02 11:47:18.555039] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.375 [2024-11-02 11:47:18.555068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.375 qpair failed and we were unable to recover it. 
00:35:18.375 [2024-11-02 11:47:18.564940] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.375 [2024-11-02 11:47:18.565090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.375 [2024-11-02 11:47:18.565116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.375 [2024-11-02 11:47:18.565131] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.375 [2024-11-02 11:47:18.565143] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.375 [2024-11-02 11:47:18.565171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.375 qpair failed and we were unable to recover it. 00:35:18.375 [2024-11-02 11:47:18.574901] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.375 [2024-11-02 11:47:18.575029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.375 [2024-11-02 11:47:18.575054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.375 [2024-11-02 11:47:18.575068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.375 [2024-11-02 11:47:18.575081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.375 [2024-11-02 11:47:18.575109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.375 qpair failed and we were unable to recover it. 00:35:18.375 [2024-11-02 11:47:18.584896] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.375 [2024-11-02 11:47:18.585019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.375 [2024-11-02 11:47:18.585044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.375 [2024-11-02 11:47:18.585059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.375 [2024-11-02 11:47:18.585072] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.375 [2024-11-02 11:47:18.585100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.375 qpair failed and we were unable to recover it. 
00:35:18.375 [2024-11-02 11:47:18.595015] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.375 [2024-11-02 11:47:18.595133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.375 [2024-11-02 11:47:18.595158] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.375 [2024-11-02 11:47:18.595173] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.375 [2024-11-02 11:47:18.595187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.375 [2024-11-02 11:47:18.595221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.375 qpair failed and we were unable to recover it. 00:35:18.375 [2024-11-02 11:47:18.604945] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.375 [2024-11-02 11:47:18.605057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.375 [2024-11-02 11:47:18.605082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.375 [2024-11-02 11:47:18.605096] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.375 [2024-11-02 11:47:18.605110] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.375 [2024-11-02 11:47:18.605138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.375 qpair failed and we were unable to recover it. 00:35:18.375 [2024-11-02 11:47:18.615052] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.375 [2024-11-02 11:47:18.615178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.375 [2024-11-02 11:47:18.615203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.375 [2024-11-02 11:47:18.615217] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.375 [2024-11-02 11:47:18.615231] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.375 [2024-11-02 11:47:18.615269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.375 qpair failed and we were unable to recover it. 
00:35:18.375 [2024-11-02 11:47:18.625048] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.375 [2024-11-02 11:47:18.625220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.375 [2024-11-02 11:47:18.625247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.375 [2024-11-02 11:47:18.625272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.375 [2024-11-02 11:47:18.625287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.375 [2024-11-02 11:47:18.625315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.375 qpair failed and we were unable to recover it. 00:35:18.375 [2024-11-02 11:47:18.635042] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.375 [2024-11-02 11:47:18.635166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.375 [2024-11-02 11:47:18.635192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.375 [2024-11-02 11:47:18.635206] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.375 [2024-11-02 11:47:18.635219] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.375 [2024-11-02 11:47:18.635247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.375 qpair failed and we were unable to recover it. 00:35:18.375 [2024-11-02 11:47:18.645071] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.375 [2024-11-02 11:47:18.645203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.375 [2024-11-02 11:47:18.645229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.375 [2024-11-02 11:47:18.645243] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.375 [2024-11-02 11:47:18.645263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.375 [2024-11-02 11:47:18.645294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.375 qpair failed and we were unable to recover it. 
00:35:18.375 [2024-11-02 11:47:18.655144] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.375 [2024-11-02 11:47:18.655322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.375 [2024-11-02 11:47:18.655347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.375 [2024-11-02 11:47:18.655362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.375 [2024-11-02 11:47:18.655375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.375 [2024-11-02 11:47:18.655403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.375 qpair failed and we were unable to recover it. 00:35:18.375 [2024-11-02 11:47:18.665115] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.375 [2024-11-02 11:47:18.665233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.375 [2024-11-02 11:47:18.665265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.375 [2024-11-02 11:47:18.665281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.375 [2024-11-02 11:47:18.665294] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.375 [2024-11-02 11:47:18.665322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.375 qpair failed and we were unable to recover it. 00:35:18.375 [2024-11-02 11:47:18.675158] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.375 [2024-11-02 11:47:18.675283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.375 [2024-11-02 11:47:18.675310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.375 [2024-11-02 11:47:18.675325] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.375 [2024-11-02 11:47:18.675338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.375 [2024-11-02 11:47:18.675367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.375 qpair failed and we were unable to recover it. 
00:35:18.376 [2024-11-02 11:47:18.685176] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.376 [2024-11-02 11:47:18.685314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.376 [2024-11-02 11:47:18.685345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.376 [2024-11-02 11:47:18.685360] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.376 [2024-11-02 11:47:18.685373] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.376 [2024-11-02 11:47:18.685403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.376 qpair failed and we were unable to recover it. 00:35:18.376 [2024-11-02 11:47:18.695210] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.376 [2024-11-02 11:47:18.695344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.376 [2024-11-02 11:47:18.695370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.376 [2024-11-02 11:47:18.695384] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.376 [2024-11-02 11:47:18.695397] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.376 [2024-11-02 11:47:18.695425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.376 qpair failed and we were unable to recover it. 00:35:18.376 [2024-11-02 11:47:18.705304] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.376 [2024-11-02 11:47:18.705442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.376 [2024-11-02 11:47:18.705467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.376 [2024-11-02 11:47:18.705481] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.376 [2024-11-02 11:47:18.705495] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.376 [2024-11-02 11:47:18.705522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.376 qpair failed and we were unable to recover it. 
00:35:18.376 [2024-11-02 11:47:18.715265] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.376 [2024-11-02 11:47:18.715402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.376 [2024-11-02 11:47:18.715427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.376 [2024-11-02 11:47:18.715441] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.376 [2024-11-02 11:47:18.715454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.376 [2024-11-02 11:47:18.715482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.376 qpair failed and we were unable to recover it. 00:35:18.376 [2024-11-02 11:47:18.725311] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.376 [2024-11-02 11:47:18.725450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.376 [2024-11-02 11:47:18.725477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.376 [2024-11-02 11:47:18.725496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.376 [2024-11-02 11:47:18.725511] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.376 [2024-11-02 11:47:18.725551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.376 qpair failed and we were unable to recover it. 00:35:18.376 [2024-11-02 11:47:18.735368] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.376 [2024-11-02 11:47:18.735493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.376 [2024-11-02 11:47:18.735519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.376 [2024-11-02 11:47:18.735533] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.376 [2024-11-02 11:47:18.735547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.376 [2024-11-02 11:47:18.735576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.376 qpair failed and we were unable to recover it. 
00:35:18.376 [2024-11-02 11:47:18.745360] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.376 [2024-11-02 11:47:18.745481] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.376 [2024-11-02 11:47:18.745506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.376 [2024-11-02 11:47:18.745520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.376 [2024-11-02 11:47:18.745533] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.376 [2024-11-02 11:47:18.745562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.376 qpair failed and we were unable to recover it. 00:35:18.376 [2024-11-02 11:47:18.755368] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.376 [2024-11-02 11:47:18.755485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.376 [2024-11-02 11:47:18.755511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.376 [2024-11-02 11:47:18.755525] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.376 [2024-11-02 11:47:18.755538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.376 [2024-11-02 11:47:18.755567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.376 qpair failed and we were unable to recover it. 00:35:18.376 [2024-11-02 11:47:18.765408] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.376 [2024-11-02 11:47:18.765527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.376 [2024-11-02 11:47:18.765552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.376 [2024-11-02 11:47:18.765566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.376 [2024-11-02 11:47:18.765579] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.376 [2024-11-02 11:47:18.765607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.376 qpair failed and we were unable to recover it. 
00:35:18.376 [2024-11-02 11:47:18.775490] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.636 [2024-11-02 11:47:18.775656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.636 [2024-11-02 11:47:18.775683] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.636 [2024-11-02 11:47:18.775698] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.636 [2024-11-02 11:47:18.775712] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.636 [2024-11-02 11:47:18.775740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.636 qpair failed and we were unable to recover it. 00:35:18.636 [2024-11-02 11:47:18.785496] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.636 [2024-11-02 11:47:18.785629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.636 [2024-11-02 11:47:18.785656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.636 [2024-11-02 11:47:18.785671] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.636 [2024-11-02 11:47:18.785684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.636 [2024-11-02 11:47:18.785712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.636 qpair failed and we were unable to recover it. 00:35:18.636 [2024-11-02 11:47:18.795492] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.636 [2024-11-02 11:47:18.795606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.636 [2024-11-02 11:47:18.795632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.636 [2024-11-02 11:47:18.795646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.636 [2024-11-02 11:47:18.795659] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.636 [2024-11-02 11:47:18.795688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.636 qpair failed and we were unable to recover it. 
00:35:18.636 [2024-11-02 11:47:18.805517] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.636 [2024-11-02 11:47:18.805638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.636 [2024-11-02 11:47:18.805663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.636 [2024-11-02 11:47:18.805678] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.636 [2024-11-02 11:47:18.805691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.636 [2024-11-02 11:47:18.805719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.636 qpair failed and we were unable to recover it. 00:35:18.636 [2024-11-02 11:47:18.815657] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.636 [2024-11-02 11:47:18.815792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.636 [2024-11-02 11:47:18.815823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.636 [2024-11-02 11:47:18.815838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.636 [2024-11-02 11:47:18.815851] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.636 [2024-11-02 11:47:18.815879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.636 qpair failed and we were unable to recover it. 00:35:18.636 [2024-11-02 11:47:18.825608] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.636 [2024-11-02 11:47:18.825725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.636 [2024-11-02 11:47:18.825750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.636 [2024-11-02 11:47:18.825764] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.636 [2024-11-02 11:47:18.825777] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.636 [2024-11-02 11:47:18.825808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.636 qpair failed and we were unable to recover it. 
00:35:18.636 [2024-11-02 11:47:18.835590] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.636 [2024-11-02 11:47:18.835703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.636 [2024-11-02 11:47:18.835728] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.636 [2024-11-02 11:47:18.835742] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.636 [2024-11-02 11:47:18.835755] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.636 [2024-11-02 11:47:18.835784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.636 qpair failed and we were unable to recover it. 00:35:18.636 [2024-11-02 11:47:18.845624] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.636 [2024-11-02 11:47:18.845743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.636 [2024-11-02 11:47:18.845770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.636 [2024-11-02 11:47:18.845784] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.636 [2024-11-02 11:47:18.845797] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.636 [2024-11-02 11:47:18.845824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.636 qpair failed and we were unable to recover it. 00:35:18.636 [2024-11-02 11:47:18.855673] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.636 [2024-11-02 11:47:18.855807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.636 [2024-11-02 11:47:18.855833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.636 [2024-11-02 11:47:18.855851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.636 [2024-11-02 11:47:18.855864] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.636 [2024-11-02 11:47:18.855899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.636 qpair failed and we were unable to recover it. 
00:35:18.636 [2024-11-02 11:47:18.865713] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.636 [2024-11-02 11:47:18.865831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.636 [2024-11-02 11:47:18.865857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.636 [2024-11-02 11:47:18.865872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.636 [2024-11-02 11:47:18.865885] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.636 [2024-11-02 11:47:18.865913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.636 qpair failed and we were unable to recover it. 00:35:18.636 [2024-11-02 11:47:18.875724] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.636 [2024-11-02 11:47:18.875844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.636 [2024-11-02 11:47:18.875870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.636 [2024-11-02 11:47:18.875884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.636 [2024-11-02 11:47:18.875897] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.636 [2024-11-02 11:47:18.875927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.636 qpair failed and we were unable to recover it. 00:35:18.636 [2024-11-02 11:47:18.885726] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.636 [2024-11-02 11:47:18.885881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.637 [2024-11-02 11:47:18.885908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.637 [2024-11-02 11:47:18.885922] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.637 [2024-11-02 11:47:18.885935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.637 [2024-11-02 11:47:18.885962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.637 qpair failed and we were unable to recover it. 
00:35:18.637 [2024-11-02 11:47:18.895792] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.637 [2024-11-02 11:47:18.895920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.637 [2024-11-02 11:47:18.895945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.637 [2024-11-02 11:47:18.895960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.637 [2024-11-02 11:47:18.895973] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.637 [2024-11-02 11:47:18.896001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.637 qpair failed and we were unable to recover it. 00:35:18.637 [2024-11-02 11:47:18.905794] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.637 [2024-11-02 11:47:18.905913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.637 [2024-11-02 11:47:18.905939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.637 [2024-11-02 11:47:18.905953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.637 [2024-11-02 11:47:18.905966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.637 [2024-11-02 11:47:18.905993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.637 qpair failed and we were unable to recover it. 00:35:18.637 [2024-11-02 11:47:18.915851] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.637 [2024-11-02 11:47:18.915987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.637 [2024-11-02 11:47:18.916013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.637 [2024-11-02 11:47:18.916027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.637 [2024-11-02 11:47:18.916039] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.637 [2024-11-02 11:47:18.916068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.637 qpair failed and we were unable to recover it. 
00:35:18.637 [2024-11-02 11:47:18.925890] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.637 [2024-11-02 11:47:18.926019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.637 [2024-11-02 11:47:18.926044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.637 [2024-11-02 11:47:18.926058] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.637 [2024-11-02 11:47:18.926071] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.637 [2024-11-02 11:47:18.926100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.637 qpair failed and we were unable to recover it. 00:35:18.637 [2024-11-02 11:47:18.935906] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.637 [2024-11-02 11:47:18.936029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.637 [2024-11-02 11:47:18.936054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.637 [2024-11-02 11:47:18.936068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.637 [2024-11-02 11:47:18.936081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.637 [2024-11-02 11:47:18.936109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.637 qpair failed and we were unable to recover it. 00:35:18.637 [2024-11-02 11:47:18.945925] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.637 [2024-11-02 11:47:18.946056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.637 [2024-11-02 11:47:18.946087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.637 [2024-11-02 11:47:18.946102] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.637 [2024-11-02 11:47:18.946115] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.637 [2024-11-02 11:47:18.946143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.637 qpair failed and we were unable to recover it. 
00:35:18.637 [2024-11-02 11:47:18.955923] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.637 [2024-11-02 11:47:18.956037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.637 [2024-11-02 11:47:18.956063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.637 [2024-11-02 11:47:18.956077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.637 [2024-11-02 11:47:18.956090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.637 [2024-11-02 11:47:18.956118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.637 qpair failed and we were unable to recover it. 00:35:18.637 [2024-11-02 11:47:18.966047] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.637 [2024-11-02 11:47:18.966160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.637 [2024-11-02 11:47:18.966185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.637 [2024-11-02 11:47:18.966200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.637 [2024-11-02 11:47:18.966212] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.637 [2024-11-02 11:47:18.966241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.637 qpair failed and we were unable to recover it. 00:35:18.637 [2024-11-02 11:47:18.976002] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.637 [2024-11-02 11:47:18.976134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.637 [2024-11-02 11:47:18.976159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.637 [2024-11-02 11:47:18.976173] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.637 [2024-11-02 11:47:18.976187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.637 [2024-11-02 11:47:18.976215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.637 qpair failed and we were unable to recover it. 
00:35:18.637 [2024-11-02 11:47:18.986026] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.637 [2024-11-02 11:47:18.986155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.637 [2024-11-02 11:47:18.986181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.637 [2024-11-02 11:47:18.986196] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.637 [2024-11-02 11:47:18.986217] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.637 [2024-11-02 11:47:18.986246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.637 qpair failed and we were unable to recover it. 00:35:18.637 [2024-11-02 11:47:18.996165] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.637 [2024-11-02 11:47:18.996315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.637 [2024-11-02 11:47:18.996341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.637 [2024-11-02 11:47:18.996356] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.637 [2024-11-02 11:47:18.996369] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.637 [2024-11-02 11:47:18.996397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.637 qpair failed and we were unable to recover it. 00:35:18.637 [2024-11-02 11:47:19.006110] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.637 [2024-11-02 11:47:19.006230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.637 [2024-11-02 11:47:19.006262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.637 [2024-11-02 11:47:19.006278] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.637 [2024-11-02 11:47:19.006292] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.637 [2024-11-02 11:47:19.006321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.637 qpair failed and we were unable to recover it. 
00:35:18.637 [2024-11-02 11:47:19.016094] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.637 [2024-11-02 11:47:19.016214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.637 [2024-11-02 11:47:19.016240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.637 [2024-11-02 11:47:19.016254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.638 [2024-11-02 11:47:19.016274] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.638 [2024-11-02 11:47:19.016306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.638 qpair failed and we were unable to recover it. 00:35:18.638 [2024-11-02 11:47:19.026112] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.638 [2024-11-02 11:47:19.026228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.638 [2024-11-02 11:47:19.026253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.638 [2024-11-02 11:47:19.026275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.638 [2024-11-02 11:47:19.026289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.638 [2024-11-02 11:47:19.026317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.638 qpair failed and we were unable to recover it. 00:35:18.638 [2024-11-02 11:47:19.036206] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.638 [2024-11-02 11:47:19.036378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.638 [2024-11-02 11:47:19.036405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.638 [2024-11-02 11:47:19.036420] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.638 [2024-11-02 11:47:19.036433] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.899 [2024-11-02 11:47:19.036462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.899 qpair failed and we were unable to recover it. 
00:35:18.899 [2024-11-02 11:47:19.046191] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.899 [2024-11-02 11:47:19.046326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.899 [2024-11-02 11:47:19.046353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.899 [2024-11-02 11:47:19.046367] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.899 [2024-11-02 11:47:19.046380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.899 [2024-11-02 11:47:19.046409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.899 qpair failed and we were unable to recover it. 00:35:18.899 [2024-11-02 11:47:19.056267] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.899 [2024-11-02 11:47:19.056409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.899 [2024-11-02 11:47:19.056434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.899 [2024-11-02 11:47:19.056449] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.899 [2024-11-02 11:47:19.056462] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.899 [2024-11-02 11:47:19.056491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.899 qpair failed and we were unable to recover it. 00:35:18.899 [2024-11-02 11:47:19.066272] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.899 [2024-11-02 11:47:19.066393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.899 [2024-11-02 11:47:19.066418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.899 [2024-11-02 11:47:19.066432] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.899 [2024-11-02 11:47:19.066446] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.899 [2024-11-02 11:47:19.066474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.899 qpair failed and we were unable to recover it. 
00:35:18.899 [2024-11-02 11:47:19.076253] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.899 [2024-11-02 11:47:19.076381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.899 [2024-11-02 11:47:19.076411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.899 [2024-11-02 11:47:19.076426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.899 [2024-11-02 11:47:19.076440] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.899 [2024-11-02 11:47:19.076468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.899 qpair failed and we were unable to recover it. 00:35:18.899 [2024-11-02 11:47:19.086300] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.899 [2024-11-02 11:47:19.086460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.899 [2024-11-02 11:47:19.086486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.899 [2024-11-02 11:47:19.086501] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.899 [2024-11-02 11:47:19.086514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.899 [2024-11-02 11:47:19.086543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.899 qpair failed and we were unable to recover it. 00:35:18.899 [2024-11-02 11:47:19.096344] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.899 [2024-11-02 11:47:19.096513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.899 [2024-11-02 11:47:19.096538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.899 [2024-11-02 11:47:19.096552] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.899 [2024-11-02 11:47:19.096565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.899 [2024-11-02 11:47:19.096593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.899 qpair failed and we were unable to recover it. 
00:35:18.899 [2024-11-02 11:47:19.106356] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.899 [2024-11-02 11:47:19.106483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.899 [2024-11-02 11:47:19.106509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.900 [2024-11-02 11:47:19.106523] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.900 [2024-11-02 11:47:19.106536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.900 [2024-11-02 11:47:19.106564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.900 qpair failed and we were unable to recover it. 00:35:18.900 [2024-11-02 11:47:19.116413] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.900 [2024-11-02 11:47:19.116580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.900 [2024-11-02 11:47:19.116605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.900 [2024-11-02 11:47:19.116619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.900 [2024-11-02 11:47:19.116637] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.900 [2024-11-02 11:47:19.116666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.900 qpair failed and we were unable to recover it. 00:35:18.900 [2024-11-02 11:47:19.126419] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.900 [2024-11-02 11:47:19.126540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.900 [2024-11-02 11:47:19.126565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.900 [2024-11-02 11:47:19.126580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.900 [2024-11-02 11:47:19.126593] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.900 [2024-11-02 11:47:19.126621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.900 qpair failed and we were unable to recover it. 
00:35:18.900 [2024-11-02 11:47:19.136448] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.900 [2024-11-02 11:47:19.136574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.900 [2024-11-02 11:47:19.136600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.900 [2024-11-02 11:47:19.136614] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.900 [2024-11-02 11:47:19.136627] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.900 [2024-11-02 11:47:19.136655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.900 qpair failed and we were unable to recover it. 00:35:18.900 [2024-11-02 11:47:19.146449] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.900 [2024-11-02 11:47:19.146570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.900 [2024-11-02 11:47:19.146595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.900 [2024-11-02 11:47:19.146610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.900 [2024-11-02 11:47:19.146622] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.900 [2024-11-02 11:47:19.146650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.900 qpair failed and we were unable to recover it. 00:35:18.900 [2024-11-02 11:47:19.156558] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.900 [2024-11-02 11:47:19.156715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.900 [2024-11-02 11:47:19.156740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.900 [2024-11-02 11:47:19.156754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.900 [2024-11-02 11:47:19.156767] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.900 [2024-11-02 11:47:19.156796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.900 qpair failed and we were unable to recover it. 
00:35:18.900 [2024-11-02 11:47:19.166542] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.900 [2024-11-02 11:47:19.166657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.900 [2024-11-02 11:47:19.166683] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.900 [2024-11-02 11:47:19.166697] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.900 [2024-11-02 11:47:19.166710] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.900 [2024-11-02 11:47:19.166737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.900 qpair failed and we were unable to recover it. 00:35:18.900 [2024-11-02 11:47:19.176568] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.900 [2024-11-02 11:47:19.176696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.900 [2024-11-02 11:47:19.176722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.900 [2024-11-02 11:47:19.176736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.900 [2024-11-02 11:47:19.176749] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.900 [2024-11-02 11:47:19.176778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.900 qpair failed and we were unable to recover it. 00:35:18.900 [2024-11-02 11:47:19.186559] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.900 [2024-11-02 11:47:19.186680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.900 [2024-11-02 11:47:19.186706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.900 [2024-11-02 11:47:19.186720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.900 [2024-11-02 11:47:19.186732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.900 [2024-11-02 11:47:19.186763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.900 qpair failed and we were unable to recover it. 
00:35:18.900 [2024-11-02 11:47:19.196587] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.900 [2024-11-02 11:47:19.196704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.900 [2024-11-02 11:47:19.196729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.900 [2024-11-02 11:47:19.196743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.900 [2024-11-02 11:47:19.196756] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.900 [2024-11-02 11:47:19.196783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.900 qpair failed and we were unable to recover it. 00:35:18.900 [2024-11-02 11:47:19.206702] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.900 [2024-11-02 11:47:19.206846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.900 [2024-11-02 11:47:19.206878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.900 [2024-11-02 11:47:19.206893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.900 [2024-11-02 11:47:19.206906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.900 [2024-11-02 11:47:19.206934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.900 qpair failed and we were unable to recover it. 00:35:18.900 [2024-11-02 11:47:19.216721] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.900 [2024-11-02 11:47:19.216874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.900 [2024-11-02 11:47:19.216899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.900 [2024-11-02 11:47:19.216913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.900 [2024-11-02 11:47:19.216926] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.900 [2024-11-02 11:47:19.216954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.900 qpair failed and we were unable to recover it. 
00:35:18.900 [2024-11-02 11:47:19.226658] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.900 [2024-11-02 11:47:19.226783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.900 [2024-11-02 11:47:19.226808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.900 [2024-11-02 11:47:19.226822] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.900 [2024-11-02 11:47:19.226835] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.900 [2024-11-02 11:47:19.226863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.900 qpair failed and we were unable to recover it. 00:35:18.900 [2024-11-02 11:47:19.236722] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.900 [2024-11-02 11:47:19.236843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.900 [2024-11-02 11:47:19.236869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.900 [2024-11-02 11:47:19.236883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.900 [2024-11-02 11:47:19.236899] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.901 [2024-11-02 11:47:19.236929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.901 qpair failed and we were unable to recover it. 00:35:18.901 [2024-11-02 11:47:19.246768] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.901 [2024-11-02 11:47:19.246889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.901 [2024-11-02 11:47:19.246915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.901 [2024-11-02 11:47:19.246929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.901 [2024-11-02 11:47:19.246947] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.901 [2024-11-02 11:47:19.246977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.901 qpair failed and we were unable to recover it. 
00:35:18.901 [2024-11-02 11:47:19.256761] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.901 [2024-11-02 11:47:19.256899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.901 [2024-11-02 11:47:19.256925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.901 [2024-11-02 11:47:19.256939] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.901 [2024-11-02 11:47:19.256952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.901 [2024-11-02 11:47:19.256983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.901 qpair failed and we were unable to recover it. 00:35:18.901 [2024-11-02 11:47:19.266809] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.901 [2024-11-02 11:47:19.266926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.901 [2024-11-02 11:47:19.266951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.901 [2024-11-02 11:47:19.266966] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.901 [2024-11-02 11:47:19.266979] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.901 [2024-11-02 11:47:19.267006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.901 qpair failed and we were unable to recover it. 00:35:18.901 [2024-11-02 11:47:19.276808] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.901 [2024-11-02 11:47:19.276939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.901 [2024-11-02 11:47:19.276965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.901 [2024-11-02 11:47:19.276979] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.901 [2024-11-02 11:47:19.276992] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.901 [2024-11-02 11:47:19.277020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.901 qpair failed and we were unable to recover it. 
00:35:18.901 [2024-11-02 11:47:19.286825] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.901 [2024-11-02 11:47:19.286938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.901 [2024-11-02 11:47:19.286963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.901 [2024-11-02 11:47:19.286978] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.901 [2024-11-02 11:47:19.286990] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.901 [2024-11-02 11:47:19.287021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.901 qpair failed and we were unable to recover it. 00:35:18.901 [2024-11-02 11:47:19.296937] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.901 [2024-11-02 11:47:19.297108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.901 [2024-11-02 11:47:19.297134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.901 [2024-11-02 11:47:19.297148] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.901 [2024-11-02 11:47:19.297161] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:18.901 [2024-11-02 11:47:19.297189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:18.901 qpair failed and we were unable to recover it. 00:35:19.163 [2024-11-02 11:47:19.306986] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.163 [2024-11-02 11:47:19.307143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.163 [2024-11-02 11:47:19.307170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.163 [2024-11-02 11:47:19.307185] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.163 [2024-11-02 11:47:19.307198] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.163 [2024-11-02 11:47:19.307226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.163 qpair failed and we were unable to recover it. 
00:35:19.163 [2024-11-02 11:47:19.316929] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.163 [2024-11-02 11:47:19.317048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.163 [2024-11-02 11:47:19.317075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.163 [2024-11-02 11:47:19.317089] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.163 [2024-11-02 11:47:19.317102] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.163 [2024-11-02 11:47:19.317130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.163 qpair failed and we were unable to recover it. 00:35:19.163 [2024-11-02 11:47:19.327084] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.163 [2024-11-02 11:47:19.327220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.163 [2024-11-02 11:47:19.327245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.163 [2024-11-02 11:47:19.327269] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.163 [2024-11-02 11:47:19.327284] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.163 [2024-11-02 11:47:19.327313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.163 qpair failed and we were unable to recover it. 00:35:19.163 [2024-11-02 11:47:19.337024] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.163 [2024-11-02 11:47:19.337157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.163 [2024-11-02 11:47:19.337188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.163 [2024-11-02 11:47:19.337203] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.163 [2024-11-02 11:47:19.337216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.163 [2024-11-02 11:47:19.337244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.163 qpair failed and we were unable to recover it. 
00:35:19.163 [2024-11-02 11:47:19.347017] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.163 [2024-11-02 11:47:19.347139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.163 [2024-11-02 11:47:19.347165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.163 [2024-11-02 11:47:19.347179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.163 [2024-11-02 11:47:19.347191] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.163 [2024-11-02 11:47:19.347219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.163 qpair failed and we were unable to recover it. 00:35:19.163 [2024-11-02 11:47:19.357098] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.163 [2024-11-02 11:47:19.357275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.163 [2024-11-02 11:47:19.357302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.163 [2024-11-02 11:47:19.357316] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.163 [2024-11-02 11:47:19.357328] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.163 [2024-11-02 11:47:19.357356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.163 qpair failed and we were unable to recover it. 00:35:19.163 [2024-11-02 11:47:19.367081] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.163 [2024-11-02 11:47:19.367218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.163 [2024-11-02 11:47:19.367244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.163 [2024-11-02 11:47:19.367267] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.163 [2024-11-02 11:47:19.367284] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.163 [2024-11-02 11:47:19.367312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.163 qpair failed and we were unable to recover it. 
00:35:19.163 [2024-11-02 11:47:19.377158] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.163 [2024-11-02 11:47:19.377312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.163 [2024-11-02 11:47:19.377337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.163 [2024-11-02 11:47:19.377352] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.163 [2024-11-02 11:47:19.377375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.163 [2024-11-02 11:47:19.377405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.163 qpair failed and we were unable to recover it. 00:35:19.163 [2024-11-02 11:47:19.387133] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.163 [2024-11-02 11:47:19.387268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.163 [2024-11-02 11:47:19.387294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.163 [2024-11-02 11:47:19.387308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.163 [2024-11-02 11:47:19.387322] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.163 [2024-11-02 11:47:19.387352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.163 qpair failed and we were unable to recover it. 00:35:19.163 [2024-11-02 11:47:19.397192] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.163 [2024-11-02 11:47:19.397322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.163 [2024-11-02 11:47:19.397348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.163 [2024-11-02 11:47:19.397362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.163 [2024-11-02 11:47:19.397375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.163 [2024-11-02 11:47:19.397404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.163 qpair failed and we were unable to recover it. 
00:35:19.163 [2024-11-02 11:47:19.407172] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.163 [2024-11-02 11:47:19.407293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.164 [2024-11-02 11:47:19.407318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.164 [2024-11-02 11:47:19.407332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.164 [2024-11-02 11:47:19.407347] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.164 [2024-11-02 11:47:19.407375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.164 qpair failed and we were unable to recover it. 00:35:19.164 [2024-11-02 11:47:19.417316] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.164 [2024-11-02 11:47:19.417438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.164 [2024-11-02 11:47:19.417464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.164 [2024-11-02 11:47:19.417478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.164 [2024-11-02 11:47:19.417491] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.164 [2024-11-02 11:47:19.417519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.164 qpair failed and we were unable to recover it. 00:35:19.164 [2024-11-02 11:47:19.427246] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.164 [2024-11-02 11:47:19.427376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.164 [2024-11-02 11:47:19.427402] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.164 [2024-11-02 11:47:19.427418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.164 [2024-11-02 11:47:19.427431] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.164 [2024-11-02 11:47:19.427459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.164 qpair failed and we were unable to recover it. 
00:35:19.164 [2024-11-02 11:47:19.437297] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.164 [2024-11-02 11:47:19.437424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.164 [2024-11-02 11:47:19.437449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.164 [2024-11-02 11:47:19.437462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.164 [2024-11-02 11:47:19.437474] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.164 [2024-11-02 11:47:19.437502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.164 qpair failed and we were unable to recover it. 00:35:19.164 [2024-11-02 11:47:19.447363] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.164 [2024-11-02 11:47:19.447487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.164 [2024-11-02 11:47:19.447513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.164 [2024-11-02 11:47:19.447527] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.164 [2024-11-02 11:47:19.447540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.164 [2024-11-02 11:47:19.447568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.164 qpair failed and we were unable to recover it. 00:35:19.164 [2024-11-02 11:47:19.457356] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.164 [2024-11-02 11:47:19.457479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.164 [2024-11-02 11:47:19.457504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.164 [2024-11-02 11:47:19.457518] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.164 [2024-11-02 11:47:19.457531] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.164 [2024-11-02 11:47:19.457559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.164 qpair failed and we were unable to recover it. 
00:35:19.164 [2024-11-02 11:47:19.467425] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.164 [2024-11-02 11:47:19.467593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.164 [2024-11-02 11:47:19.467624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.164 [2024-11-02 11:47:19.467639] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.164 [2024-11-02 11:47:19.467652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.164 [2024-11-02 11:47:19.467680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.164 qpair failed and we were unable to recover it. 00:35:19.164 [2024-11-02 11:47:19.477428] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.164 [2024-11-02 11:47:19.477554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.164 [2024-11-02 11:47:19.477578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.164 [2024-11-02 11:47:19.477591] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.164 [2024-11-02 11:47:19.477604] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.164 [2024-11-02 11:47:19.477631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.164 qpair failed and we were unable to recover it. 00:35:19.164 [2024-11-02 11:47:19.487485] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.164 [2024-11-02 11:47:19.487608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.164 [2024-11-02 11:47:19.487635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.164 [2024-11-02 11:47:19.487656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.164 [2024-11-02 11:47:19.487671] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.164 [2024-11-02 11:47:19.487700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.164 qpair failed and we were unable to recover it. 
00:35:19.164 [2024-11-02 11:47:19.497484] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.164 [2024-11-02 11:47:19.497618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.164 [2024-11-02 11:47:19.497644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.164 [2024-11-02 11:47:19.497659] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.164 [2024-11-02 11:47:19.497671] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.164 [2024-11-02 11:47:19.497700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.164 qpair failed and we were unable to recover it. 00:35:19.164 [2024-11-02 11:47:19.507525] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.164 [2024-11-02 11:47:19.507649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.164 [2024-11-02 11:47:19.507675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.164 [2024-11-02 11:47:19.507689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.164 [2024-11-02 11:47:19.507707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.164 [2024-11-02 11:47:19.507736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.164 qpair failed and we were unable to recover it. 00:35:19.164 [2024-11-02 11:47:19.517515] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.164 [2024-11-02 11:47:19.517636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.164 [2024-11-02 11:47:19.517662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.164 [2024-11-02 11:47:19.517676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.164 [2024-11-02 11:47:19.517690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.164 [2024-11-02 11:47:19.517718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.164 qpair failed and we were unable to recover it. 
00:35:19.164 [2024-11-02 11:47:19.527535] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.164 [2024-11-02 11:47:19.527646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.164 [2024-11-02 11:47:19.527672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.164 [2024-11-02 11:47:19.527686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.164 [2024-11-02 11:47:19.527699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.164 [2024-11-02 11:47:19.527727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.164 qpair failed and we were unable to recover it. 00:35:19.164 [2024-11-02 11:47:19.537570] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.164 [2024-11-02 11:47:19.537695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.164 [2024-11-02 11:47:19.537721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.164 [2024-11-02 11:47:19.537735] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.165 [2024-11-02 11:47:19.537748] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.165 [2024-11-02 11:47:19.537775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.165 qpair failed and we were unable to recover it. 00:35:19.165 [2024-11-02 11:47:19.547615] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.165 [2024-11-02 11:47:19.547742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.165 [2024-11-02 11:47:19.547767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.165 [2024-11-02 11:47:19.547781] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.165 [2024-11-02 11:47:19.547798] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.165 [2024-11-02 11:47:19.547827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.165 qpair failed and we were unable to recover it. 
00:35:19.165 [2024-11-02 11:47:19.557656] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.165 [2024-11-02 11:47:19.557790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.165 [2024-11-02 11:47:19.557820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.165 [2024-11-02 11:47:19.557837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.165 [2024-11-02 11:47:19.557850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.165 [2024-11-02 11:47:19.557882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.165 qpair failed and we were unable to recover it. 00:35:19.425 [2024-11-02 11:47:19.567710] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.425 [2024-11-02 11:47:19.567834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.425 [2024-11-02 11:47:19.567860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.425 [2024-11-02 11:47:19.567875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.425 [2024-11-02 11:47:19.567889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.425 [2024-11-02 11:47:19.567917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.425 qpair failed and we were unable to recover it. 00:35:19.425 [2024-11-02 11:47:19.577686] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.425 [2024-11-02 11:47:19.577805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.425 [2024-11-02 11:47:19.577831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.425 [2024-11-02 11:47:19.577846] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.425 [2024-11-02 11:47:19.577859] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.425 [2024-11-02 11:47:19.577888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.425 qpair failed and we were unable to recover it. 
00:35:19.425 [2024-11-02 11:47:19.587763] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.425 [2024-11-02 11:47:19.587885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.425 [2024-11-02 11:47:19.587911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.425 [2024-11-02 11:47:19.587926] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.425 [2024-11-02 11:47:19.587938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.425 [2024-11-02 11:47:19.587966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.425 qpair failed and we were unable to recover it. 00:35:19.425 [2024-11-02 11:47:19.597721] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.425 [2024-11-02 11:47:19.597837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.425 [2024-11-02 11:47:19.597870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.425 [2024-11-02 11:47:19.597884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.425 [2024-11-02 11:47:19.597897] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.425 [2024-11-02 11:47:19.597925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.425 qpair failed and we were unable to recover it. 00:35:19.425 [2024-11-02 11:47:19.607779] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.425 [2024-11-02 11:47:19.607909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.425 [2024-11-02 11:47:19.607935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.425 [2024-11-02 11:47:19.607949] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.425 [2024-11-02 11:47:19.607962] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.425 [2024-11-02 11:47:19.607990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.425 qpair failed and we were unable to recover it. 
00:35:19.425 [2024-11-02 11:47:19.617804] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.425 [2024-11-02 11:47:19.617922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.425 [2024-11-02 11:47:19.617947] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.425 [2024-11-02 11:47:19.617961] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.425 [2024-11-02 11:47:19.617973] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.425 [2024-11-02 11:47:19.618001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.425 qpair failed and we were unable to recover it. 00:35:19.425 [2024-11-02 11:47:19.627872] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.425 [2024-11-02 11:47:19.627996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.425 [2024-11-02 11:47:19.628022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.425 [2024-11-02 11:47:19.628036] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.425 [2024-11-02 11:47:19.628049] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.425 [2024-11-02 11:47:19.628077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.425 qpair failed and we were unable to recover it. 00:35:19.425 [2024-11-02 11:47:19.637971] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.425 [2024-11-02 11:47:19.638099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.425 [2024-11-02 11:47:19.638124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.425 [2024-11-02 11:47:19.638138] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.425 [2024-11-02 11:47:19.638157] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.425 [2024-11-02 11:47:19.638185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.425 qpair failed and we were unable to recover it. 
00:35:19.425 [2024-11-02 11:47:19.647941] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.425 [2024-11-02 11:47:19.648056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.425 [2024-11-02 11:47:19.648082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.425 [2024-11-02 11:47:19.648096] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.425 [2024-11-02 11:47:19.648108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.425 [2024-11-02 11:47:19.648137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.425 qpair failed and we were unable to recover it. 00:35:19.425 [2024-11-02 11:47:19.657966] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.425 [2024-11-02 11:47:19.658089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.425 [2024-11-02 11:47:19.658115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.425 [2024-11-02 11:47:19.658130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.425 [2024-11-02 11:47:19.658143] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.425 [2024-11-02 11:47:19.658170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.425 qpair failed and we were unable to recover it. 00:35:19.425 [2024-11-02 11:47:19.668025] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.425 [2024-11-02 11:47:19.668150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.425 [2024-11-02 11:47:19.668177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.425 [2024-11-02 11:47:19.668197] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.425 [2024-11-02 11:47:19.668213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.425 [2024-11-02 11:47:19.668244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.425 qpair failed and we were unable to recover it. 
00:35:19.425 [2024-11-02 11:47:19.677963] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.425 [2024-11-02 11:47:19.678103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.425 [2024-11-02 11:47:19.678129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.425 [2024-11-02 11:47:19.678143] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.425 [2024-11-02 11:47:19.678156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.425 [2024-11-02 11:47:19.678185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.425 qpair failed and we were unable to recover it. 00:35:19.425 [2024-11-02 11:47:19.687994] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.425 [2024-11-02 11:47:19.688109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.425 [2024-11-02 11:47:19.688135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.426 [2024-11-02 11:47:19.688151] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.426 [2024-11-02 11:47:19.688167] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.426 [2024-11-02 11:47:19.688196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.426 qpair failed and we were unable to recover it. 00:35:19.426 [2024-11-02 11:47:19.698035] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.426 [2024-11-02 11:47:19.698157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.426 [2024-11-02 11:47:19.698183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.426 [2024-11-02 11:47:19.698197] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.426 [2024-11-02 11:47:19.698210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.426 [2024-11-02 11:47:19.698238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.426 qpair failed and we were unable to recover it. 
00:35:19.426 [2024-11-02 11:47:19.708087] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.426 [2024-11-02 11:47:19.708225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.426 [2024-11-02 11:47:19.708250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.426 [2024-11-02 11:47:19.708273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.426 [2024-11-02 11:47:19.708286] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.426 [2024-11-02 11:47:19.708316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.426 qpair failed and we were unable to recover it. 00:35:19.426 [2024-11-02 11:47:19.718121] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.426 [2024-11-02 11:47:19.718246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.426 [2024-11-02 11:47:19.718280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.426 [2024-11-02 11:47:19.718295] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.426 [2024-11-02 11:47:19.718308] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.426 [2024-11-02 11:47:19.718336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.426 qpair failed and we were unable to recover it. 00:35:19.426 [2024-11-02 11:47:19.728122] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.426 [2024-11-02 11:47:19.728239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.426 [2024-11-02 11:47:19.728277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.426 [2024-11-02 11:47:19.728298] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.426 [2024-11-02 11:47:19.728311] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.426 [2024-11-02 11:47:19.728340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.426 qpair failed and we were unable to recover it. 
00:35:19.426 [2024-11-02 11:47:19.738178] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.426 [2024-11-02 11:47:19.738314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.426 [2024-11-02 11:47:19.738344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.426 [2024-11-02 11:47:19.738360] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.426 [2024-11-02 11:47:19.738374] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.426 [2024-11-02 11:47:19.738403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.426 qpair failed and we were unable to recover it. 00:35:19.426 [2024-11-02 11:47:19.748176] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.426 [2024-11-02 11:47:19.748320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.426 [2024-11-02 11:47:19.748347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.426 [2024-11-02 11:47:19.748361] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.426 [2024-11-02 11:47:19.748375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.426 [2024-11-02 11:47:19.748405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.426 qpair failed and we were unable to recover it. 00:35:19.426 [2024-11-02 11:47:19.758219] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.426 [2024-11-02 11:47:19.758349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.426 [2024-11-02 11:47:19.758375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.426 [2024-11-02 11:47:19.758390] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.426 [2024-11-02 11:47:19.758404] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.426 [2024-11-02 11:47:19.758432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.426 qpair failed and we were unable to recover it. 
00:35:19.426 [2024-11-02 11:47:19.768248] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.426 [2024-11-02 11:47:19.768421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.426 [2024-11-02 11:47:19.768447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.426 [2024-11-02 11:47:19.768461] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.426 [2024-11-02 11:47:19.768479] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.426 [2024-11-02 11:47:19.768508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.426 qpair failed and we were unable to recover it. 00:35:19.426 [2024-11-02 11:47:19.778297] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.426 [2024-11-02 11:47:19.778427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.426 [2024-11-02 11:47:19.778452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.426 [2024-11-02 11:47:19.778467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.426 [2024-11-02 11:47:19.778480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.426 [2024-11-02 11:47:19.778508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.426 qpair failed and we were unable to recover it. 00:35:19.426 [2024-11-02 11:47:19.788334] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.426 [2024-11-02 11:47:19.788496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.426 [2024-11-02 11:47:19.788524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.426 [2024-11-02 11:47:19.788538] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.426 [2024-11-02 11:47:19.788551] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.426 [2024-11-02 11:47:19.788579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.426 qpair failed and we were unable to recover it. 
00:35:19.426 [2024-11-02 11:47:19.798325] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.426 [2024-11-02 11:47:19.798454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.426 [2024-11-02 11:47:19.798479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.426 [2024-11-02 11:47:19.798494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.426 [2024-11-02 11:47:19.798507] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.426 [2024-11-02 11:47:19.798535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.426 qpair failed and we were unable to recover it. 00:35:19.426 [2024-11-02 11:47:19.808368] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.426 [2024-11-02 11:47:19.808510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.426 [2024-11-02 11:47:19.808536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.426 [2024-11-02 11:47:19.808550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.426 [2024-11-02 11:47:19.808563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.426 [2024-11-02 11:47:19.808592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.426 qpair failed and we were unable to recover it. 00:35:19.426 [2024-11-02 11:47:19.818467] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.426 [2024-11-02 11:47:19.818604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.426 [2024-11-02 11:47:19.818630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.426 [2024-11-02 11:47:19.818644] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.426 [2024-11-02 11:47:19.818657] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.427 [2024-11-02 11:47:19.818685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.427 qpair failed and we were unable to recover it. 
00:35:19.686 [2024-11-02 11:47:19.828422] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.686 [2024-11-02 11:47:19.828554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.686 [2024-11-02 11:47:19.828581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.687 [2024-11-02 11:47:19.828595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.687 [2024-11-02 11:47:19.828608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.687 [2024-11-02 11:47:19.828639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.687 qpair failed and we were unable to recover it. 00:35:19.687 [2024-11-02 11:47:19.838472] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.687 [2024-11-02 11:47:19.838588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.687 [2024-11-02 11:47:19.838614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.687 [2024-11-02 11:47:19.838628] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.687 [2024-11-02 11:47:19.838641] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.687 [2024-11-02 11:47:19.838669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.687 qpair failed and we were unable to recover it. 00:35:19.687 [2024-11-02 11:47:19.848475] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.687 [2024-11-02 11:47:19.848612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.687 [2024-11-02 11:47:19.848638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.687 [2024-11-02 11:47:19.848652] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.687 [2024-11-02 11:47:19.848665] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.687 [2024-11-02 11:47:19.848695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.687 qpair failed and we were unable to recover it. 
00:35:19.687 [2024-11-02 11:47:19.858551] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.687 [2024-11-02 11:47:19.858678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.687 [2024-11-02 11:47:19.858711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.687 [2024-11-02 11:47:19.858726] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.687 [2024-11-02 11:47:19.858739] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.687 [2024-11-02 11:47:19.858767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.687 qpair failed and we were unable to recover it. 00:35:19.687 [2024-11-02 11:47:19.868649] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.687 [2024-11-02 11:47:19.868771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.687 [2024-11-02 11:47:19.868797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.687 [2024-11-02 11:47:19.868811] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.687 [2024-11-02 11:47:19.868824] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.687 [2024-11-02 11:47:19.868852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.687 qpair failed and we were unable to recover it. 00:35:19.687 [2024-11-02 11:47:19.878558] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.687 [2024-11-02 11:47:19.878676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.687 [2024-11-02 11:47:19.878702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.687 [2024-11-02 11:47:19.878716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.687 [2024-11-02 11:47:19.878728] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.687 [2024-11-02 11:47:19.878757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.687 qpair failed and we were unable to recover it. 
00:35:19.687 [2024-11-02 11:47:19.888691] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.687 [2024-11-02 11:47:19.888812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.687 [2024-11-02 11:47:19.888838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.687 [2024-11-02 11:47:19.888856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.687 [2024-11-02 11:47:19.888870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.687 [2024-11-02 11:47:19.888898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.687 qpair failed and we were unable to recover it. 00:35:19.687 [2024-11-02 11:47:19.898654] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.687 [2024-11-02 11:47:19.898774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.687 [2024-11-02 11:47:19.898800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.687 [2024-11-02 11:47:19.898814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.687 [2024-11-02 11:47:19.898833] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.687 [2024-11-02 11:47:19.898862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.687 qpair failed and we were unable to recover it. 00:35:19.687 [2024-11-02 11:47:19.908645] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.687 [2024-11-02 11:47:19.908762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.687 [2024-11-02 11:47:19.908788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.687 [2024-11-02 11:47:19.908802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.687 [2024-11-02 11:47:19.908815] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.687 [2024-11-02 11:47:19.908843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.687 qpair failed and we were unable to recover it. 
00:35:19.687 [2024-11-02 11:47:19.918676] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.687 [2024-11-02 11:47:19.918816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.687 [2024-11-02 11:47:19.918841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.687 [2024-11-02 11:47:19.918855] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.687 [2024-11-02 11:47:19.918868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.687 [2024-11-02 11:47:19.918896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.687 qpair failed and we were unable to recover it. 00:35:19.687 [2024-11-02 11:47:19.928692] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.687 [2024-11-02 11:47:19.928811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.687 [2024-11-02 11:47:19.928837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.687 [2024-11-02 11:47:19.928851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.687 [2024-11-02 11:47:19.928864] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.687 [2024-11-02 11:47:19.928892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.687 qpair failed and we were unable to recover it. 00:35:19.687 [2024-11-02 11:47:19.938751] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.687 [2024-11-02 11:47:19.938872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.687 [2024-11-02 11:47:19.938898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.687 [2024-11-02 11:47:19.938912] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.687 [2024-11-02 11:47:19.938925] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.687 [2024-11-02 11:47:19.938954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.687 qpair failed and we were unable to recover it. 
00:35:19.687 [2024-11-02 11:47:19.948789] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.687 [2024-11-02 11:47:19.948907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.687 [2024-11-02 11:47:19.948933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.687 [2024-11-02 11:47:19.948947] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.687 [2024-11-02 11:47:19.948960] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.687 [2024-11-02 11:47:19.948988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.687 qpair failed and we were unable to recover it. 00:35:19.687 [2024-11-02 11:47:19.958816] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.687 [2024-11-02 11:47:19.958935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.687 [2024-11-02 11:47:19.958961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.688 [2024-11-02 11:47:19.958975] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.688 [2024-11-02 11:47:19.958988] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.688 [2024-11-02 11:47:19.959016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.688 qpair failed and we were unable to recover it. 00:35:19.688 [2024-11-02 11:47:19.968848] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.688 [2024-11-02 11:47:19.969014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.688 [2024-11-02 11:47:19.969041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.688 [2024-11-02 11:47:19.969056] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.688 [2024-11-02 11:47:19.969073] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.688 [2024-11-02 11:47:19.969102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.688 qpair failed and we were unable to recover it. 
00:35:19.688 [2024-11-02 11:47:19.978907] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.688 [2024-11-02 11:47:19.979040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.688 [2024-11-02 11:47:19.979066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.688 [2024-11-02 11:47:19.979080] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.688 [2024-11-02 11:47:19.979093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.688 [2024-11-02 11:47:19.979120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.688 qpair failed and we were unable to recover it. 00:35:19.688 [2024-11-02 11:47:19.988883] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.688 [2024-11-02 11:47:19.989003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.688 [2024-11-02 11:47:19.989034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.688 [2024-11-02 11:47:19.989048] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.688 [2024-11-02 11:47:19.989061] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.688 [2024-11-02 11:47:19.989090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.688 qpair failed and we were unable to recover it. 00:35:19.688 [2024-11-02 11:47:19.998914] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.688 [2024-11-02 11:47:19.999028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.688 [2024-11-02 11:47:19.999054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.688 [2024-11-02 11:47:19.999067] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.688 [2024-11-02 11:47:19.999080] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.688 [2024-11-02 11:47:19.999108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.688 qpair failed and we were unable to recover it. 
00:35:19.688 [2024-11-02 11:47:20.009045] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.688 [2024-11-02 11:47:20.009204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.688 [2024-11-02 11:47:20.009232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.688 [2024-11-02 11:47:20.009246] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.688 [2024-11-02 11:47:20.009268] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.688 [2024-11-02 11:47:20.009311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.688 qpair failed and we were unable to recover it. 00:35:19.688 [2024-11-02 11:47:20.019122] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.688 [2024-11-02 11:47:20.019298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.688 [2024-11-02 11:47:20.019329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.688 [2024-11-02 11:47:20.019345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.688 [2024-11-02 11:47:20.019358] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.688 [2024-11-02 11:47:20.019390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.688 qpair failed and we were unable to recover it. 00:35:19.688 [2024-11-02 11:47:20.029051] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.688 [2024-11-02 11:47:20.029186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.688 [2024-11-02 11:47:20.029213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.688 [2024-11-02 11:47:20.029227] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.688 [2024-11-02 11:47:20.029253] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.688 [2024-11-02 11:47:20.029297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.688 qpair failed and we were unable to recover it. 
00:35:19.688 [2024-11-02 11:47:20.039119] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.688 [2024-11-02 11:47:20.039245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.688 [2024-11-02 11:47:20.039283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.688 [2024-11-02 11:47:20.039298] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.688 [2024-11-02 11:47:20.039312] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.688 [2024-11-02 11:47:20.039342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.688 qpair failed and we were unable to recover it. 00:35:19.688 [2024-11-02 11:47:20.049120] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.688 [2024-11-02 11:47:20.049244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.688 [2024-11-02 11:47:20.049279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.688 [2024-11-02 11:47:20.049294] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.688 [2024-11-02 11:47:20.049308] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.688 [2024-11-02 11:47:20.049336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.688 qpair failed and we were unable to recover it. 00:35:19.688 [2024-11-02 11:47:20.059111] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.688 [2024-11-02 11:47:20.059245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.688 [2024-11-02 11:47:20.059279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.688 [2024-11-02 11:47:20.059294] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.688 [2024-11-02 11:47:20.059307] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.688 [2024-11-02 11:47:20.059336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.688 qpair failed and we were unable to recover it. 
00:35:19.688 [2024-11-02 11:47:20.069121] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.688 [2024-11-02 11:47:20.069253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.688 [2024-11-02 11:47:20.069303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.688 [2024-11-02 11:47:20.069320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.688 [2024-11-02 11:47:20.069333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.688 [2024-11-02 11:47:20.069363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.688 qpair failed and we were unable to recover it. 00:35:19.688 [2024-11-02 11:47:20.079175] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.688 [2024-11-02 11:47:20.079309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.688 [2024-11-02 11:47:20.079338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.688 [2024-11-02 11:47:20.079353] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.688 [2024-11-02 11:47:20.079366] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.688 [2024-11-02 11:47:20.079396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.688 qpair failed and we were unable to recover it. 00:35:19.948 [2024-11-02 11:47:20.089189] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.948 [2024-11-02 11:47:20.089324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.948 [2024-11-02 11:47:20.089352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.948 [2024-11-02 11:47:20.089367] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.948 [2024-11-02 11:47:20.089380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.948 [2024-11-02 11:47:20.089410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.948 qpair failed and we were unable to recover it. 
00:35:19.948 [2024-11-02 11:47:20.099220] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.948 [2024-11-02 11:47:20.099387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.948 [2024-11-02 11:47:20.099415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.948 [2024-11-02 11:47:20.099429] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.948 [2024-11-02 11:47:20.099442] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.948 [2024-11-02 11:47:20.099472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.948 qpair failed and we were unable to recover it. 00:35:19.948 [2024-11-02 11:47:20.109304] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.948 [2024-11-02 11:47:20.109448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.948 [2024-11-02 11:47:20.109474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.948 [2024-11-02 11:47:20.109489] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.948 [2024-11-02 11:47:20.109502] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.948 [2024-11-02 11:47:20.109532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.948 qpair failed and we were unable to recover it. 00:35:19.948 [2024-11-02 11:47:20.119275] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.948 [2024-11-02 11:47:20.119413] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.948 [2024-11-02 11:47:20.119444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.948 [2024-11-02 11:47:20.119460] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.948 [2024-11-02 11:47:20.119473] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.948 [2024-11-02 11:47:20.119502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.948 qpair failed and we were unable to recover it. 
00:35:19.948 [2024-11-02 11:47:20.129387] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.948 [2024-11-02 11:47:20.129511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.948 [2024-11-02 11:47:20.129537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.948 [2024-11-02 11:47:20.129551] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.948 [2024-11-02 11:47:20.129564] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.948 [2024-11-02 11:47:20.129592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.948 qpair failed and we were unable to recover it. 00:35:19.948 [2024-11-02 11:47:20.139349] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.948 [2024-11-02 11:47:20.139476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.948 [2024-11-02 11:47:20.139501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.948 [2024-11-02 11:47:20.139516] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.948 [2024-11-02 11:47:20.139527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.948 [2024-11-02 11:47:20.139555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.948 qpair failed and we were unable to recover it. 00:35:19.948 [2024-11-02 11:47:20.149382] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.948 [2024-11-02 11:47:20.149512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.948 [2024-11-02 11:47:20.149537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.948 [2024-11-02 11:47:20.149551] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.948 [2024-11-02 11:47:20.149564] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.948 [2024-11-02 11:47:20.149593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.948 qpair failed and we were unable to recover it. 
00:35:19.948 [2024-11-02 11:47:20.159403] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.948 [2024-11-02 11:47:20.159528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.948 [2024-11-02 11:47:20.159554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.949 [2024-11-02 11:47:20.159569] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.949 [2024-11-02 11:47:20.159587] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.949 [2024-11-02 11:47:20.159616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.949 qpair failed and we were unable to recover it. 00:35:19.949 [2024-11-02 11:47:20.169399] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.949 [2024-11-02 11:47:20.169522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.949 [2024-11-02 11:47:20.169548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.949 [2024-11-02 11:47:20.169562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.949 [2024-11-02 11:47:20.169574] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.949 [2024-11-02 11:47:20.169603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.949 qpair failed and we were unable to recover it. 00:35:19.949 [2024-11-02 11:47:20.179480] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.949 [2024-11-02 11:47:20.179601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.949 [2024-11-02 11:47:20.179627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.949 [2024-11-02 11:47:20.179641] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.949 [2024-11-02 11:47:20.179654] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.949 [2024-11-02 11:47:20.179682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.949 qpair failed and we were unable to recover it. 
00:35:19.949 [2024-11-02 11:47:20.189478] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.949 [2024-11-02 11:47:20.189596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.949 [2024-11-02 11:47:20.189621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.949 [2024-11-02 11:47:20.189635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.949 [2024-11-02 11:47:20.189649] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.949 [2024-11-02 11:47:20.189677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.949 qpair failed and we were unable to recover it. 00:35:19.949 [2024-11-02 11:47:20.199507] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.949 [2024-11-02 11:47:20.199634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.949 [2024-11-02 11:47:20.199660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.949 [2024-11-02 11:47:20.199674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.949 [2024-11-02 11:47:20.199686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.949 [2024-11-02 11:47:20.199716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.949 qpair failed and we were unable to recover it. 00:35:19.949 [2024-11-02 11:47:20.209552] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.949 [2024-11-02 11:47:20.209665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.949 [2024-11-02 11:47:20.209691] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.949 [2024-11-02 11:47:20.209705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.949 [2024-11-02 11:47:20.209718] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.949 [2024-11-02 11:47:20.209746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.949 qpair failed and we were unable to recover it. 
00:35:19.949 [2024-11-02 11:47:20.219601] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.949 [2024-11-02 11:47:20.219783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.949 [2024-11-02 11:47:20.219809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.949 [2024-11-02 11:47:20.219823] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.949 [2024-11-02 11:47:20.219836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.949 [2024-11-02 11:47:20.219865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.949 qpair failed and we were unable to recover it. 00:35:19.949 [2024-11-02 11:47:20.229596] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.949 [2024-11-02 11:47:20.229730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.949 [2024-11-02 11:47:20.229756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.949 [2024-11-02 11:47:20.229770] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.949 [2024-11-02 11:47:20.229783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.949 [2024-11-02 11:47:20.229810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.949 qpair failed and we were unable to recover it. 00:35:19.949 [2024-11-02 11:47:20.239636] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.949 [2024-11-02 11:47:20.239760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.949 [2024-11-02 11:47:20.239786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.949 [2024-11-02 11:47:20.239800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.949 [2024-11-02 11:47:20.239813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.949 [2024-11-02 11:47:20.239841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.949 qpair failed and we were unable to recover it. 
00:35:19.949 [2024-11-02 11:47:20.249625] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.949 [2024-11-02 11:47:20.249739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.949 [2024-11-02 11:47:20.249770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.949 [2024-11-02 11:47:20.249785] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.949 [2024-11-02 11:47:20.249798] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.949 [2024-11-02 11:47:20.249826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.949 qpair failed and we were unable to recover it. 00:35:19.949 [2024-11-02 11:47:20.259676] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.949 [2024-11-02 11:47:20.259804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.949 [2024-11-02 11:47:20.259829] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.949 [2024-11-02 11:47:20.259843] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.949 [2024-11-02 11:47:20.259856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.949 [2024-11-02 11:47:20.259885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.949 qpair failed and we were unable to recover it. 00:35:19.949 [2024-11-02 11:47:20.269696] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.949 [2024-11-02 11:47:20.269810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.949 [2024-11-02 11:47:20.269836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.949 [2024-11-02 11:47:20.269850] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.949 [2024-11-02 11:47:20.269863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.949 [2024-11-02 11:47:20.269892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.949 qpair failed and we were unable to recover it. 
00:35:19.949 [2024-11-02 11:47:20.279722] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.949 [2024-11-02 11:47:20.279860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.949 [2024-11-02 11:47:20.279885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.949 [2024-11-02 11:47:20.279899] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.949 [2024-11-02 11:47:20.279912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.949 [2024-11-02 11:47:20.279940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.949 qpair failed and we were unable to recover it. 00:35:19.949 [2024-11-02 11:47:20.289753] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.949 [2024-11-02 11:47:20.289865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.949 [2024-11-02 11:47:20.289890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.949 [2024-11-02 11:47:20.289904] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.950 [2024-11-02 11:47:20.289922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.950 [2024-11-02 11:47:20.289951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.950 qpair failed and we were unable to recover it. 00:35:19.950 [2024-11-02 11:47:20.299784] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.950 [2024-11-02 11:47:20.299913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.950 [2024-11-02 11:47:20.299939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.950 [2024-11-02 11:47:20.299953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.950 [2024-11-02 11:47:20.299966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.950 [2024-11-02 11:47:20.299994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.950 qpair failed and we were unable to recover it. 
00:35:19.950 [2024-11-02 11:47:20.309828] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.950 [2024-11-02 11:47:20.309960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.950 [2024-11-02 11:47:20.309985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.950 [2024-11-02 11:47:20.309999] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.950 [2024-11-02 11:47:20.310012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.950 [2024-11-02 11:47:20.310040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.950 qpair failed and we were unable to recover it. 00:35:19.950 [2024-11-02 11:47:20.319834] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.950 [2024-11-02 11:47:20.319949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.950 [2024-11-02 11:47:20.319974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.950 [2024-11-02 11:47:20.319988] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.950 [2024-11-02 11:47:20.320001] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.950 [2024-11-02 11:47:20.320029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.950 qpair failed and we were unable to recover it. 00:35:19.950 [2024-11-02 11:47:20.329891] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.950 [2024-11-02 11:47:20.330012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.950 [2024-11-02 11:47:20.330038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.950 [2024-11-02 11:47:20.330055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.950 [2024-11-02 11:47:20.330069] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.950 [2024-11-02 11:47:20.330098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.950 qpair failed and we were unable to recover it. 
00:35:19.950 [2024-11-02 11:47:20.339885] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.950 [2024-11-02 11:47:20.340017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.950 [2024-11-02 11:47:20.340044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.950 [2024-11-02 11:47:20.340058] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.950 [2024-11-02 11:47:20.340071] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:19.950 [2024-11-02 11:47:20.340099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.950 qpair failed and we were unable to recover it. 00:35:20.209 [2024-11-02 11:47:20.349914] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.209 [2024-11-02 11:47:20.350035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.209 [2024-11-02 11:47:20.350062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.209 [2024-11-02 11:47:20.350076] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.209 [2024-11-02 11:47:20.350089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.209 [2024-11-02 11:47:20.350118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.209 qpair failed and we were unable to recover it. 00:35:20.209 [2024-11-02 11:47:20.359947] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.209 [2024-11-02 11:47:20.360063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.209 [2024-11-02 11:47:20.360089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.209 [2024-11-02 11:47:20.360104] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.209 [2024-11-02 11:47:20.360117] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.209 [2024-11-02 11:47:20.360145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.209 qpair failed and we were unable to recover it. 
00:35:20.209 [2024-11-02 11:47:20.370009] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.209 [2024-11-02 11:47:20.370130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.209 [2024-11-02 11:47:20.370156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.209 [2024-11-02 11:47:20.370170] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.209 [2024-11-02 11:47:20.370183] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.210 [2024-11-02 11:47:20.370211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.210 qpair failed and we were unable to recover it. 00:35:20.210 [2024-11-02 11:47:20.380080] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.210 [2024-11-02 11:47:20.380203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.210 [2024-11-02 11:47:20.380234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.210 [2024-11-02 11:47:20.380249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.210 [2024-11-02 11:47:20.380269] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.210 [2024-11-02 11:47:20.380299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.210 qpair failed and we were unable to recover it. 00:35:20.210 [2024-11-02 11:47:20.390008] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.210 [2024-11-02 11:47:20.390125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.210 [2024-11-02 11:47:20.390150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.210 [2024-11-02 11:47:20.390165] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.210 [2024-11-02 11:47:20.390178] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.210 [2024-11-02 11:47:20.390206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.210 qpair failed and we were unable to recover it. 
00:35:20.210 [2024-11-02 11:47:20.400056] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.210 [2024-11-02 11:47:20.400190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.210 [2024-11-02 11:47:20.400216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.210 [2024-11-02 11:47:20.400230] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.210 [2024-11-02 11:47:20.400243] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.210 [2024-11-02 11:47:20.400282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.210 qpair failed and we were unable to recover it. 00:35:20.210 [2024-11-02 11:47:20.410170] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.210 [2024-11-02 11:47:20.410283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.210 [2024-11-02 11:47:20.410309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.210 [2024-11-02 11:47:20.410323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.210 [2024-11-02 11:47:20.410337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.210 [2024-11-02 11:47:20.410367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.210 qpair failed and we were unable to recover it. 00:35:20.210 [2024-11-02 11:47:20.420150] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.210 [2024-11-02 11:47:20.420324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.210 [2024-11-02 11:47:20.420350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.210 [2024-11-02 11:47:20.420365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.210 [2024-11-02 11:47:20.420383] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.210 [2024-11-02 11:47:20.420413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.210 qpair failed and we were unable to recover it. 
00:35:20.210 [2024-11-02 11:47:20.430127] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.210 [2024-11-02 11:47:20.430250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.210 [2024-11-02 11:47:20.430282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.210 [2024-11-02 11:47:20.430297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.210 [2024-11-02 11:47:20.430310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.210 [2024-11-02 11:47:20.430339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.210 qpair failed and we were unable to recover it. 00:35:20.210 [2024-11-02 11:47:20.440178] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.210 [2024-11-02 11:47:20.440303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.210 [2024-11-02 11:47:20.440329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.210 [2024-11-02 11:47:20.440343] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.210 [2024-11-02 11:47:20.440355] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.210 [2024-11-02 11:47:20.440382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.210 qpair failed and we were unable to recover it. 00:35:20.210 [2024-11-02 11:47:20.450173] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.210 [2024-11-02 11:47:20.450289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.210 [2024-11-02 11:47:20.450315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.210 [2024-11-02 11:47:20.450329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.210 [2024-11-02 11:47:20.450342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.210 [2024-11-02 11:47:20.450372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.210 qpair failed and we were unable to recover it. 
00:35:20.210 [2024-11-02 11:47:20.460227] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.210 [2024-11-02 11:47:20.460372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.210 [2024-11-02 11:47:20.460397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.210 [2024-11-02 11:47:20.460411] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.210 [2024-11-02 11:47:20.460424] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.210 [2024-11-02 11:47:20.460451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.210 qpair failed and we were unable to recover it. 00:35:20.210 [2024-11-02 11:47:20.470231] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.210 [2024-11-02 11:47:20.470358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.210 [2024-11-02 11:47:20.470384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.210 [2024-11-02 11:47:20.470398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.210 [2024-11-02 11:47:20.470410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.210 [2024-11-02 11:47:20.470438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.210 qpair failed and we were unable to recover it. 00:35:20.210 [2024-11-02 11:47:20.480355] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.210 [2024-11-02 11:47:20.480472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.210 [2024-11-02 11:47:20.480497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.210 [2024-11-02 11:47:20.480510] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.210 [2024-11-02 11:47:20.480522] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.210 [2024-11-02 11:47:20.480550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.210 qpair failed and we were unable to recover it. 
00:35:20.210 [2024-11-02 11:47:20.490314] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.210 [2024-11-02 11:47:20.490453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.210 [2024-11-02 11:47:20.490480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.210 [2024-11-02 11:47:20.490499] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.210 [2024-11-02 11:47:20.490514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.210 [2024-11-02 11:47:20.490544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.210 qpair failed and we were unable to recover it. 00:35:20.210 [2024-11-02 11:47:20.500402] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.210 [2024-11-02 11:47:20.500528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.210 [2024-11-02 11:47:20.500554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.210 [2024-11-02 11:47:20.500568] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.210 [2024-11-02 11:47:20.500581] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.210 [2024-11-02 11:47:20.500611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.210 qpair failed and we were unable to recover it. 00:35:20.211 [2024-11-02 11:47:20.510371] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.211 [2024-11-02 11:47:20.510489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.211 [2024-11-02 11:47:20.510520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.211 [2024-11-02 11:47:20.510534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.211 [2024-11-02 11:47:20.510548] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.211 [2024-11-02 11:47:20.510575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.211 qpair failed and we were unable to recover it. 
00:35:20.211 [2024-11-02 11:47:20.520394] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.211 [2024-11-02 11:47:20.520511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.211 [2024-11-02 11:47:20.520536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.211 [2024-11-02 11:47:20.520550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.211 [2024-11-02 11:47:20.520563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.211 [2024-11-02 11:47:20.520591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.211 qpair failed and we were unable to recover it. 00:35:20.211 [2024-11-02 11:47:20.530417] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.211 [2024-11-02 11:47:20.530532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.211 [2024-11-02 11:47:20.530557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.211 [2024-11-02 11:47:20.530571] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.211 [2024-11-02 11:47:20.530583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.211 [2024-11-02 11:47:20.530611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.211 qpair failed and we were unable to recover it. 00:35:20.211 [2024-11-02 11:47:20.540465] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.211 [2024-11-02 11:47:20.540592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.211 [2024-11-02 11:47:20.540617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.211 [2024-11-02 11:47:20.540631] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.211 [2024-11-02 11:47:20.540644] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.211 [2024-11-02 11:47:20.540672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.211 qpair failed and we were unable to recover it. 
00:35:20.211 [2024-11-02 11:47:20.550499] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.211 [2024-11-02 11:47:20.550628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.211 [2024-11-02 11:47:20.550654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.211 [2024-11-02 11:47:20.550668] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.211 [2024-11-02 11:47:20.550685] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.211 [2024-11-02 11:47:20.550714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.211 qpair failed and we were unable to recover it. 00:35:20.211 [2024-11-02 11:47:20.560498] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.211 [2024-11-02 11:47:20.560618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.211 [2024-11-02 11:47:20.560643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.211 [2024-11-02 11:47:20.560657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.211 [2024-11-02 11:47:20.560670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.211 [2024-11-02 11:47:20.560698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.211 qpair failed and we were unable to recover it. 00:35:20.211 [2024-11-02 11:47:20.570613] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.211 [2024-11-02 11:47:20.570722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.211 [2024-11-02 11:47:20.570747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.211 [2024-11-02 11:47:20.570762] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.211 [2024-11-02 11:47:20.570775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.211 [2024-11-02 11:47:20.570803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.211 qpair failed and we were unable to recover it. 
00:35:20.211 [2024-11-02 11:47:20.580576] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.211 [2024-11-02 11:47:20.580742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.211 [2024-11-02 11:47:20.580768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.211 [2024-11-02 11:47:20.580782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.211 [2024-11-02 11:47:20.580795] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.211 [2024-11-02 11:47:20.580822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.211 qpair failed and we were unable to recover it. 00:35:20.211 [2024-11-02 11:47:20.590689] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.211 [2024-11-02 11:47:20.590810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.211 [2024-11-02 11:47:20.590835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.211 [2024-11-02 11:47:20.590850] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.211 [2024-11-02 11:47:20.590863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.211 [2024-11-02 11:47:20.590892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.211 qpair failed and we were unable to recover it. 00:35:20.211 [2024-11-02 11:47:20.600629] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.211 [2024-11-02 11:47:20.600749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.211 [2024-11-02 11:47:20.600775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.211 [2024-11-02 11:47:20.600789] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.211 [2024-11-02 11:47:20.600802] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.211 [2024-11-02 11:47:20.600830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.211 qpair failed and we were unable to recover it. 
00:35:20.471 [2024-11-02 11:47:20.610674] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.471 [2024-11-02 11:47:20.610796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.471 [2024-11-02 11:47:20.610823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.471 [2024-11-02 11:47:20.610837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.471 [2024-11-02 11:47:20.610850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.471 [2024-11-02 11:47:20.610879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.471 qpair failed and we were unable to recover it. 00:35:20.471 [2024-11-02 11:47:20.620752] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.471 [2024-11-02 11:47:20.620882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.471 [2024-11-02 11:47:20.620909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.471 [2024-11-02 11:47:20.620923] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.471 [2024-11-02 11:47:20.620936] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.471 [2024-11-02 11:47:20.620965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.471 qpair failed and we were unable to recover it. 00:35:20.471 [2024-11-02 11:47:20.630754] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.471 [2024-11-02 11:47:20.630876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.471 [2024-11-02 11:47:20.630901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.471 [2024-11-02 11:47:20.630915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.471 [2024-11-02 11:47:20.630928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.471 [2024-11-02 11:47:20.630956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.471 qpair failed and we were unable to recover it. 
00:35:20.471 [2024-11-02 11:47:20.640790] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.471 [2024-11-02 11:47:20.640920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.471 [2024-11-02 11:47:20.640952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.471 [2024-11-02 11:47:20.640967] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.471 [2024-11-02 11:47:20.640979] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.471 [2024-11-02 11:47:20.641008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.471 qpair failed and we were unable to recover it. 00:35:20.471 [2024-11-02 11:47:20.650830] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.471 [2024-11-02 11:47:20.650958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.471 [2024-11-02 11:47:20.650985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.471 [2024-11-02 11:47:20.651002] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.471 [2024-11-02 11:47:20.651016] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.471 [2024-11-02 11:47:20.651044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.471 qpair failed and we were unable to recover it. 00:35:20.471 [2024-11-02 11:47:20.660804] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.471 [2024-11-02 11:47:20.660925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.471 [2024-11-02 11:47:20.660951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.471 [2024-11-02 11:47:20.660965] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.471 [2024-11-02 11:47:20.660978] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.471 [2024-11-02 11:47:20.661009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.471 qpair failed and we were unable to recover it. 
00:35:20.471 [2024-11-02 11:47:20.670834] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.471 [2024-11-02 11:47:20.670959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.471 [2024-11-02 11:47:20.670984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.472 [2024-11-02 11:47:20.670998] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.472 [2024-11-02 11:47:20.671010] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.472 [2024-11-02 11:47:20.671038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.472 qpair failed and we were unable to recover it. 00:35:20.472 [2024-11-02 11:47:20.680872] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.472 [2024-11-02 11:47:20.681001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.472 [2024-11-02 11:47:20.681027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.472 [2024-11-02 11:47:20.681051] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.472 [2024-11-02 11:47:20.681066] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.472 [2024-11-02 11:47:20.681094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.472 qpair failed and we were unable to recover it. 00:35:20.472 [2024-11-02 11:47:20.690871] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.472 [2024-11-02 11:47:20.690987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.472 [2024-11-02 11:47:20.691012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.472 [2024-11-02 11:47:20.691026] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.472 [2024-11-02 11:47:20.691039] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.472 [2024-11-02 11:47:20.691068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.472 qpair failed and we were unable to recover it. 
00:35:20.472 [2024-11-02 11:47:20.700945] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.472 [2024-11-02 11:47:20.701072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.472 [2024-11-02 11:47:20.701097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.472 [2024-11-02 11:47:20.701111] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.472 [2024-11-02 11:47:20.701123] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.472 [2024-11-02 11:47:20.701151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.472 qpair failed and we were unable to recover it. 00:35:20.472 [2024-11-02 11:47:20.711051] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.472 [2024-11-02 11:47:20.711171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.472 [2024-11-02 11:47:20.711197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.472 [2024-11-02 11:47:20.711211] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.472 [2024-11-02 11:47:20.711224] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.472 [2024-11-02 11:47:20.711252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.472 qpair failed and we were unable to recover it. 00:35:20.472 [2024-11-02 11:47:20.720989] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.472 [2024-11-02 11:47:20.721113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.472 [2024-11-02 11:47:20.721139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.472 [2024-11-02 11:47:20.721153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.472 [2024-11-02 11:47:20.721165] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.472 [2024-11-02 11:47:20.721195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.472 qpair failed and we were unable to recover it. 
00:35:20.472 [2024-11-02 11:47:20.731005] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.472 [2024-11-02 11:47:20.731123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.472 [2024-11-02 11:47:20.731148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.472 [2024-11-02 11:47:20.731162] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.472 [2024-11-02 11:47:20.731175] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.472 [2024-11-02 11:47:20.731202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.472 qpair failed and we were unable to recover it. 00:35:20.472 [2024-11-02 11:47:20.741040] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.472 [2024-11-02 11:47:20.741160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.472 [2024-11-02 11:47:20.741185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.472 [2024-11-02 11:47:20.741199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.472 [2024-11-02 11:47:20.741212] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.472 [2024-11-02 11:47:20.741241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.472 qpair failed and we were unable to recover it. 00:35:20.472 [2024-11-02 11:47:20.751069] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.472 [2024-11-02 11:47:20.751192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.472 [2024-11-02 11:47:20.751217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.472 [2024-11-02 11:47:20.751230] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.472 [2024-11-02 11:47:20.751243] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.472 [2024-11-02 11:47:20.751279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.472 qpair failed and we were unable to recover it. 
00:35:20.472 [2024-11-02 11:47:20.761090] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.472 [2024-11-02 11:47:20.761212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.472 [2024-11-02 11:47:20.761238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.472 [2024-11-02 11:47:20.761252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.472 [2024-11-02 11:47:20.761272] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.472 [2024-11-02 11:47:20.761301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.472 qpair failed and we were unable to recover it. 00:35:20.472 [2024-11-02 11:47:20.771091] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.472 [2024-11-02 11:47:20.771219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.472 [2024-11-02 11:47:20.771249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.472 [2024-11-02 11:47:20.771279] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.472 [2024-11-02 11:47:20.771294] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.472 [2024-11-02 11:47:20.771322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.472 qpair failed and we were unable to recover it. 00:35:20.472 [2024-11-02 11:47:20.781138] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.472 [2024-11-02 11:47:20.781326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.472 [2024-11-02 11:47:20.781352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.472 [2024-11-02 11:47:20.781366] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.472 [2024-11-02 11:47:20.781379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.472 [2024-11-02 11:47:20.781408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.472 qpair failed and we were unable to recover it. 
00:35:20.472 [2024-11-02 11:47:20.791273] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.472 [2024-11-02 11:47:20.791406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.472 [2024-11-02 11:47:20.791431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.472 [2024-11-02 11:47:20.791444] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.472 [2024-11-02 11:47:20.791457] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.472 [2024-11-02 11:47:20.791486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.472 qpair failed and we were unable to recover it. 00:35:20.472 [2024-11-02 11:47:20.801189] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.472 [2024-11-02 11:47:20.801329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.472 [2024-11-02 11:47:20.801358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.472 [2024-11-02 11:47:20.801373] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.472 [2024-11-02 11:47:20.801387] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.473 [2024-11-02 11:47:20.801416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.473 qpair failed and we were unable to recover it. 00:35:20.473 [2024-11-02 11:47:20.811218] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.473 [2024-11-02 11:47:20.811356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.473 [2024-11-02 11:47:20.811382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.473 [2024-11-02 11:47:20.811403] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.473 [2024-11-02 11:47:20.811417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.473 [2024-11-02 11:47:20.811445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.473 qpair failed and we were unable to recover it. 
00:35:20.473 [2024-11-02 11:47:20.821248] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.473 [2024-11-02 11:47:20.821389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.473 [2024-11-02 11:47:20.821414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.473 [2024-11-02 11:47:20.821429] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.473 [2024-11-02 11:47:20.821443] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.473 [2024-11-02 11:47:20.821473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.473 qpair failed and we were unable to recover it. 00:35:20.473 [2024-11-02 11:47:20.831310] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.473 [2024-11-02 11:47:20.831433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.473 [2024-11-02 11:47:20.831458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.473 [2024-11-02 11:47:20.831473] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.473 [2024-11-02 11:47:20.831486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.473 [2024-11-02 11:47:20.831515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.473 qpair failed and we were unable to recover it. 00:35:20.473 [2024-11-02 11:47:20.841311] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.473 [2024-11-02 11:47:20.841441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.473 [2024-11-02 11:47:20.841466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.473 [2024-11-02 11:47:20.841480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.473 [2024-11-02 11:47:20.841493] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.473 [2024-11-02 11:47:20.841521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.473 qpair failed and we were unable to recover it. 
00:35:20.473 [2024-11-02 11:47:20.851347] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.473 [2024-11-02 11:47:20.851462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.473 [2024-11-02 11:47:20.851488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.473 [2024-11-02 11:47:20.851502] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.473 [2024-11-02 11:47:20.851515] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.473 [2024-11-02 11:47:20.851543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.473 qpair failed and we were unable to recover it. 00:35:20.473 [2024-11-02 11:47:20.861381] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.473 [2024-11-02 11:47:20.861541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.473 [2024-11-02 11:47:20.861567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.473 [2024-11-02 11:47:20.861581] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.473 [2024-11-02 11:47:20.861594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.473 [2024-11-02 11:47:20.861623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.473 qpair failed and we were unable to recover it. 00:35:20.473 [2024-11-02 11:47:20.871396] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.473 [2024-11-02 11:47:20.871525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.473 [2024-11-02 11:47:20.871552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.473 [2024-11-02 11:47:20.871583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.473 [2024-11-02 11:47:20.871608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.473 [2024-11-02 11:47:20.871640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.473 qpair failed and we were unable to recover it. 
00:35:20.733 [2024-11-02 11:47:20.881481] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.733 [2024-11-02 11:47:20.881638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.733 [2024-11-02 11:47:20.881666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.733 [2024-11-02 11:47:20.881687] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.733 [2024-11-02 11:47:20.881702] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.733 [2024-11-02 11:47:20.881731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.733 qpair failed and we were unable to recover it. 00:35:20.733 [2024-11-02 11:47:20.891449] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.733 [2024-11-02 11:47:20.891578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.733 [2024-11-02 11:47:20.891605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.733 [2024-11-02 11:47:20.891620] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.733 [2024-11-02 11:47:20.891633] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.733 [2024-11-02 11:47:20.891662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.733 qpair failed and we were unable to recover it. 00:35:20.733 [2024-11-02 11:47:20.901522] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.733 [2024-11-02 11:47:20.901653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.733 [2024-11-02 11:47:20.901684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.733 [2024-11-02 11:47:20.901699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.733 [2024-11-02 11:47:20.901712] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.733 [2024-11-02 11:47:20.901741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.733 qpair failed and we were unable to recover it. 
00:35:20.733 [2024-11-02 11:47:20.911566] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.733 [2024-11-02 11:47:20.911716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.733 [2024-11-02 11:47:20.911742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.733 [2024-11-02 11:47:20.911755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.733 [2024-11-02 11:47:20.911768] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.733 [2024-11-02 11:47:20.911796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.733 qpair failed and we were unable to recover it. 00:35:20.733 [2024-11-02 11:47:20.921606] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.733 [2024-11-02 11:47:20.921780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.733 [2024-11-02 11:47:20.921806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.733 [2024-11-02 11:47:20.921825] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.733 [2024-11-02 11:47:20.921839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.733 [2024-11-02 11:47:20.921868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.733 qpair failed and we were unable to recover it. 00:35:20.733 [2024-11-02 11:47:20.931567] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.733 [2024-11-02 11:47:20.931685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.733 [2024-11-02 11:47:20.931711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.733 [2024-11-02 11:47:20.931725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.733 [2024-11-02 11:47:20.931738] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.733 [2024-11-02 11:47:20.931766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.733 qpair failed and we were unable to recover it. 
00:35:20.733 [2024-11-02 11:47:20.941649] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.733 [2024-11-02 11:47:20.941780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.733 [2024-11-02 11:47:20.941806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.733 [2024-11-02 11:47:20.941826] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.733 [2024-11-02 11:47:20.941840] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.733 [2024-11-02 11:47:20.941869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.733 qpair failed and we were unable to recover it. 00:35:20.733 [2024-11-02 11:47:20.951637] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.733 [2024-11-02 11:47:20.951761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.733 [2024-11-02 11:47:20.951787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.733 [2024-11-02 11:47:20.951801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.733 [2024-11-02 11:47:20.951818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.733 [2024-11-02 11:47:20.951846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.733 qpair failed and we were unable to recover it. 00:35:20.733 [2024-11-02 11:47:20.961636] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.733 [2024-11-02 11:47:20.961768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.733 [2024-11-02 11:47:20.961794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.733 [2024-11-02 11:47:20.961808] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.733 [2024-11-02 11:47:20.961821] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.733 [2024-11-02 11:47:20.961849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.733 qpair failed and we were unable to recover it. 
00:35:20.733 [2024-11-02 11:47:20.971667] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.733 [2024-11-02 11:47:20.971788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.733 [2024-11-02 11:47:20.971814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.733 [2024-11-02 11:47:20.971828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.733 [2024-11-02 11:47:20.971842] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.733 [2024-11-02 11:47:20.971870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.733 qpair failed and we were unable to recover it. 00:35:20.733 [2024-11-02 11:47:20.981742] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.733 [2024-11-02 11:47:20.981916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.734 [2024-11-02 11:47:20.981942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.734 [2024-11-02 11:47:20.981956] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.734 [2024-11-02 11:47:20.981969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.734 [2024-11-02 11:47:20.981998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.734 qpair failed and we were unable to recover it. 00:35:20.734 [2024-11-02 11:47:20.991753] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.734 [2024-11-02 11:47:20.991882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.734 [2024-11-02 11:47:20.991908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.734 [2024-11-02 11:47:20.991922] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.734 [2024-11-02 11:47:20.991935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.734 [2024-11-02 11:47:20.991962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.734 qpair failed and we were unable to recover it. 
00:35:20.734 [2024-11-02 11:47:21.001809] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.734 [2024-11-02 11:47:21.001956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.734 [2024-11-02 11:47:21.001981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.734 [2024-11-02 11:47:21.001995] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.734 [2024-11-02 11:47:21.002008] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.734 [2024-11-02 11:47:21.002038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.734 qpair failed and we were unable to recover it. 00:35:20.734 [2024-11-02 11:47:21.011845] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.734 [2024-11-02 11:47:21.011961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.734 [2024-11-02 11:47:21.011987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.734 [2024-11-02 11:47:21.012001] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.734 [2024-11-02 11:47:21.012014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.734 [2024-11-02 11:47:21.012042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.734 qpair failed and we were unable to recover it. 00:35:20.734 [2024-11-02 11:47:21.021812] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.734 [2024-11-02 11:47:21.021931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.734 [2024-11-02 11:47:21.021956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.734 [2024-11-02 11:47:21.021970] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.734 [2024-11-02 11:47:21.021983] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.734 [2024-11-02 11:47:21.022013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.734 qpair failed and we were unable to recover it. 
00:35:20.734 [2024-11-02 11:47:21.031840] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.734 [2024-11-02 11:47:21.031962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.734 [2024-11-02 11:47:21.031994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.734 [2024-11-02 11:47:21.032008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.734 [2024-11-02 11:47:21.032021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.734 [2024-11-02 11:47:21.032049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.734 qpair failed and we were unable to recover it. 00:35:20.734 [2024-11-02 11:47:21.041854] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.734 [2024-11-02 11:47:21.041972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.734 [2024-11-02 11:47:21.041997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.734 [2024-11-02 11:47:21.042011] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.734 [2024-11-02 11:47:21.042024] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.734 [2024-11-02 11:47:21.042052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.734 qpair failed and we were unable to recover it. 00:35:20.734 [2024-11-02 11:47:21.051997] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.734 [2024-11-02 11:47:21.052128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.734 [2024-11-02 11:47:21.052153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.734 [2024-11-02 11:47:21.052167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.734 [2024-11-02 11:47:21.052182] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.734 [2024-11-02 11:47:21.052210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.734 qpair failed and we were unable to recover it. 
00:35:20.734 [2024-11-02 11:47:21.061983] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.734 [2024-11-02 11:47:21.062106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.734 [2024-11-02 11:47:21.062132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.734 [2024-11-02 11:47:21.062146] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.734 [2024-11-02 11:47:21.062159] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.734 [2024-11-02 11:47:21.062188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.734 qpair failed and we were unable to recover it. 00:35:20.734 [2024-11-02 11:47:21.071955] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.734 [2024-11-02 11:47:21.072084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.734 [2024-11-02 11:47:21.072110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.734 [2024-11-02 11:47:21.072131] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.734 [2024-11-02 11:47:21.072144] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.734 [2024-11-02 11:47:21.072173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.734 qpair failed and we were unable to recover it. 00:35:20.734 [2024-11-02 11:47:21.081995] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.734 [2024-11-02 11:47:21.082117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.734 [2024-11-02 11:47:21.082142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.734 [2024-11-02 11:47:21.082156] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.734 [2024-11-02 11:47:21.082169] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.734 [2024-11-02 11:47:21.082198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.734 qpair failed and we were unable to recover it. 
00:35:20.734 [2024-11-02 11:47:21.091998] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.734 [2024-11-02 11:47:21.092118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.734 [2024-11-02 11:47:21.092144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.734 [2024-11-02 11:47:21.092158] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.734 [2024-11-02 11:47:21.092171] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.734 [2024-11-02 11:47:21.092200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.734 qpair failed and we were unable to recover it. 00:35:20.734 [2024-11-02 11:47:21.102033] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.734 [2024-11-02 11:47:21.102156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.734 [2024-11-02 11:47:21.102181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.734 [2024-11-02 11:47:21.102195] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.734 [2024-11-02 11:47:21.102209] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.734 [2024-11-02 11:47:21.102237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.734 qpair failed and we were unable to recover it. 00:35:20.734 [2024-11-02 11:47:21.112087] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.734 [2024-11-02 11:47:21.112216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.734 [2024-11-02 11:47:21.112242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.734 [2024-11-02 11:47:21.112271] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.735 [2024-11-02 11:47:21.112285] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.735 [2024-11-02 11:47:21.112317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.735 qpair failed and we were unable to recover it. 
00:35:20.735 [2024-11-02 11:47:21.122119] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.735 [2024-11-02 11:47:21.122248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.735 [2024-11-02 11:47:21.122281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.735 [2024-11-02 11:47:21.122295] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.735 [2024-11-02 11:47:21.122308] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.735 [2024-11-02 11:47:21.122336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.735 qpair failed and we were unable to recover it. 00:35:20.735 [2024-11-02 11:47:21.132149] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.735 [2024-11-02 11:47:21.132288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.735 [2024-11-02 11:47:21.132321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.735 [2024-11-02 11:47:21.132337] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.735 [2024-11-02 11:47:21.132351] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.735 [2024-11-02 11:47:21.132380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.735 qpair failed and we were unable to recover it. 00:35:20.996 [2024-11-02 11:47:21.142207] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.996 [2024-11-02 11:47:21.142348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.996 [2024-11-02 11:47:21.142375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.996 [2024-11-02 11:47:21.142389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.996 [2024-11-02 11:47:21.142403] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.996 [2024-11-02 11:47:21.142431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.996 qpair failed and we were unable to recover it. 
00:35:20.996 [2024-11-02 11:47:21.152199] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.996 [2024-11-02 11:47:21.152318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.996 [2024-11-02 11:47:21.152345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.996 [2024-11-02 11:47:21.152359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.996 [2024-11-02 11:47:21.152372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.996 [2024-11-02 11:47:21.152400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.996 qpair failed and we were unable to recover it. 00:35:20.996 [2024-11-02 11:47:21.162213] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.996 [2024-11-02 11:47:21.162357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.996 [2024-11-02 11:47:21.162382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.996 [2024-11-02 11:47:21.162396] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.996 [2024-11-02 11:47:21.162409] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.996 [2024-11-02 11:47:21.162438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.996 qpair failed and we were unable to recover it. 00:35:20.996 [2024-11-02 11:47:21.172223] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.996 [2024-11-02 11:47:21.172357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.996 [2024-11-02 11:47:21.172383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.996 [2024-11-02 11:47:21.172397] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.996 [2024-11-02 11:47:21.172411] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.996 [2024-11-02 11:47:21.172441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.996 qpair failed and we were unable to recover it. 
00:35:20.996 [2024-11-02 11:47:21.182275] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.996 [2024-11-02 11:47:21.182401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.996 [2024-11-02 11:47:21.182428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.996 [2024-11-02 11:47:21.182442] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.996 [2024-11-02 11:47:21.182456] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.996 [2024-11-02 11:47:21.182484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.996 qpair failed and we were unable to recover it. 00:35:20.996 [2024-11-02 11:47:21.192311] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.996 [2024-11-02 11:47:21.192435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.996 [2024-11-02 11:47:21.192461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.996 [2024-11-02 11:47:21.192476] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.996 [2024-11-02 11:47:21.192490] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.996 [2024-11-02 11:47:21.192519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.996 qpair failed and we were unable to recover it. 00:35:20.996 [2024-11-02 11:47:21.202335] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.996 [2024-11-02 11:47:21.202457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.996 [2024-11-02 11:47:21.202484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.996 [2024-11-02 11:47:21.202504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.996 [2024-11-02 11:47:21.202517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.996 [2024-11-02 11:47:21.202545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.996 qpair failed and we were unable to recover it. 
00:35:20.996 [2024-11-02 11:47:21.212395] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.996 [2024-11-02 11:47:21.212513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.996 [2024-11-02 11:47:21.212538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.996 [2024-11-02 11:47:21.212553] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.996 [2024-11-02 11:47:21.212566] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.996 [2024-11-02 11:47:21.212594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.996 qpair failed and we were unable to recover it. 00:35:20.996 [2024-11-02 11:47:21.222393] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.996 [2024-11-02 11:47:21.222527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.996 [2024-11-02 11:47:21.222553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.997 [2024-11-02 11:47:21.222567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.997 [2024-11-02 11:47:21.222580] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.997 [2024-11-02 11:47:21.222609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.997 qpair failed and we were unable to recover it. 00:35:20.997 [2024-11-02 11:47:21.232404] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.997 [2024-11-02 11:47:21.232520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.997 [2024-11-02 11:47:21.232545] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.997 [2024-11-02 11:47:21.232559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.997 [2024-11-02 11:47:21.232572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.997 [2024-11-02 11:47:21.232600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.997 qpair failed and we were unable to recover it. 
00:35:20.997 [2024-11-02 11:47:21.242573] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.997 [2024-11-02 11:47:21.242702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.997 [2024-11-02 11:47:21.242727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.997 [2024-11-02 11:47:21.242741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.997 [2024-11-02 11:47:21.242754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.997 [2024-11-02 11:47:21.242781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.997 qpair failed and we were unable to recover it. 00:35:20.997 [2024-11-02 11:47:21.252475] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.997 [2024-11-02 11:47:21.252595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.997 [2024-11-02 11:47:21.252621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.997 [2024-11-02 11:47:21.252635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.997 [2024-11-02 11:47:21.252647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.997 [2024-11-02 11:47:21.252675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.997 qpair failed and we were unable to recover it. 00:35:20.997 [2024-11-02 11:47:21.262500] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.997 [2024-11-02 11:47:21.262621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.997 [2024-11-02 11:47:21.262646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.997 [2024-11-02 11:47:21.262661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.997 [2024-11-02 11:47:21.262673] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.997 [2024-11-02 11:47:21.262701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.997 qpair failed and we were unable to recover it. 
00:35:20.997 [2024-11-02 11:47:21.272551] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.997 [2024-11-02 11:47:21.272675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.997 [2024-11-02 11:47:21.272703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.997 [2024-11-02 11:47:21.272721] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.997 [2024-11-02 11:47:21.272736] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.997 [2024-11-02 11:47:21.272766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.997 qpair failed and we were unable to recover it. 00:35:20.997 [2024-11-02 11:47:21.282563] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.997 [2024-11-02 11:47:21.282692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.997 [2024-11-02 11:47:21.282718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.997 [2024-11-02 11:47:21.282733] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.997 [2024-11-02 11:47:21.282746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.997 [2024-11-02 11:47:21.282776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.997 qpair failed and we were unable to recover it. 00:35:20.997 [2024-11-02 11:47:21.292595] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.997 [2024-11-02 11:47:21.292724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.997 [2024-11-02 11:47:21.292750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.997 [2024-11-02 11:47:21.292767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.997 [2024-11-02 11:47:21.292781] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.997 [2024-11-02 11:47:21.292810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.997 qpair failed and we were unable to recover it. 
00:35:20.997 [2024-11-02 11:47:21.302611] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.997 [2024-11-02 11:47:21.302735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.997 [2024-11-02 11:47:21.302761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.997 [2024-11-02 11:47:21.302776] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.997 [2024-11-02 11:47:21.302789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.997 [2024-11-02 11:47:21.302817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.997 qpair failed and we were unable to recover it. 00:35:20.997 [2024-11-02 11:47:21.312637] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.997 [2024-11-02 11:47:21.312755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.997 [2024-11-02 11:47:21.312781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.997 [2024-11-02 11:47:21.312795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.997 [2024-11-02 11:47:21.312808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.997 [2024-11-02 11:47:21.312836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.997 qpair failed and we were unable to recover it. 00:35:20.997 [2024-11-02 11:47:21.322684] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.997 [2024-11-02 11:47:21.322806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.997 [2024-11-02 11:47:21.322832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.997 [2024-11-02 11:47:21.322846] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.997 [2024-11-02 11:47:21.322860] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.997 [2024-11-02 11:47:21.322887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.997 qpair failed and we were unable to recover it. 
00:35:20.997 [2024-11-02 11:47:21.332722] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.997 [2024-11-02 11:47:21.332858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.997 [2024-11-02 11:47:21.332883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.997 [2024-11-02 11:47:21.332904] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.997 [2024-11-02 11:47:21.332918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.997 [2024-11-02 11:47:21.332946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.997 qpair failed and we were unable to recover it. 00:35:20.997 [2024-11-02 11:47:21.342764] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.997 [2024-11-02 11:47:21.342891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.997 [2024-11-02 11:47:21.342917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.997 [2024-11-02 11:47:21.342934] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.997 [2024-11-02 11:47:21.342948] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.997 [2024-11-02 11:47:21.342976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.997 qpair failed and we were unable to recover it. 00:35:20.997 [2024-11-02 11:47:21.352792] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.997 [2024-11-02 11:47:21.352917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.997 [2024-11-02 11:47:21.352942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.997 [2024-11-02 11:47:21.352956] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.998 [2024-11-02 11:47:21.352969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:20.998 [2024-11-02 11:47:21.352999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.998 qpair failed and we were unable to recover it. 
00:35:20.998 [2024-11-02 11:47:21.362780] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:20.998 [2024-11-02 11:47:21.362898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:20.998 [2024-11-02 11:47:21.362924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:20.998 [2024-11-02 11:47:21.362938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:20.998 [2024-11-02 11:47:21.362951] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690
00:35:20.998 [2024-11-02 11:47:21.362980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:35:20.998 qpair failed and we were unable to recover it.

The same seven-line failure sequence repeats for every subsequent I/O qpair connect attempt, roughly one attempt every 10 ms, from 11:47:21.372 through 11:47:22.044 (69 attempts in total, including the one shown above). Every repetition targets the same tqpair=0x1ddc690 / qpair id 3 at traddr 10.0.0.2, trsvcid 4420, subnqn nqn.2016-06.io.spdk:cnode1, and every one ends with "qpair failed and we were unable to recover it."
00:35:21.790 [2024-11-02 11:47:22.054750] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.790 [2024-11-02 11:47:22.054878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.790 [2024-11-02 11:47:22.054904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.790 [2024-11-02 11:47:22.054918] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.790 [2024-11-02 11:47:22.054932] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:21.790 [2024-11-02 11:47:22.054960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:21.790 qpair failed and we were unable to recover it. 00:35:21.790 [2024-11-02 11:47:22.064771] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.790 [2024-11-02 11:47:22.064897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.790 [2024-11-02 11:47:22.064922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.790 [2024-11-02 11:47:22.064937] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.790 [2024-11-02 11:47:22.064950] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:21.790 [2024-11-02 11:47:22.064978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:21.790 qpair failed and we were unable to recover it. 00:35:21.790 [2024-11-02 11:47:22.074779] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.790 [2024-11-02 11:47:22.074924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.790 [2024-11-02 11:47:22.074949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.790 [2024-11-02 11:47:22.074963] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.790 [2024-11-02 11:47:22.074976] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:21.790 [2024-11-02 11:47:22.075003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:21.790 qpair failed and we were unable to recover it. 
00:35:21.790 [2024-11-02 11:47:22.084845] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.790 [2024-11-02 11:47:22.085021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.790 [2024-11-02 11:47:22.085046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.790 [2024-11-02 11:47:22.085061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.790 [2024-11-02 11:47:22.085074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:21.790 [2024-11-02 11:47:22.085102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:21.790 qpair failed and we were unable to recover it. 00:35:21.790 [2024-11-02 11:47:22.094862] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.790 [2024-11-02 11:47:22.094977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.790 [2024-11-02 11:47:22.095003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.790 [2024-11-02 11:47:22.095017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.790 [2024-11-02 11:47:22.095032] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:21.790 [2024-11-02 11:47:22.095060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:21.790 qpair failed and we were unable to recover it. 00:35:21.790 [2024-11-02 11:47:22.104917] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.790 [2024-11-02 11:47:22.105042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.790 [2024-11-02 11:47:22.105067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.790 [2024-11-02 11:47:22.105081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.790 [2024-11-02 11:47:22.105094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:21.790 [2024-11-02 11:47:22.105122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:21.790 qpair failed and we were unable to recover it. 
00:35:21.790 [2024-11-02 11:47:22.114924] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.790 [2024-11-02 11:47:22.115055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.790 [2024-11-02 11:47:22.115081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.790 [2024-11-02 11:47:22.115100] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.790 [2024-11-02 11:47:22.115114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:21.791 [2024-11-02 11:47:22.115142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:21.791 qpair failed and we were unable to recover it. 00:35:21.791 [2024-11-02 11:47:22.124955] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.791 [2024-11-02 11:47:22.125075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.791 [2024-11-02 11:47:22.125101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.791 [2024-11-02 11:47:22.125115] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.791 [2024-11-02 11:47:22.125128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:21.791 [2024-11-02 11:47:22.125158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:21.791 qpair failed and we were unable to recover it. 00:35:21.791 [2024-11-02 11:47:22.135014] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.791 [2024-11-02 11:47:22.135159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.791 [2024-11-02 11:47:22.135185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.791 [2024-11-02 11:47:22.135199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.791 [2024-11-02 11:47:22.135211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:21.791 [2024-11-02 11:47:22.135239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:21.791 qpair failed and we were unable to recover it. 
00:35:21.791 [2024-11-02 11:47:22.145113] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.791 [2024-11-02 11:47:22.145251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.791 [2024-11-02 11:47:22.145286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.791 [2024-11-02 11:47:22.145301] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.791 [2024-11-02 11:47:22.145316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:21.791 [2024-11-02 11:47:22.145345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:21.791 qpair failed and we were unable to recover it. 00:35:21.791 [2024-11-02 11:47:22.155030] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.791 [2024-11-02 11:47:22.155153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.791 [2024-11-02 11:47:22.155179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.791 [2024-11-02 11:47:22.155193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.791 [2024-11-02 11:47:22.155206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:21.791 [2024-11-02 11:47:22.155241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:21.791 qpair failed and we were unable to recover it. 00:35:21.791 [2024-11-02 11:47:22.165046] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.791 [2024-11-02 11:47:22.165168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.791 [2024-11-02 11:47:22.165194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.791 [2024-11-02 11:47:22.165208] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.791 [2024-11-02 11:47:22.165221] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:21.791 [2024-11-02 11:47:22.165251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:21.791 qpair failed and we were unable to recover it. 
00:35:21.791 [2024-11-02 11:47:22.175066] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.791 [2024-11-02 11:47:22.175237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.791 [2024-11-02 11:47:22.175269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.791 [2024-11-02 11:47:22.175285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.791 [2024-11-02 11:47:22.175298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:21.791 [2024-11-02 11:47:22.175326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:21.791 qpair failed and we were unable to recover it. 00:35:21.791 [2024-11-02 11:47:22.185124] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.791 [2024-11-02 11:47:22.185275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.791 [2024-11-02 11:47:22.185303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.791 [2024-11-02 11:47:22.185318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.791 [2024-11-02 11:47:22.185331] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:21.791 [2024-11-02 11:47:22.185360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:21.791 qpair failed and we were unable to recover it. 00:35:22.051 [2024-11-02 11:47:22.195161] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.051 [2024-11-02 11:47:22.195293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.051 [2024-11-02 11:47:22.195322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.051 [2024-11-02 11:47:22.195337] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.051 [2024-11-02 11:47:22.195350] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:22.051 [2024-11-02 11:47:22.195380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.051 qpair failed and we were unable to recover it. 
00:35:22.051 [2024-11-02 11:47:22.205167] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.051 [2024-11-02 11:47:22.205311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.051 [2024-11-02 11:47:22.205337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.051 [2024-11-02 11:47:22.205351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.051 [2024-11-02 11:47:22.205363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:22.051 [2024-11-02 11:47:22.205394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.051 qpair failed and we were unable to recover it. 00:35:22.051 [2024-11-02 11:47:22.215197] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.051 [2024-11-02 11:47:22.215326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.051 [2024-11-02 11:47:22.215352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.051 [2024-11-02 11:47:22.215366] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.051 [2024-11-02 11:47:22.215379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:22.051 [2024-11-02 11:47:22.215409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.051 qpair failed and we were unable to recover it. 00:35:22.051 [2024-11-02 11:47:22.225250] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.051 [2024-11-02 11:47:22.225383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.051 [2024-11-02 11:47:22.225408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.051 [2024-11-02 11:47:22.225422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.051 [2024-11-02 11:47:22.225436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:22.051 [2024-11-02 11:47:22.225464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.051 qpair failed and we were unable to recover it. 
00:35:22.051 [2024-11-02 11:47:22.235250] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.051 [2024-11-02 11:47:22.235397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.051 [2024-11-02 11:47:22.235422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.051 [2024-11-02 11:47:22.235436] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.051 [2024-11-02 11:47:22.235449] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:22.051 [2024-11-02 11:47:22.235478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.051 qpair failed and we were unable to recover it. 00:35:22.051 [2024-11-02 11:47:22.245349] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.051 [2024-11-02 11:47:22.245500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.051 [2024-11-02 11:47:22.245526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.051 [2024-11-02 11:47:22.245546] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.051 [2024-11-02 11:47:22.245560] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:22.051 [2024-11-02 11:47:22.245588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.051 qpair failed and we were unable to recover it. 00:35:22.051 [2024-11-02 11:47:22.255347] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.051 [2024-11-02 11:47:22.255461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.051 [2024-11-02 11:47:22.255487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.051 [2024-11-02 11:47:22.255501] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.051 [2024-11-02 11:47:22.255514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:22.051 [2024-11-02 11:47:22.255542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.051 qpair failed and we were unable to recover it. 
00:35:22.051 [2024-11-02 11:47:22.265393] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.051 [2024-11-02 11:47:22.265557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.052 [2024-11-02 11:47:22.265583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.052 [2024-11-02 11:47:22.265597] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.052 [2024-11-02 11:47:22.265612] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:22.052 [2024-11-02 11:47:22.265640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.052 qpair failed and we were unable to recover it. 00:35:22.052 [2024-11-02 11:47:22.275436] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.052 [2024-11-02 11:47:22.275558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.052 [2024-11-02 11:47:22.275583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.052 [2024-11-02 11:47:22.275597] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.052 [2024-11-02 11:47:22.275610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:22.052 [2024-11-02 11:47:22.275639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.052 qpair failed and we were unable to recover it. 00:35:22.052 [2024-11-02 11:47:22.285399] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.052 [2024-11-02 11:47:22.285555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.052 [2024-11-02 11:47:22.285580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.052 [2024-11-02 11:47:22.285594] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.052 [2024-11-02 11:47:22.285609] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:22.052 [2024-11-02 11:47:22.285642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.052 qpair failed and we were unable to recover it. 
00:35:22.052 [2024-11-02 11:47:22.295412] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.052 [2024-11-02 11:47:22.295527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.052 [2024-11-02 11:47:22.295552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.052 [2024-11-02 11:47:22.295567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.052 [2024-11-02 11:47:22.295580] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:22.052 [2024-11-02 11:47:22.295608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.052 qpair failed and we were unable to recover it. 00:35:22.052 [2024-11-02 11:47:22.305486] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.052 [2024-11-02 11:47:22.305635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.052 [2024-11-02 11:47:22.305661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.052 [2024-11-02 11:47:22.305675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.052 [2024-11-02 11:47:22.305688] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:22.052 [2024-11-02 11:47:22.305716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.052 qpair failed and we were unable to recover it. 00:35:22.052 [2024-11-02 11:47:22.315511] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.052 [2024-11-02 11:47:22.315639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.052 [2024-11-02 11:47:22.315665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.052 [2024-11-02 11:47:22.315679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.052 [2024-11-02 11:47:22.315693] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:22.052 [2024-11-02 11:47:22.315720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.052 qpair failed and we were unable to recover it. 
00:35:22.052 [2024-11-02 11:47:22.325532] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.052 [2024-11-02 11:47:22.325662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.052 [2024-11-02 11:47:22.325688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.052 [2024-11-02 11:47:22.325702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.052 [2024-11-02 11:47:22.325715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:22.052 [2024-11-02 11:47:22.325743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.052 qpair failed and we were unable to recover it. 00:35:22.052 [2024-11-02 11:47:22.335523] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.052 [2024-11-02 11:47:22.335661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.052 [2024-11-02 11:47:22.335687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.052 [2024-11-02 11:47:22.335701] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.052 [2024-11-02 11:47:22.335713] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:22.052 [2024-11-02 11:47:22.335741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.052 qpair failed and we were unable to recover it. 00:35:22.052 [2024-11-02 11:47:22.345611] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.052 [2024-11-02 11:47:22.345783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.052 [2024-11-02 11:47:22.345808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.052 [2024-11-02 11:47:22.345823] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.052 [2024-11-02 11:47:22.345836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:22.052 [2024-11-02 11:47:22.345864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.052 qpair failed and we were unable to recover it. 
00:35:22.052 [2024-11-02 11:47:22.355606] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.052 [2024-11-02 11:47:22.355728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.052 [2024-11-02 11:47:22.355753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.052 [2024-11-02 11:47:22.355767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.052 [2024-11-02 11:47:22.355780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:22.052 [2024-11-02 11:47:22.355809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.052 qpair failed and we were unable to recover it. 00:35:22.052 [2024-11-02 11:47:22.365710] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.052 [2024-11-02 11:47:22.365838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.052 [2024-11-02 11:47:22.365865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.052 [2024-11-02 11:47:22.365880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.052 [2024-11-02 11:47:22.365897] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:22.052 [2024-11-02 11:47:22.365927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.052 qpair failed and we were unable to recover it. 00:35:22.052 [2024-11-02 11:47:22.375661] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.052 [2024-11-02 11:47:22.375786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.052 [2024-11-02 11:47:22.375812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.052 [2024-11-02 11:47:22.375833] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.052 [2024-11-02 11:47:22.375847] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:22.052 [2024-11-02 11:47:22.375876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.052 qpair failed and we were unable to recover it. 
00:35:22.052 [2024-11-02 11:47:22.385682] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.052 [2024-11-02 11:47:22.385805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.052 [2024-11-02 11:47:22.385831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.052 [2024-11-02 11:47:22.385846] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.052 [2024-11-02 11:47:22.385858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:22.052 [2024-11-02 11:47:22.385887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.052 qpair failed and we were unable to recover it. 00:35:22.052 [2024-11-02 11:47:22.395692] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.052 [2024-11-02 11:47:22.395810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.052 [2024-11-02 11:47:22.395836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.053 [2024-11-02 11:47:22.395850] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.053 [2024-11-02 11:47:22.395863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:22.053 [2024-11-02 11:47:22.395891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.053 qpair failed and we were unable to recover it. 00:35:22.053 [2024-11-02 11:47:22.405739] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.053 [2024-11-02 11:47:22.405865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.053 [2024-11-02 11:47:22.405891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.053 [2024-11-02 11:47:22.405906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.053 [2024-11-02 11:47:22.405919] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:22.053 [2024-11-02 11:47:22.405947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.053 qpair failed and we were unable to recover it. 
00:35:22.053 [2024-11-02 11:47:22.415802] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.053 [2024-11-02 11:47:22.415922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.053 [2024-11-02 11:47:22.415948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.053 [2024-11-02 11:47:22.415962] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.053 [2024-11-02 11:47:22.415975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:22.053 [2024-11-02 11:47:22.416013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.053 qpair failed and we were unable to recover it. 00:35:22.053 [2024-11-02 11:47:22.425841] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.053 [2024-11-02 11:47:22.425969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.053 [2024-11-02 11:47:22.425994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.053 [2024-11-02 11:47:22.426009] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.053 [2024-11-02 11:47:22.426022] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:22.053 [2024-11-02 11:47:22.426050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.053 qpair failed and we were unable to recover it. 00:35:22.053 [2024-11-02 11:47:22.435823] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.053 [2024-11-02 11:47:22.435980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.053 [2024-11-02 11:47:22.436006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.053 [2024-11-02 11:47:22.436020] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.053 [2024-11-02 11:47:22.436033] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:22.053 [2024-11-02 11:47:22.436061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.053 qpair failed and we were unable to recover it. 
00:35:22.053 [2024-11-02 11:47:22.445842] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.053 [2024-11-02 11:47:22.445971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.053 [2024-11-02 11:47:22.445997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.053 [2024-11-02 11:47:22.446011] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.053 [2024-11-02 11:47:22.446023] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:22.053 [2024-11-02 11:47:22.446050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.053 qpair failed and we were unable to recover it. 00:35:22.312 [2024-11-02 11:47:22.455921] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.312 [2024-11-02 11:47:22.456038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.312 [2024-11-02 11:47:22.456065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.312 [2024-11-02 11:47:22.456080] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.312 [2024-11-02 11:47:22.456093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:22.312 [2024-11-02 11:47:22.456122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.312 qpair failed and we were unable to recover it. 00:35:22.313 [2024-11-02 11:47:22.465912] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.313 [2024-11-02 11:47:22.466039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.313 [2024-11-02 11:47:22.466066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.313 [2024-11-02 11:47:22.466081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.313 [2024-11-02 11:47:22.466094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:22.313 [2024-11-02 11:47:22.466123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.313 qpair failed and we were unable to recover it. 
00:35:22.313 [2024-11-02 11:47:22.475967] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.313 [2024-11-02 11:47:22.476096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.313 [2024-11-02 11:47:22.476122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.313 [2024-11-02 11:47:22.476136] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.313 [2024-11-02 11:47:22.476149] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:22.313 [2024-11-02 11:47:22.476177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.313 qpair failed and we were unable to recover it. 00:35:22.313 [2024-11-02 11:47:22.485992] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.313 [2024-11-02 11:47:22.486122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.313 [2024-11-02 11:47:22.486147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.313 [2024-11-02 11:47:22.486160] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.313 [2024-11-02 11:47:22.486172] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:22.313 [2024-11-02 11:47:22.486200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.313 qpair failed and we were unable to recover it. 00:35:22.313 [2024-11-02 11:47:22.496033] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.313 [2024-11-02 11:47:22.496152] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.313 [2024-11-02 11:47:22.496178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.313 [2024-11-02 11:47:22.496192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.313 [2024-11-02 11:47:22.496205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:22.313 [2024-11-02 11:47:22.496233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.313 qpair failed and we were unable to recover it. 
00:35:22.313 [2024-11-02 11:47:22.506065] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.313 [2024-11-02 11:47:22.506188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.313 [2024-11-02 11:47:22.506219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.313 [2024-11-02 11:47:22.506234] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.313 [2024-11-02 11:47:22.506247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:22.313 [2024-11-02 11:47:22.506284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.313 qpair failed and we were unable to recover it. 00:35:22.313 [2024-11-02 11:47:22.516084] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.313 [2024-11-02 11:47:22.516212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.313 [2024-11-02 11:47:22.516238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.313 [2024-11-02 11:47:22.516252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.313 [2024-11-02 11:47:22.516272] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:22.313 [2024-11-02 11:47:22.516301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.313 qpair failed and we were unable to recover it. 00:35:22.313 [2024-11-02 11:47:22.526097] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.313 [2024-11-02 11:47:22.526237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.313 [2024-11-02 11:47:22.526270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.313 [2024-11-02 11:47:22.526285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.313 [2024-11-02 11:47:22.526299] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:22.313 [2024-11-02 11:47:22.526327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.313 qpair failed and we were unable to recover it. 
00:35:22.313 [2024-11-02 11:47:22.536152] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.313 [2024-11-02 11:47:22.536282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.313 [2024-11-02 11:47:22.536308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.313 [2024-11-02 11:47:22.536322] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.313 [2024-11-02 11:47:22.536335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:22.313 [2024-11-02 11:47:22.536363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.313 qpair failed and we were unable to recover it. 00:35:22.313 [2024-11-02 11:47:22.546161] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.313 [2024-11-02 11:47:22.546296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.313 [2024-11-02 11:47:22.546322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.313 [2024-11-02 11:47:22.546336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.313 [2024-11-02 11:47:22.546349] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:22.313 [2024-11-02 11:47:22.546383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.313 qpair failed and we were unable to recover it. 00:35:22.313 [2024-11-02 11:47:22.556205] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.313 [2024-11-02 11:47:22.556324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.313 [2024-11-02 11:47:22.556349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.313 [2024-11-02 11:47:22.556363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.313 [2024-11-02 11:47:22.556378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:22.313 [2024-11-02 11:47:22.556406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.313 qpair failed and we were unable to recover it. 
00:35:22.313 [2024-11-02 11:47:22.566238] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.313 [2024-11-02 11:47:22.566405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.313 [2024-11-02 11:47:22.566431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.313 [2024-11-02 11:47:22.566444] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.313 [2024-11-02 11:47:22.566457] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:22.313 [2024-11-02 11:47:22.566486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.313 qpair failed and we were unable to recover it. 00:35:22.313 [2024-11-02 11:47:22.576215] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.313 [2024-11-02 11:47:22.576339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.313 [2024-11-02 11:47:22.576365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.313 [2024-11-02 11:47:22.576379] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.313 [2024-11-02 11:47:22.576392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:22.313 [2024-11-02 11:47:22.576421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.313 qpair failed and we were unable to recover it. 00:35:22.313 [2024-11-02 11:47:22.586303] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.313 [2024-11-02 11:47:22.586468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.313 [2024-11-02 11:47:22.586493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.313 [2024-11-02 11:47:22.586507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.313 [2024-11-02 11:47:22.586520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:22.313 [2024-11-02 11:47:22.586548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.313 qpair failed and we were unable to recover it. 
00:35:22.313 [2024-11-02 11:47:22.596287] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.314 [2024-11-02 11:47:22.596406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.314 [2024-11-02 11:47:22.596432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.314 [2024-11-02 11:47:22.596445] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.314 [2024-11-02 11:47:22.596459] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:22.314 [2024-11-02 11:47:22.596488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.314 qpair failed and we were unable to recover it. 00:35:22.314 [2024-11-02 11:47:22.606334] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.314 [2024-11-02 11:47:22.606456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.314 [2024-11-02 11:47:22.606482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.314 [2024-11-02 11:47:22.606496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.314 [2024-11-02 11:47:22.606509] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:22.314 [2024-11-02 11:47:22.606537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.314 qpair failed and we were unable to recover it. 00:35:22.314 [2024-11-02 11:47:22.616342] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.314 [2024-11-02 11:47:22.616456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.314 [2024-11-02 11:47:22.616481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.314 [2024-11-02 11:47:22.616495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.314 [2024-11-02 11:47:22.616508] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:22.314 [2024-11-02 11:47:22.616536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.314 qpair failed and we were unable to recover it. 
00:35:22.314 [2024-11-02 11:47:22.626384] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.314 [2024-11-02 11:47:22.626508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.314 [2024-11-02 11:47:22.626533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.314 [2024-11-02 11:47:22.626547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.314 [2024-11-02 11:47:22.626560] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:22.314 [2024-11-02 11:47:22.626588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.314 qpair failed and we were unable to recover it. 00:35:22.314 [2024-11-02 11:47:22.636411] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.314 [2024-11-02 11:47:22.636540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.314 [2024-11-02 11:47:22.636569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.314 [2024-11-02 11:47:22.636584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.314 [2024-11-02 11:47:22.636597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:22.314 [2024-11-02 11:47:22.636625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.314 qpair failed and we were unable to recover it. 00:35:22.314 [2024-11-02 11:47:22.646411] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.314 [2024-11-02 11:47:22.646543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.314 [2024-11-02 11:47:22.646569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.314 [2024-11-02 11:47:22.646583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.314 [2024-11-02 11:47:22.646596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:22.314 [2024-11-02 11:47:22.646623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.314 qpair failed and we were unable to recover it. 
00:35:22.314 [2024-11-02 11:47:22.656442] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.314 [2024-11-02 11:47:22.656556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.314 [2024-11-02 11:47:22.656581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.314 [2024-11-02 11:47:22.656595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.314 [2024-11-02 11:47:22.656607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:22.314 [2024-11-02 11:47:22.656635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.314 qpair failed and we were unable to recover it. 00:35:22.314 [2024-11-02 11:47:22.666491] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.314 [2024-11-02 11:47:22.666665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.314 [2024-11-02 11:47:22.666692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.314 [2024-11-02 11:47:22.666713] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.314 [2024-11-02 11:47:22.666727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ddc690 00:35:22.314 [2024-11-02 11:47:22.666756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.314 qpair failed and we were unable to recover it. 00:35:22.314 [2024-11-02 11:47:22.676505] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.314 [2024-11-02 11:47:22.676621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.314 [2024-11-02 11:47:22.676654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.314 [2024-11-02 11:47:22.676670] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.314 [2024-11-02 11:47:22.676684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9ccc000b90 00:35:22.314 [2024-11-02 11:47:22.676722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:22.314 qpair failed and we were unable to recover it. 
00:35:22.314 [2024-11-02 11:47:22.686522] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.314 [2024-11-02 11:47:22.686654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.314 [2024-11-02 11:47:22.686682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.314 [2024-11-02 11:47:22.686697] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.314 [2024-11-02 11:47:22.686711] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9ccc000b90 00:35:22.314 [2024-11-02 11:47:22.686741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:22.314 qpair failed and we were unable to recover it. 00:35:22.314 [2024-11-02 11:47:22.696563] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.314 [2024-11-02 11:47:22.696682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.314 [2024-11-02 11:47:22.696715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.314 [2024-11-02 11:47:22.696731] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.314 [2024-11-02 11:47:22.696744] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cc0000b90 00:35:22.314 [2024-11-02 11:47:22.696776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:22.314 qpair failed and we were unable to recover it. 00:35:22.314 [2024-11-02 11:47:22.706625] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.314 [2024-11-02 11:47:22.706795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.314 [2024-11-02 11:47:22.706823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.314 [2024-11-02 11:47:22.706837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.314 [2024-11-02 11:47:22.706851] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cc0000b90 00:35:22.314 [2024-11-02 11:47:22.706882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:22.314 qpair failed and we were unable to recover it. 
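The blocks above, and the two more that follow before the reset, are the intended failure signature of this disconnect test: the target has already torn the controller down, so each attempt to add an I/O qpair is rejected on the target side with "Unknown controller ID 0x1", and on the host side the fabrics CONNECT for that qpair completes with an error (sct 1, sc 130) until the host gives up and resets the controller. As a hypothetical manual check against the same listener (these commands are not part of the harness; they assume nvme-cli and the kernel nvme-tcp driver are available), a connect should only succeed while the target still exposes the subsystem:

  # Try a fabrics connect to the listener exercised above; the address, port and
  # subsystem NQN are taken from the log, the commands themselves are illustrative.
  modprobe nvme-tcp
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
      && echo "connected" || echo "connect failed with status $?"
  # Tear the kernel-side connection down again if it came up.
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
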
00:35:22.573 [2024-11-02 11:47:22.716629] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:22.573 [2024-11-02 11:47:22.716769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:22.573 [2024-11-02 11:47:22.716804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:22.573 [2024-11-02 11:47:22.716821] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:22.573 [2024-11-02 11:47:22.716835] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cc4000b90
00:35:22.573 [2024-11-02 11:47:22.716867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:22.573 qpair failed and we were unable to recover it.
00:35:22.573 [2024-11-02 11:47:22.726670] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:22.573 [2024-11-02 11:47:22.726847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:22.573 [2024-11-02 11:47:22.726875] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:22.573 [2024-11-02 11:47:22.726890] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:22.573 [2024-11-02 11:47:22.726903] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cc4000b90
00:35:22.573 [2024-11-02 11:47:22.726935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:22.573 qpair failed and we were unable to recover it.
00:35:22.573 [2024-11-02 11:47:22.727029] nvme_ctrlr.c:4482:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed
00:35:22.573 A controller has encountered a failure and is being reset.
00:35:22.573 Controller properly reset.
00:35:22.573 Initializing NVMe Controllers
00:35:22.573 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:35:22.573 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:35:22.573 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:35:22.573 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:35:22.573 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:35:22.573 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:35:22.573 Initialization complete. Launching workers.
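With the controller reset, the I/O application re-attaches to the same listener, which is why the "Attaching to NVMe over Fabrics controller at 10.0.0.2:4420" lines reappear and worker threads are launched on all four lcores. Outside this harness, the equivalent attach can be driven against a running SPDK target over its RPC socket; a minimal sketch, assuming SPDK's scripts/rpc.py with its usual bdev_nvme_attach_controller options and an arbitrary local name Nvme0 (option spelling should be checked against the SPDK revision under test):

  # Attach a bdev-layer NVMe-oF controller to the listener shown in the log:
  # -b names the local controller, -t/-f give transport and address family,
  # -a/-s/-n repeat the traddr, trsvcid and subsystem NQN from the log.
  ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  # Detach again when finished.
  ./scripts/rpc.py bdev_nvme_detach_controller Nvme0
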
00:35:22.573 Starting thread on core 1 00:35:22.573 Starting thread on core 2 00:35:22.573 Starting thread on core 3 00:35:22.573 Starting thread on core 0 00:35:22.573 11:47:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:35:22.573 00:35:22.573 real 0m10.786s 00:35:22.573 user 0m18.722s 00:35:22.573 sys 0m5.176s 00:35:22.573 11:47:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:22.573 11:47:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:22.573 ************************************ 00:35:22.573 END TEST nvmf_target_disconnect_tc2 00:35:22.573 ************************************ 00:35:22.573 11:47:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:35:22.573 11:47:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:35:22.573 11:47:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:35:22.573 11:47:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:22.573 11:47:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:35:22.573 11:47:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:22.573 11:47:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:35:22.573 11:47:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:22.573 11:47:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:22.573 rmmod nvme_tcp 00:35:22.573 rmmod nvme_fabrics 00:35:22.573 rmmod nvme_keyring 00:35:22.573 11:47:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:22.573 11:47:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:35:22.573 11:47:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:35:22.573 11:47:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 3982127 ']' 00:35:22.573 11:47:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 3982127 00:35:22.573 11:47:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # '[' -z 3982127 ']' 00:35:22.573 11:47:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # kill -0 3982127 00:35:22.573 11:47:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # uname 00:35:22.573 11:47:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:22.573 11:47:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3982127 00:35:22.573 11:47:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # process_name=reactor_4 00:35:22.573 11:47:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@962 -- # '[' reactor_4 = sudo ']' 00:35:22.573 11:47:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3982127' 00:35:22.573 killing process with pid 3982127 00:35:22.573 11:47:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@971 -- # kill 3982127 00:35:22.573 11:47:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@976 -- # wait 3982127 00:35:22.833 11:47:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:22.833 11:47:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:22.833 11:47:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:22.833 11:47:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:35:22.833 11:47:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:35:22.833 11:47:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:35:22.833 11:47:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:22.833 11:47:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:22.833 11:47:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:22.833 11:47:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:22.833 11:47:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:22.833 11:47:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:25.370 11:47:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:25.370 00:35:25.370 real 0m15.656s 00:35:25.370 user 0m45.100s 00:35:25.370 sys 0m7.194s 00:35:25.370 11:47:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:25.370 11:47:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:25.370 ************************************ 00:35:25.370 END TEST nvmf_target_disconnect 00:35:25.370 ************************************ 00:35:25.370 11:47:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:35:25.370 00:35:25.370 real 6m42.726s 00:35:25.370 user 17m9.997s 00:35:25.370 sys 1m26.079s 00:35:25.370 11:47:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:25.370 11:47:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.370 ************************************ 00:35:25.370 END TEST nvmf_host 00:35:25.370 ************************************ 00:35:25.370 11:47:25 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:35:25.370 11:47:25 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:35:25.370 11:47:25 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:35:25.370 11:47:25 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:35:25.370 11:47:25 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:25.370 11:47:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:25.370 ************************************ 00:35:25.370 START TEST nvmf_target_core_interrupt_mode 00:35:25.370 ************************************ 00:35:25.370 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1127 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:35:25.370 * Looking for test storage... 00:35:25.370 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:35:25.370 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:25.370 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lcov --version 00:35:25.370 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:25.370 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:25.370 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:25.370 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:25.370 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:25.370 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:35:25.370 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:35:25.370 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:35:25.370 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:35:25.370 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:35:25.370 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:35:25.370 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:35:25.370 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:25.370 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:35:25.370 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:35:25.370 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:25.370 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:25.370 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:35:25.370 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:35:25.370 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:25.370 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:35:25.370 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:35:25.370 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:35:25.370 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:35:25.371 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:25.371 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:35:25.371 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:35:25.371 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:25.371 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:25.371 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:35:25.371 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:25.371 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:25.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:25.371 --rc genhtml_branch_coverage=1 00:35:25.371 --rc genhtml_function_coverage=1 00:35:25.371 --rc genhtml_legend=1 00:35:25.371 --rc geninfo_all_blocks=1 00:35:25.371 --rc geninfo_unexecuted_blocks=1 00:35:25.371 00:35:25.371 ' 00:35:25.371 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:25.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:25.371 --rc genhtml_branch_coverage=1 00:35:25.371 --rc genhtml_function_coverage=1 00:35:25.371 --rc genhtml_legend=1 00:35:25.371 --rc geninfo_all_blocks=1 00:35:25.371 --rc geninfo_unexecuted_blocks=1 00:35:25.371 00:35:25.371 ' 00:35:25.371 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:25.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:25.371 --rc genhtml_branch_coverage=1 00:35:25.371 --rc genhtml_function_coverage=1 00:35:25.371 --rc genhtml_legend=1 00:35:25.371 --rc geninfo_all_blocks=1 00:35:25.371 --rc geninfo_unexecuted_blocks=1 00:35:25.371 00:35:25.371 ' 00:35:25.371 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:25.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:25.371 --rc genhtml_branch_coverage=1 00:35:25.371 --rc genhtml_function_coverage=1 00:35:25.371 --rc genhtml_legend=1 00:35:25.371 --rc geninfo_all_blocks=1 00:35:25.371 --rc geninfo_unexecuted_blocks=1 00:35:25.371 00:35:25.371 ' 00:35:25.371 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:35:25.371 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:35:25.371 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:25.371 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:35:25.371 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:25.371 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:25.371 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:25.371 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:25.371 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:25.371 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:25.371 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:25.371 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:25.371 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:25.371 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:25.371 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:25.371 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:25.371 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:25.371 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:25.371 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:25.371 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:25.371 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:25.371 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:35:25.371 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:25.371 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:25.371 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:25.371 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:25.371 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:25.371 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:25.371 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:35:25.371 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:25.371 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:35:25.371 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:25.371 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:25.371 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:25.371 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:25.371 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:25.371 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:25.371 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:25.371 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:25.371 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:25.371 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:25.371 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:35:25.371 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:35:25.371 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:35:25.371 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:35:25.371 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:35:25.371 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:25.371 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:25.371 ************************************ 00:35:25.371 START TEST nvmf_abort 00:35:25.371 ************************************ 00:35:25.371 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:35:25.371 * Looking for test storage... 00:35:25.371 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:25.371 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:25.371 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:35:25.371 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:25.371 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:25.371 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:25.371 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:25.371 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:25.371 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:35:25.371 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:35:25.371 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:35:25.371 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:35:25.371 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:35:25.371 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:35:25.371 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:35:25.371 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:25.371 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:35:25.371 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:35:25.371 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:25.371 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:25.372 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:35:25.372 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:35:25.372 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:25.372 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:35:25.372 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:35:25.372 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:35:25.372 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:35:25.372 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:25.372 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:35:25.372 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:35:25.372 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:25.372 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:25.372 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:35:25.372 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:25.372 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:25.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:25.372 --rc genhtml_branch_coverage=1 00:35:25.372 --rc genhtml_function_coverage=1 00:35:25.372 --rc genhtml_legend=1 00:35:25.372 --rc geninfo_all_blocks=1 00:35:25.372 --rc geninfo_unexecuted_blocks=1 00:35:25.372 00:35:25.372 ' 00:35:25.372 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:25.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:25.372 --rc genhtml_branch_coverage=1 00:35:25.372 --rc genhtml_function_coverage=1 00:35:25.372 --rc genhtml_legend=1 00:35:25.372 --rc geninfo_all_blocks=1 00:35:25.372 --rc geninfo_unexecuted_blocks=1 00:35:25.372 00:35:25.372 ' 00:35:25.372 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:25.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:25.372 --rc genhtml_branch_coverage=1 00:35:25.372 --rc genhtml_function_coverage=1 00:35:25.372 --rc genhtml_legend=1 00:35:25.372 --rc geninfo_all_blocks=1 00:35:25.372 --rc geninfo_unexecuted_blocks=1 00:35:25.372 00:35:25.372 ' 00:35:25.372 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:25.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:25.372 --rc genhtml_branch_coverage=1 00:35:25.372 --rc genhtml_function_coverage=1 00:35:25.372 --rc genhtml_legend=1 00:35:25.372 --rc geninfo_all_blocks=1 00:35:25.372 --rc geninfo_unexecuted_blocks=1 00:35:25.372 00:35:25.372 ' 00:35:25.372 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:25.372 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:35:25.372 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:25.372 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:25.372 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:25.372 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:25.372 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:25.372 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:25.372 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:25.372 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:25.372 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:25.372 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:25.372 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:25.372 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:25.372 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:25.372 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:25.372 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:25.372 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:25.372 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:25.372 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:35:25.372 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:25.372 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:25.372 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:25.372 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:25.372 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:25.372 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:25.372 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:35:25.372 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:25.372 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:35:25.372 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:25.372 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:25.372 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:25.372 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:25.372 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:25.372 11:47:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:25.372 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:25.372 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:25.372 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:25.372 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:25.372 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:25.372 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:35:25.372 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:35:25.372 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:25.372 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:25.372 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:25.372 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:25.372 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:25.372 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:25.372 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:25.372 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:25.372 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:25.372 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:25.372 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:35:25.372 11:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:27.275 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:27.275 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:35:27.275 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:27.275 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:27.275 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:27.275 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:27.275 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:27.275 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:35:27.275 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:27.275 11:47:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:35:27.275 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:35:27.275 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:35:27.275 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:35:27.275 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:35:27.275 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:35:27.275 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:27.275 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:27.275 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:27.276 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:27.276 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:27.276 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:27.276 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:27.276 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:27.276 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:27.276 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:27.276 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:27.276 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:27.276 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:27.276 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:27.276 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:27.276 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:27.276 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:27.276 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:27.276 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:27.276 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:27.276 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:27.276 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
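The xtrace above is the harness's NIC scan getting under way for the next test: it matches the supported NVMe-oF-capable PCI IDs (here the Intel E810, vendor 0x8086 device 0x159b, bound to the ice driver) and, as the lines that follow show, then collects the kernel net interfaces exposed under each matching PCI function. A condensed illustration of the same idea in plain shell, using only sysfs (this is a sketch, not the harness's gather_supported_nvmf_pci_devs helper):

  # Walk all PCI functions, keep Intel 0x159b (E810) ones, and print the kernel
  # net interfaces bound to each of them, mirroring the "Found net devices
  # under ..." lines in the log.
  for pci in /sys/bus/pci/devices/*; do
      vendor=$(cat "$pci/vendor")    # e.g. 0x8086
      device=$(cat "$pci/device")    # e.g. 0x159b
      if [ "$vendor" = "0x8086" ] && [ "$device" = "0x159b" ]; then
          for net in "$pci"/net/*; do
              [ -e "$net" ] && echo "Found net device under ${pci##*/}: $(basename "$net")"
          done
      fi
  done
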
00:35:27.276 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:27.276 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:27.276 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:27.276 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:27.276 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:27.276 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:27.276 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:27.276 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:27.276 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:27.276 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:27.276 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:27.276 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:27.276 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:27.276 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:27.276 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:27.276 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:27.276 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:27.276 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:27.276 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:27.276 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:27.276 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:27.276 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:27.276 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:27.276 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:27.276 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:27.276 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:27.276 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:27.276 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:27.276 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:35:27.276 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:27.276 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:27.276 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:27.276 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:27.276 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:27.276 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:27.276 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:27.276 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:35:27.276 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:27.276 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:27.276 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:27.276 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:27.276 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:27.276 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:27.276 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:27.276 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:27.276 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:27.276 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:27.276 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:27.276 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:27.276 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:27.276 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:27.276 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:27.276 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:27.276 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:27.276 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:27.276 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:27.276 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:27.276 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:27.276 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:27.276 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:27.276 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:27.276 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:27.536 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:27.536 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:27.536 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.281 ms 00:35:27.536 00:35:27.536 --- 10.0.0.2 ping statistics --- 00:35:27.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:27.536 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:35:27.536 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:27.536 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:27.536 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:35:27.536 00:35:27.536 --- 10.0.0.1 ping statistics --- 00:35:27.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:27.536 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:35:27.536 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:27.536 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:35:27.536 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:27.536 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:27.536 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:27.536 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:27.536 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:27.536 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:27.536 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:27.536 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:35:27.536 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:27.536 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:27.536 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:27.536 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=3984931 00:35:27.536 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:35:27.536 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3984931 00:35:27.536 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@833 -- # '[' -z 3984931 ']' 00:35:27.536 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:27.536 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:27.536 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:27.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:27.536 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:27.536 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:27.536 [2024-11-02 11:47:27.762002] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:27.536 [2024-11-02 11:47:27.763092] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:35:27.536 [2024-11-02 11:47:27.763144] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:27.536 [2024-11-02 11:47:27.843638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:27.536 [2024-11-02 11:47:27.896047] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:27.536 [2024-11-02 11:47:27.896118] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:27.536 [2024-11-02 11:47:27.896135] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:27.536 [2024-11-02 11:47:27.896148] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:27.536 [2024-11-02 11:47:27.896159] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:27.536 [2024-11-02 11:47:27.897805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:27.536 [2024-11-02 11:47:27.897924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:27.536 [2024-11-02 11:47:27.897927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:27.796 [2024-11-02 11:47:27.991164] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:27.796 [2024-11-02 11:47:27.991398] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:27.796 [2024-11-02 11:47:27.991419] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
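The trace above is nvmf_tcp_init and nvmfappstart from nvmf/common.sh at work: the target-side ice port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace, both ends are addressed, the NVMe/TCP port is opened in iptables, connectivity is ping-checked in both directions, and nvmf_tgt is launched inside the namespace in interrupt mode with core mask 0xE. A stand-alone sketch of that same sequence, built only from commands that appear in this run (interface names, addresses and paths copied from the log), looks roughly like this:

```bash
#!/usr/bin/env bash
# Sketch of the test-bed setup traced above (nvmf_tcp_init + nvmfappstart).
# Interface names, addresses and paths are copied from this run; adjust elsewhere.
set -euo pipefail

NS=cvl_0_0_ns_spdk              # namespace that holds the target-side port
TGT_IF=cvl_0_0 INI_IF=cvl_0_1   # the two ice ports found under 0000:0a:00.0 / .1
TGT_IP=10.0.0.2 INI_IP=10.0.0.1

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"                  # move the target port into the namespace
ip addr add "$INI_IP/24" dev "$INI_IF"             # initiator side stays in the root namespace
ip netns exec "$NS" ip addr add "$TGT_IP/24" dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP port on the initiator-facing interface, tagged for later cleanup.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 "$TGT_IP"                                # root namespace -> target namespace
ip netns exec "$NS" ping -c 1 "$INI_IP"            # target namespace -> root namespace

# Start the target inside the namespace, interrupt mode, core mask 0xE (as in the log).
ip netns exec "$NS" /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
```

The iptables comment tag is what the iptr/ipts helpers grep for during teardown later in the log, which is why the rule is inserted with that exact comment string.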
00:35:27.796 [2024-11-02 11:47:27.991688] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:35:27.796 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:27.796 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@866 -- # return 0 00:35:27.796 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:27.796 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:27.796 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:27.796 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:27.796 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:35:27.796 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:27.796 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:27.796 [2024-11-02 11:47:28.046694] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:27.796 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:27.796 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:35:27.796 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:27.796 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:27.796 Malloc0 00:35:27.796 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:27.796 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:35:27.796 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:27.796 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:27.796 Delay0 00:35:27.797 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:27.797 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:35:27.797 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:27.797 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:27.797 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:27.797 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:35:27.797 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 
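At this point abort.sh has configured the target entirely over JSON-RPC: a TCP transport, a 64 MB Malloc bdev wrapped in a Delay0 bdev that injects large artificial read/write latency (so submitted I/O stays queued long enough to be aborted), and subsystem nqn.2016-06.io.spdk:cnode0 with Delay0 attached as its namespace; the 10.0.0.2:4420 listener is added in the next step of the trace. Assuming rpc_cmd is the usual thin wrapper that forwards to scripts/rpc.py on the target's default RPC socket, the equivalent explicit calls would be approximately:

```bash
# Explicit equivalents of the rpc_cmd calls traced above. Assumption: rpc_cmd forwards
# to scripts/rpc.py against the target's default RPC socket (/var/tmp/spdk.sock).
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode0

$RPC nvmf_create_transport -t tcp -o -u 8192 -a 256   # transport options exactly as passed by abort.sh
$RPC bdev_malloc_create 64 4096 -b Malloc0            # 64 MB malloc ramdisk, 4096-byte blocks
$RPC bdev_delay_create -b Malloc0 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000       # large injected latency keeps I/O queued
$RPC nvmf_create_subsystem "$NQN" -a -s SPDK0         # allow any host, serial number SPDK0
$RPC nvmf_subsystem_add_ns "$NQN" Delay0              # expose Delay0 as the subsystem's namespace
# (the 10.0.0.2:4420 TCP listeners are added in the next step of the trace)
```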
00:35:27.797 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:27.797 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:27.797 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:27.797 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:27.797 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:27.797 [2024-11-02 11:47:28.118909] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:27.797 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:27.797 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:27.797 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:27.797 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:27.797 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:27.797 11:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:35:27.797 [2024-11-02 11:47:28.188311] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:35:30.335 Initializing NVMe Controllers 00:35:30.335 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:35:30.335 controller IO queue size 128 less than required 00:35:30.335 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:35:30.335 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:35:30.335 Initialization complete. Launching workers. 
00:35:30.335 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 29219 00:35:30.335 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 29276, failed to submit 66 00:35:30.335 success 29219, unsuccessful 57, failed 0 00:35:30.335 11:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:30.335 11:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:30.335 11:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:30.335 11:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:30.335 11:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:35:30.335 11:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:35:30.335 11:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:30.335 11:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:35:30.335 11:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:30.335 11:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:35:30.335 11:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:30.335 11:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:30.335 rmmod nvme_tcp 00:35:30.335 rmmod nvme_fabrics 00:35:30.335 rmmod nvme_keyring 00:35:30.335 11:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:30.335 11:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:35:30.335 11:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:35:30.335 11:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3984931 ']' 00:35:30.335 11:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3984931 00:35:30.335 11:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@952 -- # '[' -z 3984931 ']' 00:35:30.335 11:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # kill -0 3984931 00:35:30.335 11:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@957 -- # uname 00:35:30.335 11:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:30.335 11:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3984931 00:35:30.335 11:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:35:30.335 11:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:35:30.335 11:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3984931' 00:35:30.335 killing process with pid 3984931 
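The counters printed above are what the abort test is checking: with Delay0 holding commands in flight, nearly every submitted I/O is still outstanding when its abort arrives, so of 29276 aborts submitted only 57 were unsuccessful and none failed outright, and the 29219 I/Os reported as "failed" line up with the 29219 successful aborts, which suggests they were ended by abort rather than completing (only 123 completed normally). The invocation that produced this output is visible at target/abort.sh@30 above; the per-flag notes below are conventional SPDK example-app semantics and should be read as assumptions, not something the log itself states:

```bash
# Invocation used above (target/abort.sh@30). Flag notes are assumptions based on the
# usual SPDK example-app conventions:
#   -r  transport ID of the listener created above    -c  core mask (one core)
#   -t  run time in seconds                           -l  log level
#   -q  queue depth (cf. the "controller IO queue size 128" notice in the output)
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -c 0x1 -t 1 -l warning -q 128
```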
00:35:30.335 11:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@971 -- # kill 3984931 00:35:30.335 11:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@976 -- # wait 3984931 00:35:30.335 11:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:30.335 11:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:30.336 11:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:30.336 11:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:35:30.336 11:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:35:30.336 11:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:30.336 11:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:35:30.336 11:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:30.336 11:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:30.336 11:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:30.336 11:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:30.336 11:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:32.875 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:32.875 00:35:32.875 real 0m7.358s 00:35:32.875 user 0m9.532s 00:35:32.875 sys 0m2.870s 00:35:32.875 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:32.875 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:32.875 ************************************ 00:35:32.875 END TEST nvmf_abort 00:35:32.875 ************************************ 00:35:32.875 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:35:32.875 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:35:32.875 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:32.875 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:32.875 ************************************ 00:35:32.875 START TEST nvmf_ns_hotplug_stress 00:35:32.875 ************************************ 00:35:32.875 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:35:32.875 * Looking for test storage... 
00:35:32.875 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:32.875 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:32.875 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:35:32.875 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:32.875 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:32.875 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:32.875 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:32.875 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:32.875 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:35:32.875 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:35:32.875 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:35:32.875 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:35:32.875 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:35:32.875 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:35:32.875 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:35:32.875 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:32.875 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:35:32.875 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:35:32.875 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:32.875 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:32.875 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:35:32.875 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:35:32.875 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:32.875 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:35:32.875 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:35:32.875 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:35:32.875 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:35:32.875 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:32.875 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:35:32.875 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:35:32.875 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:32.876 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:32.876 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:35:32.876 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:32.876 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:32.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:32.876 --rc genhtml_branch_coverage=1 00:35:32.876 --rc genhtml_function_coverage=1 00:35:32.876 --rc genhtml_legend=1 00:35:32.876 --rc geninfo_all_blocks=1 00:35:32.876 --rc geninfo_unexecuted_blocks=1 00:35:32.876 00:35:32.876 ' 00:35:32.876 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:32.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:32.876 --rc genhtml_branch_coverage=1 00:35:32.876 --rc genhtml_function_coverage=1 00:35:32.876 --rc genhtml_legend=1 00:35:32.876 --rc geninfo_all_blocks=1 00:35:32.876 --rc geninfo_unexecuted_blocks=1 00:35:32.876 00:35:32.876 ' 00:35:32.876 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:32.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:32.876 --rc genhtml_branch_coverage=1 00:35:32.876 --rc genhtml_function_coverage=1 00:35:32.876 --rc genhtml_legend=1 00:35:32.876 --rc geninfo_all_blocks=1 00:35:32.876 --rc geninfo_unexecuted_blocks=1 00:35:32.876 00:35:32.876 ' 00:35:32.876 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:32.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:32.876 --rc genhtml_branch_coverage=1 00:35:32.876 --rc genhtml_function_coverage=1 
00:35:32.876 --rc genhtml_legend=1 00:35:32.876 --rc geninfo_all_blocks=1 00:35:32.876 --rc geninfo_unexecuted_blocks=1 00:35:32.876 00:35:32.876 ' 00:35:32.876 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:32.876 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:35:32.876 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:32.876 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:32.876 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:32.876 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:32.876 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:32.876 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:32.876 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:32.876 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:32.876 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:32.876 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:32.876 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:32.876 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:32.876 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:32.876 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:32.876 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:32.876 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:32.876 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:32.876 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:35:32.876 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:32.876 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:32.876 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:35:32.876 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:32.876 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:32.876 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:32.876 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:35:32.876 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:32.876 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:35:32.876 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:32.876 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:32.876 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:32.876 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:32.876 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:32.876 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:32.876 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:32.876 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:32.876 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:32.876 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:32.876 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:32.876 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:35:32.876 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:32.876 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:32.876 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:32.876 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:32.876 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:32.876 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:32.876 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:32.876 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:32.876 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:32.876 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:32.876 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:35:32.876 11:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:35:34.778 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:34.778 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:35:34.778 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:34.778 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:34.778 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:34.778 11:47:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:34.778 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:34.778 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:35:34.778 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:34.778 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:35:34.778 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:35:34.778 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:35:34.778 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:35:34.778 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:35:34.778 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:35:34.778 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:34.778 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:34.778 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:34.778 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:34.778 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:34.778 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:34.778 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:34.778 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:34.778 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:34.778 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:34.778 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:34.778 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:34.778 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:34.778 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:34.778 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:34.778 11:47:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:34.778 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:34.778 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:34.778 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:34.778 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:34.778 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:34.778 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:34.778 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:34.778 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:34.778 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:34.778 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:34.778 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:34.778 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:34.778 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:34.778 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:34.778 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:34.778 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:34.778 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:34.778 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:34.778 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:34.778 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:34.778 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:34.778 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:34.778 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:34.778 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:34.778 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:34.778 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:34.778 
11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:34.778 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:34.778 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:34.778 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:34.778 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:34.778 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:34.778 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:34.778 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:34.778 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:34.778 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:34.778 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:34.778 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:34.778 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:34.778 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:34.778 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:34.778 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:34.778 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:35:34.778 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:34.778 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:34.778 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:34.778 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:34.778 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:34.778 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:34.778 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:34.778 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:34.778 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:34.779 11:47:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:34.779 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:34.779 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:34.779 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:34.779 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:34.779 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:34.779 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:34.779 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:34.779 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:34.779 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:34.779 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:34.779 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:34.779 11:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:34.779 11:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:34.779 11:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:34.779 11:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:34.779 11:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:34.779 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:34.779 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.144 ms 00:35:34.779 00:35:34.779 --- 10.0.0.2 ping statistics --- 00:35:34.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:34.779 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:35:34.779 11:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:34.779 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:34.779 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:35:34.779 00:35:34.779 --- 10.0.0.1 ping statistics --- 00:35:34.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:34.779 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:35:34.779 11:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:34.779 11:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:35:34.779 11:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:34.779 11:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:34.779 11:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:34.779 11:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:34.779 11:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:34.779 11:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:34.779 11:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:34.779 11:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:35:34.779 11:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:34.779 11:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:34.779 11:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:35:34.779 11:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3987268 00:35:34.779 11:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:35:34.779 11:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3987268 00:35:34.779 11:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # '[' -z 3987268 ']' 00:35:34.779 11:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:34.779 11:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:34.779 11:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:34.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:35:34.779 11:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:34.779 11:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:35:35.039 [2024-11-02 11:47:35.200004] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:35.039 [2024-11-02 11:47:35.201098] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:35:35.039 [2024-11-02 11:47:35.201161] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:35.039 [2024-11-02 11:47:35.281208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:35.039 [2024-11-02 11:47:35.330364] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:35.039 [2024-11-02 11:47:35.330418] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:35.039 [2024-11-02 11:47:35.330449] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:35.039 [2024-11-02 11:47:35.330468] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:35.039 [2024-11-02 11:47:35.330479] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:35.039 [2024-11-02 11:47:35.331957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:35.039 [2024-11-02 11:47:35.332042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:35.039 [2024-11-02 11:47:35.332046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:35.039 [2024-11-02 11:47:35.421646] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:35.039 [2024-11-02 11:47:35.421858] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:35.039 [2024-11-02 11:47:35.421896] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:35.039 [2024-11-02 11:47:35.422105] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
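Note: the interface plumbing and target launch traced above condense to roughly the shell sequence below. Device names (cvl_0_0/cvl_0_1), the 10.0.0.x addresses, and the nvmf_tgt flags are taken from the trace; paths are shortened and the harness wrappers (nvmf/common.sh, the iptables comment tag) are omitted, so treat this as a sketch rather than the exact helper code.

# Put the target-side port in its own network namespace and address both ends
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side, host namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side, inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open the NVMe/TCP port and confirm reachability in both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

# Start the NVMe-oF target inside the namespace, interrupt mode enabled, core mask 0xE
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE &

The reactor and interrupt-mode notices that follow in the log confirm the 0xE mask (cores 1, 2, 3) and that the app thread and each nvmf poll group thread came up in interrupt mode.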
00:35:35.298 11:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:35.298 11:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@866 -- # return 0 00:35:35.298 11:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:35.298 11:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:35.298 11:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:35:35.298 11:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:35.298 11:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:35:35.298 11:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:35:35.556 [2024-11-02 11:47:35.724785] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:35.556 11:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:35:35.814 11:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:36.072 [2024-11-02 11:47:36.281198] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:36.072 11:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:36.330 11:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:35:36.589 Malloc0 00:35:36.589 11:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:35:36.847 Delay0 00:35:36.847 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:37.104 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:35:37.362 NULL1 00:35:37.362 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
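Note: condensed from the RPC traces above, the subsystem provisioning amounts to the following sequence (arguments copied from the trace; rpc.py stands for scripts/rpc.py in the SPDK tree):

# Transport, subsystem, and listeners
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# Backing bdevs: a delay bdev layered on a 32 MiB malloc bdev (latency arguments in
# microseconds), plus a 1000 MiB null bdev that the stress loop will resize later
rpc.py bdev_malloc_create 32 512 -b Malloc0
rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc.py bdev_null_create NULL1 1000 512

# Initial namespaces; NSIDs are auto-assigned, so Delay0 lands in NSID 1 and NULL1 in NSID 2,
# which is what the later "remove_ns 1 / add_ns Delay0" loop relies on
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1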
00:35:37.620 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3987563 00:35:37.620 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:35:37.620 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3987563 00:35:37.620 11:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:38.996 Read completed with error (sct=0, sc=11) 00:35:38.996 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:38.996 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:38.996 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:38.996 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:38.996 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:38.996 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:38.996 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:38.996 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:38.996 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:35:38.996 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:35:39.255 true 00:35:39.255 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3987563 00:35:39.255 11:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:40.263 11:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:40.523 11:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:35:40.523 11:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:35:40.523 true 00:35:40.782 11:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3987563 00:35:40.782 11:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:41.041 11:47:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:41.299 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:35:41.299 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:35:41.557 true 00:35:41.557 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3987563 00:35:41.557 11:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:41.814 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:42.072 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:35:42.072 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:35:42.330 true 00:35:42.330 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3987563 00:35:42.330 11:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:43.266 11:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:43.266 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:43.266 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:43.266 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:43.525 11:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:35:43.525 11:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:35:43.783 true 00:35:43.783 11:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3987563 00:35:43.783 11:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:44.041 11:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:44.299 11:47:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:35:44.299 11:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:35:44.556 true 00:35:44.556 11:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3987563 00:35:44.556 11:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:45.494 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:45.752 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:35:45.752 11:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:35:46.010 true 00:35:46.010 11:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3987563 00:35:46.010 11:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:46.267 11:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:46.525 11:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:35:46.525 11:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:35:46.783 true 00:35:46.783 11:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3987563 00:35:46.783 11:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:47.041 11:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:47.298 11:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:35:47.298 11:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:35:47.556 true 00:35:47.556 11:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3987563 00:35:47.556 11:47:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:48.488 11:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:48.746 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:35:48.746 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:35:49.004 true 00:35:49.004 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3987563 00:35:49.004 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:49.262 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:49.519 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:35:49.519 11:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:35:49.777 true 00:35:49.777 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3987563 00:35:49.777 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:50.035 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:50.600 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:35:50.600 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:35:50.600 true 00:35:50.600 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3987563 00:35:50.600 11:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:51.533 11:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:51.791 11:47:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:35:51.791 11:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:35:52.049 true 00:35:52.049 11:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3987563 00:35:52.049 11:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:52.307 11:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:52.872 11:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:35:52.872 11:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:35:52.872 true 00:35:52.872 11:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3987563 00:35:52.872 11:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:53.130 11:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:53.387 11:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:35:53.387 11:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:35:53.645 true 00:35:53.903 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3987563 00:35:53.903 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:54.836 11:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:55.094 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:35:55.094 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:35:55.352 true 00:35:55.352 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3987563 00:35:55.352 11:47:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:55.609 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:55.867 11:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:35:55.868 11:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:35:56.125 true 00:35:56.125 11:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3987563 00:35:56.125 11:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:56.383 11:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:56.640 11:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:35:56.640 11:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:35:56.898 true 00:35:56.898 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3987563 00:35:56.898 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:57.886 11:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:57.886 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:58.143 11:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:35:58.143 11:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:35:58.401 true 00:35:58.401 11:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3987563 00:35:58.401 11:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:58.659 11:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:35:58.916 11:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:35:58.916 11:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:35:59.174 true 00:35:59.174 11:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3987563 00:35:59.174 11:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:59.431 11:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:59.997 11:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:35:59.997 11:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:35:59.997 true 00:35:59.997 11:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3987563 00:35:59.997 11:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:00.929 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:01.186 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:36:01.186 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:36:01.444 true 00:36:01.444 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3987563 00:36:01.444 11:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:02.009 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:02.009 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:36:02.009 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:36:02.267 true 00:36:02.267 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- 
# kill -0 3987563 00:36:02.267 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:02.525 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:03.090 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:36:03.090 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:36:03.090 true 00:36:03.347 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3987563 00:36:03.347 11:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:04.280 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:04.280 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:04.280 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:04.280 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:04.280 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:36:04.280 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:36:04.537 true 00:36:04.537 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3987563 00:36:04.537 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:05.102 11:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:05.102 11:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:36:05.102 11:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:36:05.359 true 00:36:05.649 11:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3987563 00:36:05.650 11:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:05.650 
11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:05.934 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:36:05.934 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:36:06.192 true 00:36:06.449 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3987563 00:36:06.449 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:07.384 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:07.643 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:36:07.643 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:36:07.901 true 00:36:07.901 11:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3987563 00:36:07.901 11:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:07.901 Initializing NVMe Controllers 00:36:07.901 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:07.901 Controller IO queue size 128, less than required. 00:36:07.901 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:36:07.901 Controller IO queue size 128, less than required. 00:36:07.901 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:36:07.901 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:36:07.901 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:36:07.901 Initialization complete. Launching workers. 
00:36:07.901 ======================================================== 00:36:07.901 Latency(us) 00:36:07.901 Device Information : IOPS MiB/s Average min max 00:36:07.901 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 521.83 0.25 100149.07 3477.57 1042066.31 00:36:07.901 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 8774.35 4.28 14588.26 2703.44 445107.72 00:36:07.901 ======================================================== 00:36:07.901 Total : 9296.18 4.54 19391.12 2703.44 1042066.31 00:36:07.901 00:36:08.160 11:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:08.418 11:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:36:08.418 11:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:36:08.676 true 00:36:08.676 11:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3987563 00:36:08.676 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3987563) - No such process 00:36:08.676 11:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3987563 00:36:08.676 11:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:08.934 11:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:09.194 11:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:36:09.194 11:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:36:09.194 11:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:36:09.194 11:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:09.194 11:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:36:09.457 null0 00:36:09.458 11:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:09.458 11:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:09.458 11:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:36:09.717 null1 00:36:09.717 11:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:09.717 
11:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:09.717 11:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:36:09.976 null2 00:36:09.976 11:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:09.976 11:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:09.976 11:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:36:10.234 null3 00:36:10.234 11:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:10.234 11:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:10.234 11:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:36:10.493 null4 00:36:10.493 11:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:10.493 11:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:10.493 11:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:36:10.753 null5 00:36:10.754 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:10.754 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:10.754 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:36:11.013 null6 00:36:11.013 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:11.013 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:11.013 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:36:11.272 null7 00:36:11.272 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:11.272 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:11.272 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:36:11.272 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:11.272 11:48:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:36:11.272 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:36:11.272 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:11.272 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:36:11.272 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:11.272 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:11.272 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:11.272 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:11.272 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:36:11.272 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:11.272 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:11.272 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:36:11.272 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:36:11.272 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:36:11.272 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:36:11.272 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:11.272 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:11.272 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:36:11.272 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:11.272 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:11.272 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:11.272 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:11.272 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:11.272 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:11.272 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:36:11.272 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:11.272 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:36:11.272 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:11.272 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:36:11.272 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:11.272 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:11.272 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:11.272 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:36:11.272 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:36:11.273 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:11.273 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:36:11.273 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:11.273 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:11.273 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:11.273 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:11.273 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:36:11.273 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:36:11.273 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:11.273 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:11.273 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:36:11.273 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:11.273 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:11.273 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:11.273 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:36:11.273 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:36:11.273 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:11.273 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:36:11.273 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:11.273 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:11.273 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:11.273 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:11.273 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:36:11.273 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:36:11.273 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:11.273 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:36:11.273 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:11.273 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:11.273 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3992193 3992194 3992196 3992197 3992200 3992202 3992204 3992206 00:36:11.273 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:11.273 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:11.841 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:11.841 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:11.841 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:11.841 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:11.841 11:48:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:11.841 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:11.841 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:11.841 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:12.099 11:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:12.099 11:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:12.099 11:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:12.099 11:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:12.099 11:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:12.099 11:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:12.100 11:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:12.100 11:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:12.100 11:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:12.100 11:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:12.100 11:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:12.100 11:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:12.100 11:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:12.100 11:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:12.100 11:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 
nqn.2016-06.io.spdk:cnode1 null4 00:36:12.100 11:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:12.100 11:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:12.100 11:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:12.100 11:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:12.100 11:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:12.100 11:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:12.100 11:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:12.100 11:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:12.100 11:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:12.358 11:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:12.358 11:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:12.358 11:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:12.358 11:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:12.358 11:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:12.358 11:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:12.358 11:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:12.358 11:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 
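The bulk of the trace in this stretch is the body of add_remove itself (script lines @14-@18): each worker attaches its bdev to the subsystem as a namespace and immediately detaches it again, ten times in a row. A sketch of that worker, reconstructed from the traced commands (the rpc.py path and argument forms are copied verbatim from the log; any error handling in the real script is omitted):

    add_remove() {
        local nsid=$1 bdev=$2        # e.g. "add_remove 5 null4" as seen in the trace
        for ((i = 0; i < 10; i++)); do
            # hot-add the namespace, then hot-remove it from the same subsystem
            /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
                nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
                nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }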
00:36:12.616 11:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:12.616 11:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:12.616 11:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:12.616 11:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:12.616 11:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:12.616 11:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:12.616 11:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:12.616 11:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:12.616 11:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:12.616 11:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:12.617 11:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:12.617 11:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:12.617 11:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:12.617 11:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:12.617 11:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:12.617 11:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:12.617 11:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:12.617 11:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:12.617 11:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:12.617 11:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:12.617 11:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:12.617 11:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:12.617 11:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:12.617 11:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:12.875 11:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:12.875 11:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:12.875 11:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:12.875 11:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:12.875 11:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:12.875 11:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:12.875 11:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:12.875 11:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:13.133 11:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:13.133 11:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:13.133 11:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:13.133 11:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:13.133 11:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:13.133 11:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:13.133 11:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:13.133 11:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:13.133 11:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:13.133 11:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:13.133 11:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:13.133 11:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:13.133 11:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:13.134 11:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:13.134 11:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:13.134 11:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:13.134 11:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:13.134 11:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:13.134 11:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:13.134 11:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:13.134 11:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:13.134 11:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:13.134 11:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:13.134 11:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:13.392 11:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:13.392 11:48:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:13.392 11:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:13.392 11:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:13.392 11:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:13.392 11:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:13.392 11:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:13.392 11:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:13.961 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:13.961 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:13.961 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:13.961 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:13.961 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:13.961 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:13.961 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:13.961 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:13.961 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:13.961 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:13.961 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:13.961 11:48:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:13.961 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:13.961 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:13.961 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:13.961 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:13.961 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:13.961 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:13.961 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:13.961 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:13.961 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:13.961 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:13.961 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:13.961 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:13.961 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:14.219 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:14.219 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:14.219 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:14.219 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:14.219 
11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:14.219 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:14.220 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:14.478 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:14.478 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:14.478 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:14.478 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:14.478 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:14.478 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:14.478 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:14.478 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:14.478 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:14.478 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:14.478 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:14.478 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:14.478 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:14.478 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:14.478 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:14.478 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:14.478 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:14.478 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:14.478 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:14.478 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:14.478 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:14.478 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:14.478 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:14.478 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:14.736 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:14.736 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:14.736 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:14.736 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:14.736 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:14.736 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:14.736 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:14.736 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:14.994 11:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:14.994 11:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:36:14.994 11:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:14.994 11:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:14.994 11:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:14.994 11:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:14.994 11:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:14.994 11:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:14.994 11:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:14.994 11:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:14.994 11:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:14.994 11:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:14.994 11:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:14.994 11:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:14.994 11:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:14.994 11:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:14.994 11:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:14.994 11:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:14.994 11:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:14.994 11:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:14.994 11:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:14.994 11:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:14.994 
11:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:14.994 11:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:15.252 11:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:15.252 11:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:15.252 11:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:15.252 11:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:15.252 11:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:15.252 11:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:15.252 11:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:15.252 11:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:15.511 11:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:15.511 11:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:15.511 11:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:15.511 11:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:15.511 11:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:15.511 11:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:15.511 11:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:15.511 11:48:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:15.511 11:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:15.511 11:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:15.511 11:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:15.511 11:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:15.511 11:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:15.511 11:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:15.511 11:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:15.511 11:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:15.511 11:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:15.511 11:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:15.511 11:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:15.511 11:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:15.511 11:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:15.511 11:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:15.511 11:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:15.511 11:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:15.771 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:15.771 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:16.029 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:16.029 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:16.029 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:16.029 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:16.029 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:16.029 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:16.287 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:16.287 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:16.287 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:16.287 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:16.287 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:16.287 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:16.287 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:16.287 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:16.287 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:16.287 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:16.287 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:16.287 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:16.287 11:48:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:16.287 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:16.287 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:16.287 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:16.287 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:16.287 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:16.287 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:16.287 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:16.287 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:16.287 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:16.287 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:16.287 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:16.546 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:16.546 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:16.546 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:16.546 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:16.546 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:16.546 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:16.546 
11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:16.546 11:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:16.805 11:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:16.805 11:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:16.805 11:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:16.805 11:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:16.805 11:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:16.805 11:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:16.805 11:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:16.805 11:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:16.805 11:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:16.805 11:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:16.805 11:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:16.805 11:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:16.805 11:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:16.805 11:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:16.805 11:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:16.805 11:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:16.805 11:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:16.805 11:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:16.805 11:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:16.805 11:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:16.805 11:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:16.805 11:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:16.805 11:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:16.805 11:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:17.064 11:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:17.064 11:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:17.064 11:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:17.064 11:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:17.064 11:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:17.064 11:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:17.064 11:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:17.064 11:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:17.322 11:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:17.322 11:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:17.322 11:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:17.322 11:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:17.322 11:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:17.322 11:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:17.322 11:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:17.322 11:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:17.322 11:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:17.322 11:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:17.322 11:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:17.322 11:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:17.322 11:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:17.322 11:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:17.322 11:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:17.322 11:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:17.323 11:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:36:17.323 11:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:36:17.323 11:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:17.323 11:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:36:17.323 11:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:17.323 11:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:36:17.323 11:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:17.323 11:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:17.323 rmmod nvme_tcp 00:36:17.581 rmmod nvme_fabrics 00:36:17.581 rmmod nvme_keyring 00:36:17.581 11:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:17.581 11:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:36:17.581 11:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:36:17.581 11:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3987268 ']' 00:36:17.581 11:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3987268 00:36:17.581 11:48:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' -z 3987268 ']' 00:36:17.581 11:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # kill -0 3987268 00:36:17.581 11:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # uname 00:36:17.581 11:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:17.581 11:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3987268 00:36:17.581 11:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:36:17.581 11:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:36:17.581 11:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3987268' 00:36:17.581 killing process with pid 3987268 00:36:17.581 11:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # kill 3987268 00:36:17.581 11:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@976 -- # wait 3987268 00:36:17.840 11:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:17.840 11:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:17.840 11:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:17.840 11:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:36:17.840 11:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:36:17.840 11:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:17.840 11:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:36:17.840 11:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:17.840 11:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:17.840 11:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:17.840 11:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:17.840 11:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:19.744 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:19.744 00:36:19.744 real 0m47.303s 00:36:19.744 user 3m17.877s 00:36:19.744 sys 0m22.120s 00:36:19.744 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:19.744 11:48:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:19.744 ************************************ 00:36:19.744 END TEST nvmf_ns_hotplug_stress 00:36:19.744 ************************************ 00:36:19.744 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:36:19.744 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:36:19.744 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:19.744 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:20.003 ************************************ 00:36:20.003 START TEST nvmf_delete_subsystem 00:36:20.003 ************************************ 00:36:20.003 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:36:20.003 * Looking for test storage... 00:36:20.003 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:20.003 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:36:20.003 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:36:20.003 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:36:20.003 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:36:20.003 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:20.003 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:20.003 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:20.003 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:36:20.003 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:36:20.003 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:36:20.003 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:36:20.003 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:36:20.003 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:36:20.003 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:36:20.003 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:20.003 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:36:20.003 11:48:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:36:20.003 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:20.003 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:20.003 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:36:20.003 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:36:20.003 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:20.003 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:36:20.003 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:36:20.003 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:36:20.003 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:36:20.003 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:20.003 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:36:20.003 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:36:20.003 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:20.003 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:20.003 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:36:20.003 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:20.003 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:36:20.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:20.003 --rc genhtml_branch_coverage=1 00:36:20.003 --rc genhtml_function_coverage=1 00:36:20.003 --rc genhtml_legend=1 00:36:20.003 --rc geninfo_all_blocks=1 00:36:20.003 --rc geninfo_unexecuted_blocks=1 00:36:20.003 00:36:20.003 ' 00:36:20.003 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:36:20.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:20.003 --rc genhtml_branch_coverage=1 00:36:20.003 --rc genhtml_function_coverage=1 00:36:20.003 --rc genhtml_legend=1 00:36:20.003 --rc geninfo_all_blocks=1 00:36:20.003 --rc geninfo_unexecuted_blocks=1 00:36:20.003 00:36:20.003 ' 00:36:20.003 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:36:20.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:20.003 --rc genhtml_branch_coverage=1 00:36:20.003 --rc genhtml_function_coverage=1 00:36:20.003 --rc genhtml_legend=1 00:36:20.003 --rc geninfo_all_blocks=1 00:36:20.003 --rc 
geninfo_unexecuted_blocks=1 00:36:20.003 00:36:20.003 ' 00:36:20.003 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:20.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:20.003 --rc genhtml_branch_coverage=1 00:36:20.003 --rc genhtml_function_coverage=1 00:36:20.003 --rc genhtml_legend=1 00:36:20.003 --rc geninfo_all_blocks=1 00:36:20.003 --rc geninfo_unexecuted_blocks=1 00:36:20.003 00:36:20.003 ' 00:36:20.003 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:20.003 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:36:20.003 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:20.003 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:20.003 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:20.003 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:20.003 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:20.003 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:20.003 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:20.003 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:20.003 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:20.003 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:20.003 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:20.003 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:20.003 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:20.003 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:20.003 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:20.003 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:20.004 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:20.004 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:36:20.004 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:36:20.004 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:20.004 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:20.004 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:20.004 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:20.004 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:20.004 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:36:20.004 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:20.004 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:36:20.004 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:20.004 11:48:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:20.004 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:20.004 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:20.004 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:20.004 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:20.004 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:20.004 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:20.004 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:20.004 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:20.004 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:36:20.004 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:20.004 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:20.004 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:20.004 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:20.004 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:20.004 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:20.004 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:20.004 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:20.004 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:20.004 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:20.004 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:36:20.004 11:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:21.907 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:21.907 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:36:21.907 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:21.907 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:21.907 11:48:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:21.907 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:21.907 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:21.907 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:36:21.907 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:21.907 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:36:21.907 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:36:21.907 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:36:21.907 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:36:21.907 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:36:21.907 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:36:21.907 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:21.907 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:21.907 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:21.907 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:21.907 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:21.907 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:21.907 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:21.908 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:21.908 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:21.908 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:21.908 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:21.908 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:21.908 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:21.908 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:21.908 11:48:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:21.908 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:21.908 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:21.908 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:21.908 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:21.908 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:21.908 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:21.908 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:21.908 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:21.908 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:21.908 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:21.908 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:21.908 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:21.908 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:21.908 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:21.908 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:21.908 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:21.908 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:21.908 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:21.908 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:21.908 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:21.908 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:21.908 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:21.908 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:21.908 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:21.908 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:21.908 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:21.908 11:48:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:21.908 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:21.908 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:21.908 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:21.908 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:21.908 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:21.908 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:21.908 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:21.908 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:21.908 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:21.908 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:21.908 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:21.908 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:21.908 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:21.908 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:21.908 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:21.908 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:21.908 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:36:21.908 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:21.908 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:21.908 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:21.908 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:21.908 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:21.908 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:21.908 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:21.908 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:21.908 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:36:21.908 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:21.908 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:21.908 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:21.908 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:21.908 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:21.908 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:21.908 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:21.908 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:21.908 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:22.167 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:22.167 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:22.167 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:22.167 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:22.167 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:22.167 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:22.167 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:22.167 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:22.167 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:22.167 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:36:22.167 00:36:22.167 --- 10.0.0.2 ping statistics --- 00:36:22.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:22.167 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:36:22.167 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:22.167 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:22.167 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.056 ms 00:36:22.167 00:36:22.167 --- 10.0.0.1 ping statistics --- 00:36:22.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:22.167 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:36:22.167 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:22.167 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:36:22.167 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:22.167 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:22.167 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:22.167 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:22.167 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:22.167 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:22.167 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:22.167 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:36:22.167 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:22.167 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:22.167 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:22.167 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3995061 00:36:22.167 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3995061 00:36:22.167 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # '[' -z 3995061 ']' 00:36:22.167 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:22.167 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:22.167 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:36:22.167 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:22.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
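The trace up to this point is the network bring-up and target launch that nvmftestinit/nvmfappstart perform for this run: the target-side port is isolated in a network namespace, addresses and an iptables rule for TCP port 4420 are installed, connectivity is verified with ping in both directions, and nvmf_tgt is started inside the namespace in interrupt mode. Condensed into a standalone sketch (interface names, addresses and flags are taken from the trace above; the readiness loop at the end is an illustrative stand-in for the harness's waitforlisten helper, not the harness code itself):

  # Minimal sketch of the bring-up traced above; SPDK_DIR is assumed to point at the spdk checkout.
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  NS=cvl_0_0_ns_spdk

  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"                      # target-side port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                   # root namespace -> target namespace
  ip netns exec "$NS" ping -c 1 10.0.0.1               # target namespace -> root namespace

  # Launch the target inside the namespace: interrupt mode, cores 0-1, all tracepoint groups enabled.
  ip netns exec "$NS" "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
  # Stand-in for waitforlisten: poll the default RPC socket until the target answers.
  until "$SPDK_DIR/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done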
00:36:22.167 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:22.167 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:22.167 [2024-11-02 11:48:22.475097] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:22.167 [2024-11-02 11:48:22.476208] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:36:22.167 [2024-11-02 11:48:22.476281] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:22.167 [2024-11-02 11:48:22.556892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:36:22.426 [2024-11-02 11:48:22.604751] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:22.426 [2024-11-02 11:48:22.604818] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:22.426 [2024-11-02 11:48:22.604834] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:22.426 [2024-11-02 11:48:22.604848] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:22.426 [2024-11-02 11:48:22.604861] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:22.426 [2024-11-02 11:48:22.606293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:22.426 [2024-11-02 11:48:22.606315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:22.426 [2024-11-02 11:48:22.697526] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:22.426 [2024-11-02 11:48:22.697616] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:22.426 [2024-11-02 11:48:22.697862] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
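At this point the application log confirms two reactors and the spdk_threads running in interrupt mode, and it names its own observability hooks. A few optional spot-checks one could run against the freshly started target, continuing the sketch above (rpc.py talks to /var/tmp/spdk.sock by default; the exact JSON fields returned vary by SPDK version):

  "$SPDK_DIR/scripts/rpc.py" framework_get_reactors    # expect the two reactors implied by -m 0x3
  "$SPDK_DIR/scripts/rpc.py" thread_get_stats          # per-thread poller/busy statistics
  "$SPDK_DIR/build/bin/spdk_trace" -s nvmf -i 0        # snapshot the 0xFFFF tracepoints, as the notice above suggests
  cp /dev/shm/nvmf_trace.0 /tmp/                       # or keep the raw trace buffer for offline analysis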
00:36:22.426 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:22.426 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@866 -- # return 0 00:36:22.426 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:22.426 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:22.426 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:22.426 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:22.426 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:22.426 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:22.426 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:22.426 [2024-11-02 11:48:22.747024] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:22.426 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:22.426 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:36:22.426 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:22.426 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:22.426 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:22.426 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:22.426 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:22.426 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:22.426 [2024-11-02 11:48:22.767275] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:22.426 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:22.426 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:36:22.426 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:22.426 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:22.426 NULL1 00:36:22.426 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:22.426 11:48:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:36:22.426 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:22.426 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:22.426 Delay0 00:36:22.426 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:22.426 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:22.426 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:22.426 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:22.426 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:22.426 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3995090 00:36:22.426 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:36:22.426 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:36:22.684 [2024-11-02 11:48:22.851333] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
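The RPC sequence traced above builds the target side of the delete-while-I/O-is-outstanding scenario: a TCP transport, one subsystem with a listener on 10.0.0.2:4420, and a null bdev wrapped in a delay bdev so that every command sits in flight for roughly a second, with spdk_nvme_perf started in the background as the load. Collected into one sketch, continuing from the bring-up above and including the deletion and wait loop that the remainder of the trace below exercises (the loop is a paraphrase of what delete_subsystem.sh traces, not a verbatim copy):

  rpc="$SPDK_DIR/scripts/rpc.py"

  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_null_create NULL1 1000 512                 # 1000 MB null backing device, 512-byte blocks
  $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000   # ~1 s of injected latency
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

  # Queue up I/O from cores 2-3 against the slow namespace, then pull the subsystem out from under it.
  "$SPDK_DIR/build/bin/spdk_nvme_perf" -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!
  sleep 2
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

  # Give perf up to ~15 s to notice the dead controller and exit.
  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do
      (( delay++ > 30 )) && exit 1
      sleep 0.5
  done
  # Queued commands are failed back ("completed with error (sct=0, sc=8)" in the trace below),
  # so a non-zero exit status from perf is the expected, passing outcome here.
  if wait "$perf_pid"; then exit 1; fi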
00:36:24.579 11:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:24.579 11:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:24.579 11:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:24.579 Write completed with error (sct=0, sc=8) 00:36:24.579 Read completed with error (sct=0, sc=8) 00:36:24.579 Read completed with error (sct=0, sc=8) 00:36:24.579 starting I/O failed: -6 00:36:24.579 Read completed with error (sct=0, sc=8) 00:36:24.579 Read completed with error (sct=0, sc=8) 00:36:24.579 Read completed with error (sct=0, sc=8) 00:36:24.579 Write completed with error (sct=0, sc=8) 00:36:24.579 starting I/O failed: -6 00:36:24.579 Read completed with error (sct=0, sc=8) 00:36:24.579 Read completed with error (sct=0, sc=8) 00:36:24.579 Write completed with error (sct=0, sc=8) 00:36:24.579 Read completed with error (sct=0, sc=8) 00:36:24.579 starting I/O failed: -6 00:36:24.579 Read completed with error (sct=0, sc=8) 00:36:24.579 Write completed with error (sct=0, sc=8) 00:36:24.579 Read completed with error (sct=0, sc=8) 00:36:24.579 Read completed with error (sct=0, sc=8) 00:36:24.579 starting I/O failed: -6 00:36:24.579 Read completed with error (sct=0, sc=8) 00:36:24.579 Read completed with error (sct=0, sc=8) 00:36:24.579 Read completed with error (sct=0, sc=8) 00:36:24.579 Write completed with error (sct=0, sc=8) 00:36:24.579 starting I/O failed: -6 00:36:24.579 Read completed with error (sct=0, sc=8) 00:36:24.579 Read completed with error (sct=0, sc=8) 00:36:24.579 Read completed with error (sct=0, sc=8) 00:36:24.579 Read completed with error (sct=0, sc=8) 00:36:24.579 starting I/O failed: -6 00:36:24.579 Read completed with error (sct=0, sc=8) 00:36:24.579 Read completed with error (sct=0, sc=8) 00:36:24.579 Read completed with error (sct=0, sc=8) 00:36:24.579 Write completed with error (sct=0, sc=8) 00:36:24.579 starting I/O failed: -6 00:36:24.579 Write completed with error (sct=0, sc=8) 00:36:24.579 Read completed with error (sct=0, sc=8) 00:36:24.579 Read completed with error (sct=0, sc=8) 00:36:24.579 Read completed with error (sct=0, sc=8) 00:36:24.579 starting I/O failed: -6 00:36:24.579 Read completed with error (sct=0, sc=8) 00:36:24.579 Write completed with error (sct=0, sc=8) 00:36:24.579 Read completed with error (sct=0, sc=8) 00:36:24.579 Read completed with error (sct=0, sc=8) 00:36:24.579 starting I/O failed: -6 00:36:24.579 Write completed with error (sct=0, sc=8) 00:36:24.580 Read completed with error (sct=0, sc=8) 00:36:24.580 Write completed with error (sct=0, sc=8) 00:36:24.580 Read completed with error (sct=0, sc=8) 00:36:24.580 starting I/O failed: -6 00:36:24.580 Read completed with error (sct=0, sc=8) 00:36:24.580 Write completed with error (sct=0, sc=8) 00:36:24.580 [2024-11-02 11:48:24.972655] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x115a510 is same with the state(6) to be set 00:36:24.580 Read completed with error (sct=0, sc=8) 00:36:24.580 Read completed with error (sct=0, sc=8) 00:36:24.580 starting I/O failed: -6 00:36:24.580 Read completed with error (sct=0, sc=8) 00:36:24.580 Read completed with error (sct=0, sc=8) 00:36:24.580 Write completed with error (sct=0, sc=8) 00:36:24.580 Read completed with error (sct=0, sc=8) 00:36:24.580 starting I/O 
failed: -6 00:36:24.580 Read completed with error (sct=0, sc=8) 00:36:24.580 Read completed with error (sct=0, sc=8) 00:36:24.580 Write completed with error (sct=0, sc=8) 00:36:24.580 Read completed with error (sct=0, sc=8) 00:36:24.580 starting I/O failed: -6 00:36:24.580 Read completed with error (sct=0, sc=8) 00:36:24.580 Read completed with error (sct=0, sc=8) 00:36:24.580 Read completed with error (sct=0, sc=8) 00:36:24.580 Read completed with error (sct=0, sc=8) 00:36:24.580 starting I/O failed: -6 00:36:24.580 Read completed with error (sct=0, sc=8) 00:36:24.580 Read completed with error (sct=0, sc=8) 00:36:24.580 Write completed with error (sct=0, sc=8) 00:36:24.580 Write completed with error (sct=0, sc=8) 00:36:24.580 starting I/O failed: -6 00:36:24.580 Read completed with error (sct=0, sc=8) 00:36:24.580 Read completed with error (sct=0, sc=8) 00:36:24.580 Read completed with error (sct=0, sc=8) 00:36:24.580 Read completed with error (sct=0, sc=8) 00:36:24.580 starting I/O failed: -6 00:36:24.580 Read completed with error (sct=0, sc=8) 00:36:24.580 Read completed with error (sct=0, sc=8) 00:36:24.580 Write completed with error (sct=0, sc=8) 00:36:24.580 Write completed with error (sct=0, sc=8) 00:36:24.580 starting I/O failed: -6 00:36:24.580 Read completed with error (sct=0, sc=8) 00:36:24.580 Read completed with error (sct=0, sc=8) 00:36:24.580 Write completed with error (sct=0, sc=8) 00:36:24.580 Write completed with error (sct=0, sc=8) 00:36:24.580 starting I/O failed: -6 00:36:24.580 Read completed with error (sct=0, sc=8) 00:36:24.580 Read completed with error (sct=0, sc=8) 00:36:24.580 Write completed with error (sct=0, sc=8) 00:36:24.580 Read completed with error (sct=0, sc=8) 00:36:24.580 starting I/O failed: -6 00:36:24.580 Read completed with error (sct=0, sc=8) 00:36:24.580 Read completed with error (sct=0, sc=8) 00:36:24.580 Write completed with error (sct=0, sc=8) 00:36:24.580 Read completed with error (sct=0, sc=8) 00:36:24.580 starting I/O failed: -6 00:36:24.580 Read completed with error (sct=0, sc=8) 00:36:24.580 Write completed with error (sct=0, sc=8) 00:36:24.580 Read completed with error (sct=0, sc=8) 00:36:24.580 Write completed with error (sct=0, sc=8) 00:36:24.580 starting I/O failed: -6 00:36:24.580 Write completed with error (sct=0, sc=8) 00:36:24.580 Write completed with error (sct=0, sc=8) 00:36:24.580 [2024-11-02 11:48:24.974111] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9588000c00 is same with the state(6) to be set 00:36:24.580 Write completed with error (sct=0, sc=8) 00:36:24.580 Write completed with error (sct=0, sc=8) 00:36:24.580 Read completed with error (sct=0, sc=8) 00:36:24.580 Write completed with error (sct=0, sc=8) 00:36:24.580 Read completed with error (sct=0, sc=8) 00:36:24.580 Write completed with error (sct=0, sc=8) 00:36:24.580 Read completed with error (sct=0, sc=8) 00:36:24.580 Read completed with error (sct=0, sc=8) 00:36:24.580 Read completed with error (sct=0, sc=8) 00:36:24.580 Read completed with error (sct=0, sc=8) 00:36:24.580 Write completed with error (sct=0, sc=8) 00:36:24.580 Write completed with error (sct=0, sc=8) 00:36:24.580 Write completed with error (sct=0, sc=8) 00:36:24.580 Read completed with error (sct=0, sc=8) 00:36:24.580 Read completed with error (sct=0, sc=8) 00:36:24.580 Read completed with error (sct=0, sc=8) 00:36:24.580 Write completed with error (sct=0, sc=8) 00:36:24.580 Read completed with error (sct=0, sc=8) 00:36:24.580 Write completed with error (sct=0, 
sc=8) 00:36:24.580 Read completed with error (sct=0, sc=8)
00:36:24.580 [repeated Read/Write completed with error (sct=0, sc=8) completions for the I/O in flight while the subsystem is deleted]
00:36:24.580 [2024-11-02 11:48:24.974589] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x115a150 is same with the state(6) to be set
00:36:24.580 [further Read/Write completed with error (sct=0, sc=8) completions]
00:36:25.955 [2024-11-02 11:48:25.948812] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158190 is same with the state(6) to be set
00:36:25.955 [further Read/Write completed with error (sct=0, sc=8) completions]
00:36:25.955 [2024-11-02 11:48:25.976927] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f958800cfe0 is same with the state(6) to be set
00:36:25.955 [further Read/Write completed with error (sct=0, sc=8) completions]
00:36:25.955 [2024-11-02 11:48:25.977202] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f958800d7a0 is same with the state(6) to be set
00:36:25.955 [further Read/Write completed with error (sct=0, sc=8) completions]
00:36:25.955 [2024-11-02 11:48:25.977379] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1159f70 is same with the state(6) to be set
00:36:25.955 [further Read/Write completed with error (sct=0, sc=8) completions]
00:36:25.955 [2024-11-02 11:48:25.978079] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x115a330 is same with the state(6) to be set
00:36:25.955 Initializing NVMe Controllers
00:36:25.955 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:36:25.955 Controller IO queue size 128, less than required.
00:36:25.955 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:36:25.955 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:36:25.955 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:36:25.955 Initialization complete. Launching workers. 00:36:25.955 ======================================================== 00:36:25.955 Latency(us) 00:36:25.955 Device Information : IOPS MiB/s Average min max 00:36:25.955 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 160.89 0.08 916542.54 1792.24 1012024.89 00:36:25.955 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 166.85 0.08 901672.86 532.33 1012381.74 00:36:25.955 ======================================================== 00:36:25.955 Total : 327.75 0.16 908972.52 532.33 1012381.74 00:36:25.955 00:36:25.955 [2024-11-02 11:48:25.978521] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1158190 (9): Bad file descriptor 00:36:25.955 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:36:25.955 11:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.955 11:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:36:25.955 11:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3995090 00:36:25.955 11:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:36:26.214 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:36:26.214 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3995090 00:36:26.214 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3995090) - No such process 00:36:26.214 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3995090 00:36:26.214 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:36:26.214 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3995090 00:36:26.214 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:36:26.214 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:26.214 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:36:26.214 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:26.214 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 3995090 00:36:26.214 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:36:26.214 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:26.214 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:26.214 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:26.214 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:36:26.214 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:26.214 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:26.214 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:26.214 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:26.214 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:26.214 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:26.214 [2024-11-02 11:48:26.499280] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:26.214 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:26.214 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:26.214 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:26.214 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:26.214 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:26.214 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3995489 00:36:26.214 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:36:26.214 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:36:26.214 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3995489 00:36:26.214 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:36:26.214 [2024-11-02 11:48:26.559718] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
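The wait-loop iterations that follow (target/delete_subsystem.sh@57, @58 and @60 in the trace) poll the background spdk_nvme_perf process every 0.5 s while the subsystem is deleted underneath it. Below is a minimal bash sketch of that pattern, reconstructed from the xtrace rather than copied from delete_subsystem.sh; the 20-iteration bound comes from the trace, everything else is illustrative.

    # Sketch of the polling loop shown in the xtrace (illustrative, not the
    # verbatim delete_subsystem.sh source).
    perf_pid=$!                    # spdk_nvme_perf started in the background above
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do        # perf still running?
        (( delay++ > 20 )) && { echo "perf did not exit in time" >&2; exit 1; }
        sleep 0.5
    done
    # Once kill -0 reports "No such process", the script reaps the PID and
    # requires a non-zero exit status (the trace expresses this with the
    # autotest NOT helper), because the subsystem was deleted under I/O:
    if wait "$perf_pid"; then
        echo "spdk_nvme_perf exited cleanly but errors were expected" >&2
        exit 1
    fi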
00:36:26.780 11:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:36:26.780 11:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3995489 00:36:26.780 11:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:36:27.345 11:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:36:27.345 11:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3995489 00:36:27.345 11:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:36:27.910 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:36:27.910 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3995489 00:36:27.910 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:36:28.168 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:36:28.168 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3995489 00:36:28.168 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:36:28.733 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:36:28.733 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3995489 00:36:28.733 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:36:29.299 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:36:29.299 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3995489 00:36:29.299 11:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:36:29.556 Initializing NVMe Controllers 00:36:29.556 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:29.556 Controller IO queue size 128, less than required. 00:36:29.556 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:36:29.556 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:36:29.556 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:36:29.556 Initialization complete. Launching workers. 
00:36:29.556 ======================================================== 00:36:29.556 Latency(us) 00:36:29.556 Device Information : IOPS MiB/s Average min max 00:36:29.556 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1005162.10 1000244.23 1041679.52 00:36:29.556 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004928.96 1000195.16 1041772.48 00:36:29.556 ======================================================== 00:36:29.556 Total : 256.00 0.12 1005045.53 1000195.16 1041772.48 00:36:29.556 00:36:29.814 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:36:29.814 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3995489 00:36:29.814 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3995489) - No such process 00:36:29.814 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3995489 00:36:29.814 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:36:29.814 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:36:29.814 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:29.814 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:36:29.814 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:29.814 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:36:29.814 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:29.814 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:29.814 rmmod nvme_tcp 00:36:29.814 rmmod nvme_fabrics 00:36:29.814 rmmod nvme_keyring 00:36:29.814 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:29.814 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:36:29.814 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:36:29.814 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3995061 ']' 00:36:29.815 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3995061 00:36:29.815 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' -z 3995061 ']' 00:36:29.815 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # kill -0 3995061 00:36:29.815 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # uname 00:36:29.815 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:29.815 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3995061 00:36:29.815 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:36:29.815 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:36:29.815 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3995061' 00:36:29.815 killing process with pid 3995061 00:36:29.815 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # kill 3995061 00:36:29.815 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@976 -- # wait 3995061 00:36:30.105 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:30.105 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:30.105 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:30.105 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:36:30.105 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:36:30.105 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:30.105 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:36:30.105 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:30.105 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:30.105 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:30.105 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:30.105 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:32.033 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:32.033 00:36:32.033 real 0m12.227s 00:36:32.033 user 0m24.581s 00:36:32.033 sys 0m3.780s 00:36:32.033 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:32.033 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:32.033 ************************************ 00:36:32.033 END TEST nvmf_delete_subsystem 00:36:32.033 ************************************ 00:36:32.033 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:36:32.033 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:36:32.033 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:36:32.033 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:32.033 ************************************ 00:36:32.033 START TEST nvmf_host_management 00:36:32.033 ************************************ 00:36:32.033 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:36:32.292 * Looking for test storage... 00:36:32.292 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:32.292 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:36:32.292 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:36:32.292 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:36:32.292 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:36:32.292 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:32.292 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:32.292 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:32.292 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:36:32.292 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:36:32.292 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:36:32.292 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:36:32.292 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:36:32.292 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:36:32.292 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:36:32.292 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:32.292 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:36:32.292 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:36:32.292 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:32.292 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:32.292 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:36:32.292 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:36:32.292 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:32.292 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:36:32.292 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:36:32.292 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:36:32.292 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:36:32.292 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:32.292 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:36:32.292 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:36:32.293 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:32.293 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:32.293 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:36:32.293 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:32.293 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:36:32.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:32.293 --rc genhtml_branch_coverage=1 00:36:32.293 --rc genhtml_function_coverage=1 00:36:32.293 --rc genhtml_legend=1 00:36:32.293 --rc geninfo_all_blocks=1 00:36:32.293 --rc geninfo_unexecuted_blocks=1 00:36:32.293 00:36:32.293 ' 00:36:32.293 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:36:32.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:32.293 --rc genhtml_branch_coverage=1 00:36:32.293 --rc genhtml_function_coverage=1 00:36:32.293 --rc genhtml_legend=1 00:36:32.293 --rc geninfo_all_blocks=1 00:36:32.293 --rc geninfo_unexecuted_blocks=1 00:36:32.293 00:36:32.293 ' 00:36:32.293 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:36:32.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:32.293 --rc genhtml_branch_coverage=1 00:36:32.293 --rc genhtml_function_coverage=1 00:36:32.293 --rc genhtml_legend=1 00:36:32.293 --rc geninfo_all_blocks=1 00:36:32.293 --rc geninfo_unexecuted_blocks=1 00:36:32.293 00:36:32.293 ' 00:36:32.293 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:32.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:32.293 --rc genhtml_branch_coverage=1 00:36:32.293 --rc genhtml_function_coverage=1 00:36:32.293 --rc genhtml_legend=1 
00:36:32.293 --rc geninfo_all_blocks=1 00:36:32.293 --rc geninfo_unexecuted_blocks=1 00:36:32.293 00:36:32.293 ' 00:36:32.293 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:32.293 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:36:32.293 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:32.293 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:32.293 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:32.293 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:32.293 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:32.293 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:32.293 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:32.293 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:32.293 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:32.293 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:32.293 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:32.293 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:32.293 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:32.293 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:32.293 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:32.293 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:32.293 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:32.293 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:36:32.293 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:32.293 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:32.293 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:32.293 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:32.293 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:32.293 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:32.293 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:36:32.293 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:32.293 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:36:32.293 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:32.293 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:32.293 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:32.293 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:32.293 11:48:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:32.293 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:32.293 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:32.293 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:32.293 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:32.293 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:32.293 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:32.293 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:32.293 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:36:32.293 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:32.293 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:32.293 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:32.293 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:32.293 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:32.293 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:32.293 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:32.293 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:32.293 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:32.293 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:32.293 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:36:32.293 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:34.195 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:34.195 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:36:34.195 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:34.195 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:34.195 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:34.195 11:48:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:34.195 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:34.195 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:36:34.195 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:34.195 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:36:34.195 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:36:34.195 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:36:34.195 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:36:34.195 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:36:34.195 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:36:34.195 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:34.196 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:34.196 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:34.196 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:34.196 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:34.196 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:34.196 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:34.196 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:34.196 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:34.196 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:34.196 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:34.196 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:34.196 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:34.196 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:34.196 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:34.196 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:34.196 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:34.196 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:34.196 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:34.196 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:34.196 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:34.196 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:34.196 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:34.196 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:34.196 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:34.196 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:34.196 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:34.196 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:34.196 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:34.196 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:34.196 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:34.196 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:34.196 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:34.196 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:34.196 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:34.196 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:34.196 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:34.196 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:34.196 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:34.196 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:34.196 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:34.196 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:34.196 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
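The device scan above matched both ports of an Intel E810 NIC (0x8086:0x159b); the next trace entries resolve each PCI address to its kernel net device through sysfs, which is how 0000:0a:00.0 and 0000:0a:00.1 end up as cvl_0_0 and cvl_0_1. A minimal sketch of that lookup follows, with the PCI address hard-coded for illustration; the real logic is the gather_supported_nvmf_pci_devs helper from nvmf/common.sh being traced here.

    # Resolve a PCI network function to its net interface via sysfs
    # (illustrative; mirrors the pci_net_devs handling in the trace).
    pci=0000:0a:00.0                                  # one of the E810 ports found above
    net_devs=()
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # e.g. .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")           # keep only the interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")                  # later consumed as TCP_INTERFACE_LIST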
00:36:34.196 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:34.196 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:34.196 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:34.196 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:34.196 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:34.196 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:34.196 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:34.196 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:34.196 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:34.196 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:34.196 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:34.196 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:34.196 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:34.196 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:34.196 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:34.196 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:36:34.196 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:34.196 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:34.196 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:34.196 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:34.196 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:34.196 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:34.196 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:34.196 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:34.196 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:34.196 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:34.196 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:34.196 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:34.196 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:34.196 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:34.196 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:34.196 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:34.196 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:34.196 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:34.455 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:34.455 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:34.455 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:34.455 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:34.455 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:34.455 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:34.455 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:34.455 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:34.455 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:34.455 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:36:34.455 00:36:34.455 --- 10.0.0.2 ping statistics --- 00:36:34.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:34.455 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:36:34.455 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:34.455 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:34.455 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:36:34.455 00:36:34.455 --- 10.0.0.1 ping statistics --- 00:36:34.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:34.455 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:36:34.455 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:34.455 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:36:34.455 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:34.455 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:34.455 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:34.455 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:34.455 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:34.455 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:34.455 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:34.455 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:36:34.455 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:36:34.455 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:36:34.455 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:34.455 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:34.455 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:34.455 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3997942 00:36:34.455 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:36:34.455 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3997942 00:36:34.455 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 3997942 ']' 00:36:34.455 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:34.455 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:34.455 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:36:34.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:34.455 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:34.455 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:34.455 [2024-11-02 11:48:34.761503] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:34.455 [2024-11-02 11:48:34.762651] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:36:34.455 [2024-11-02 11:48:34.762720] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:34.455 [2024-11-02 11:48:34.835639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:34.714 [2024-11-02 11:48:34.884444] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:34.714 [2024-11-02 11:48:34.884496] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:34.714 [2024-11-02 11:48:34.884524] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:34.714 [2024-11-02 11:48:34.884536] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:34.714 [2024-11-02 11:48:34.884545] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:34.714 [2024-11-02 11:48:34.886057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:34.714 [2024-11-02 11:48:34.886121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:34.714 [2024-11-02 11:48:34.886160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:36:34.714 [2024-11-02 11:48:34.886163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:34.714 [2024-11-02 11:48:34.968202] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:34.714 [2024-11-02 11:48:34.968363] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:34.714 [2024-11-02 11:48:34.968636] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:36:34.714 [2024-11-02 11:48:34.969171] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:34.715 [2024-11-02 11:48:34.969432] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
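At this point nvmfappstart has launched nvmf_tgt inside the cvl_0_0_ns_spdk namespace with --interrupt-mode and core mask 0x1E (cores 1 through 4, matching the four reactor notices above), and waitforlisten blocks until the target answers on /var/tmp/spdk.sock. A rough sketch of that start-and-wait sequence is shown below; the readiness probe via rpc.py rpc_get_methods is an approximation of what waitforlisten does, not a quote of it.

    # Approximate shape of nvmfappstart + waitforlisten (illustrative).
    NVMF_TARGET_NS_CMD=(ip netns exec cvl_0_0_ns_spdk)
    "${NVMF_TARGET_NS_CMD[@]}" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &
    nvmfpid=$!
    # Poll the RPC socket until the app services requests.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt died during startup" >&2; exit 1; }
        sleep 0.1
    done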
00:36:34.715 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:34.715 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:36:34.715 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:34.715 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:34.715 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:34.715 11:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:34.715 11:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:34.715 11:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:34.715 11:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:34.715 [2024-11-02 11:48:35.022911] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:34.715 11:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:34.715 11:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:36:34.715 11:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:34.715 11:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:34.715 11:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:36:34.715 11:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:36:34.715 11:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:36:34.715 11:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:34.715 11:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:34.715 Malloc0 00:36:34.715 [2024-11-02 11:48:35.095034] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:34.715 11:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:34.715 11:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:36:34.715 11:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:34.715 11:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:34.973 11:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3997990 00:36:34.973 11:48:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3997990 /var/tmp/bdevperf.sock 00:36:34.973 11:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 3997990 ']' 00:36:34.973 11:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:36:34.973 11:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:36:34.973 11:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:36:34.973 11:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:34.973 11:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:36:34.973 11:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:36:34.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:36:34.973 11:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:36:34.973 11:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:34.973 11:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:34.974 11:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:34.974 11:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:34.974 { 00:36:34.974 "params": { 00:36:34.974 "name": "Nvme$subsystem", 00:36:34.974 "trtype": "$TEST_TRANSPORT", 00:36:34.974 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:34.974 "adrfam": "ipv4", 00:36:34.974 "trsvcid": "$NVMF_PORT", 00:36:34.974 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:34.974 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:34.974 "hdgst": ${hdgst:-false}, 00:36:34.974 "ddgst": ${ddgst:-false} 00:36:34.974 }, 00:36:34.974 "method": "bdev_nvme_attach_controller" 00:36:34.974 } 00:36:34.974 EOF 00:36:34.974 )") 00:36:34.974 11:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:36:34.974 11:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
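The heredoc above is how gen_nvmf_target_json assembles the bdevperf configuration: one JSON fragment per subsystem, filled in from the test environment and pretty-printed with jq before being handed to bdevperf via /dev/fd/63. A single-fragment sketch follows, with defaults mirroring the rendered values that appear next in the log; the real helper additionally folds the fragments into the complete --json document, which is not reproduced here.

```bash
#!/usr/bin/env bash
# Sketch of the per-subsystem fragment built by gen_nvmf_target_json
# in the trace above. Defaults mirror the rendered config printed next
# in the log; wrapping the fragments into the full bdevperf --json
# document is left to the real helper.
set -euo pipefail

gen_controller_fragment() {
    local subsystem=${1:-0}
    cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "${TEST_TRANSPORT:-tcp}",
    "traddr": "${NVMF_FIRST_TARGET_IP:-10.0.0.2}",
    "adrfam": "ipv4",
    "trsvcid": "${NVMF_PORT:-4420}",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

# Pretty-print (and syntax-check) the fragment with jq, as the helper does.
gen_controller_fragment 0 | jq .
```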
00:36:34.974 11:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:36:34.974 11:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:34.974 "params": { 00:36:34.974 "name": "Nvme0", 00:36:34.974 "trtype": "tcp", 00:36:34.974 "traddr": "10.0.0.2", 00:36:34.974 "adrfam": "ipv4", 00:36:34.974 "trsvcid": "4420", 00:36:34.974 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:34.974 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:34.974 "hdgst": false, 00:36:34.974 "ddgst": false 00:36:34.974 }, 00:36:34.974 "method": "bdev_nvme_attach_controller" 00:36:34.974 }' 00:36:34.974 [2024-11-02 11:48:35.169783] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:36:34.974 [2024-11-02 11:48:35.169858] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3997990 ] 00:36:34.974 [2024-11-02 11:48:35.239427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:34.974 [2024-11-02 11:48:35.286900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:35.232 Running I/O for 10 seconds... 00:36:35.232 11:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:35.232 11:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:36:35.232 11:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:36:35.232 11:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:35.232 11:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:35.232 11:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:35.232 11:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:35.232 11:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:36:35.232 11:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:36:35.232 11:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:36:35.232 11:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:36:35.232 11:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:36:35.232 11:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:36:35.232 11:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:36:35.232 11:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:36:35.232 11:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:36:35.233 11:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:35.233 11:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:35.233 11:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:35.233 11:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:36:35.233 11:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:36:35.233 11:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:36:35.491 11:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:36:35.491 11:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:36:35.491 11:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:36:35.491 11:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:36:35.491 11:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:35.491 11:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:35.491 11:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:35.751 11:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:36:35.751 11:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:36:35.751 11:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:36:35.751 11:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:36:35.751 11:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:36:35.751 11:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:36:35.751 11:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:35.751 11:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:35.751 [2024-11-02 11:48:35.903136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:76672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.751 [2024-11-02 11:48:35.903198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:35.751 [2024-11-02 11:48:35.903227] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:76800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.751 [2024-11-02 11:48:35.903243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:35.751 [2024-11-02 11:48:35.903266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:76928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.751 [2024-11-02 11:48:35.903283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:35.751 [2024-11-02 11:48:35.903298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:77056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.751 [2024-11-02 11:48:35.903320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:35.751 [2024-11-02 11:48:35.903335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:77184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.751 [2024-11-02 11:48:35.903348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:35.751 [2024-11-02 11:48:35.903362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:77312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.751 [2024-11-02 11:48:35.903386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:35.751 [2024-11-02 11:48:35.903401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:77440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.751 [2024-11-02 11:48:35.903415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:35.751 [2024-11-02 11:48:35.903429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:77568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.751 [2024-11-02 11:48:35.903443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:35.751 [2024-11-02 11:48:35.903458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:77696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.751 [2024-11-02 11:48:35.903471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:35.751 [2024-11-02 11:48:35.903486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:77824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.751 [2024-11-02 11:48:35.903499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:35.751 [2024-11-02 11:48:35.903513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:77952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.751 [2024-11-02 11:48:35.903527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:35.751 [2024-11-02 11:48:35.903542] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:78080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.751 [2024-11-02 11:48:35.903555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:35.751 [2024-11-02 11:48:35.903573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:78208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.751 [2024-11-02 11:48:35.903586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:35.751 [2024-11-02 11:48:35.903601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.751 [2024-11-02 11:48:35.903614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:35.751 [2024-11-02 11:48:35.903634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:78464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.751 [2024-11-02 11:48:35.903647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:35.751 [2024-11-02 11:48:35.903662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:78592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.751 [2024-11-02 11:48:35.903675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:35.751 [2024-11-02 11:48:35.903690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.751 [2024-11-02 11:48:35.903703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:35.751 [2024-11-02 11:48:35.903718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.751 [2024-11-02 11:48:35.903730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:35.751 [2024-11-02 11:48:35.903750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.751 [2024-11-02 11:48:35.903764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:35.751 [2024-11-02 11:48:35.903778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.751 [2024-11-02 11:48:35.903791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:35.751 [2024-11-02 11:48:35.903806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.751 [2024-11-02 11:48:35.903819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:35.751 [2024-11-02 11:48:35.903834] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.751 [2024-11-02 11:48:35.903847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:35.751 [2024-11-02 11:48:35.903861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.751 [2024-11-02 11:48:35.903874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:35.751 [2024-11-02 11:48:35.903889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.751 [2024-11-02 11:48:35.903902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:35.751 [2024-11-02 11:48:35.903917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.751 [2024-11-02 11:48:35.903930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:35.751 [2024-11-02 11:48:35.903945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.751 [2024-11-02 11:48:35.903957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:35.751 [2024-11-02 11:48:35.903972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.751 [2024-11-02 11:48:35.903985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:35.751 [2024-11-02 11:48:35.904000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.751 [2024-11-02 11:48:35.904013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:35.752 [2024-11-02 11:48:35.904028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.752 [2024-11-02 11:48:35.904041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:35.752 [2024-11-02 11:48:35.904056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.752 [2024-11-02 11:48:35.904069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:35.752 [2024-11-02 11:48:35.904084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:80512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.752 [2024-11-02 11:48:35.904101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:35.752 [2024-11-02 11:48:35.904116] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.752 [2024-11-02 11:48:35.904129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:35.752 [2024-11-02 11:48:35.904144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.752 [2024-11-02 11:48:35.904157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:35.752 [2024-11-02 11:48:35.904173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.752 [2024-11-02 11:48:35.904186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:35.752 [2024-11-02 11:48:35.904201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.752 [2024-11-02 11:48:35.904214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:35.752 [2024-11-02 11:48:35.904229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.752 [2024-11-02 11:48:35.904242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:35.752 [2024-11-02 11:48:35.904263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:81280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.752 [2024-11-02 11:48:35.904279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:35.752 [2024-11-02 11:48:35.904294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:81408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.752 [2024-11-02 11:48:35.904316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:35.752 [2024-11-02 11:48:35.904330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:81536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.752 [2024-11-02 11:48:35.904343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:35.752 [2024-11-02 11:48:35.904358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.752 [2024-11-02 11:48:35.904371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:35.752 [2024-11-02 11:48:35.904385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.752 [2024-11-02 11:48:35.904398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:35.752 [2024-11-02 11:48:35.904413] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.752 [2024-11-02 11:48:35.904426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:35.752 [2024-11-02 11:48:35.904440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.752 [2024-11-02 11:48:35.904453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:35.752 [2024-11-02 11:48:35.904472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.752 [2024-11-02 11:48:35.904486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:35.752 [2024-11-02 11:48:35.904501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.752 [2024-11-02 11:48:35.904515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:35.752 [2024-11-02 11:48:35.904529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:74240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.752 [2024-11-02 11:48:35.904543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:35.752 [2024-11-02 11:48:35.904567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.752 [2024-11-02 11:48:35.904580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:35.752 [2024-11-02 11:48:35.904594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:74496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.752 [2024-11-02 11:48:35.904607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:35.752 [2024-11-02 11:48:35.904628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:74624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.752 [2024-11-02 11:48:35.904640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:35.752 [2024-11-02 11:48:35.904655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:74752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.752 [2024-11-02 11:48:35.904668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:35.752 [2024-11-02 11:48:35.904682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:74880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.752 [2024-11-02 11:48:35.904695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:35.752 [2024-11-02 11:48:35.904710] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:10 nsid:1 lba:75008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.752 [2024-11-02 11:48:35.904722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:35.752 [2024-11-02 11:48:35.904737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:75136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.752 [2024-11-02 11:48:35.904751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:35.752 [2024-11-02 11:48:35.904765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:75264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.752 [2024-11-02 11:48:35.904778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:35.752 [2024-11-02 11:48:35.904792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:75392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.752 [2024-11-02 11:48:35.904806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:35.752 [2024-11-02 11:48:35.904820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:75520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.752 [2024-11-02 11:48:35.904837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:35.752 [2024-11-02 11:48:35.904852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:75648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.752 [2024-11-02 11:48:35.904865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:35.752 [2024-11-02 11:48:35.904880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:75776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.752 [2024-11-02 11:48:35.904894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:35.752 [2024-11-02 11:48:35.904909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:75904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.752 [2024-11-02 11:48:35.904922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:35.752 [2024-11-02 11:48:35.904937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:76032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.752 [2024-11-02 11:48:35.904950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:35.752 [2024-11-02 11:48:35.904965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:76160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.752 [2024-11-02 11:48:35.904978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:35.752 [2024-11-02 11:48:35.904993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:20 nsid:1 lba:76288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.752 [2024-11-02 11:48:35.905006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:35.752 [2024-11-02 11:48:35.905020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:76416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.752 [2024-11-02 11:48:35.905033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:35.752 [2024-11-02 11:48:35.905048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:76544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.752 [2024-11-02 11:48:35.905061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:35.752 [2024-11-02 11:48:35.905101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:35.752 [2024-11-02 11:48:35.906328] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:36:35.752 11:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:35.752 11:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:36:35.752 11:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:35.752 11:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:35.752 task offset: 76672 on job bdev=Nvme0n1 fails 00:36:35.752 00:36:35.752 Latency(us) 00:36:35.752 [2024-11-02T10:48:36.155Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:35.753 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:36:35.753 Job: Nvme0n1 ended in about 0.40 seconds with error 00:36:35.753 Verification LBA range: start 0x0 length 0x400 00:36:35.753 Nvme0n1 : 0.40 1424.11 89.01 158.23 0.00 39315.36 2415.12 35729.26 00:36:35.753 [2024-11-02T10:48:36.155Z] =================================================================================================================== 00:36:35.753 [2024-11-02T10:48:36.155Z] Total : 1424.11 89.01 158.23 0.00 39315.36 2415.12 35729.26 00:36:35.753 [2024-11-02 11:48:35.908263] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:36:35.753 [2024-11-02 11:48:35.908292] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1591970 (9): Bad file descriptor 00:36:35.753 [2024-11-02 11:48:35.909541] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:36:35.753 [2024-11-02 11:48:35.909691] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:36:35.753 [2024-11-02 11:48:35.909722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:35.753 [2024-11-02 11:48:35.909745] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command 
failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:36:35.753 [2024-11-02 11:48:35.909762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:36:35.753 [2024-11-02 11:48:35.909777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:35.753 [2024-11-02 11:48:35.909789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1591970 00:36:35.753 [2024-11-02 11:48:35.909824] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1591970 (9): Bad file descriptor 00:36:35.753 [2024-11-02 11:48:35.909849] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:36:35.753 [2024-11-02 11:48:35.909863] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:36:35.753 [2024-11-02 11:48:35.909878] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:36:35.753 [2024-11-02 11:48:35.909902] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:36:35.753 11:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:35.753 11:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:36:36.686 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3997990 00:36:36.686 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3997990) - No such process 00:36:36.686 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:36:36.686 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:36:36.686 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:36:36.686 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:36:36.686 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:36:36.686 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:36:36.686 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:36.686 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:36.686 { 00:36:36.686 "params": { 00:36:36.686 "name": "Nvme$subsystem", 00:36:36.686 "trtype": "$TEST_TRANSPORT", 00:36:36.686 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:36.686 "adrfam": "ipv4", 00:36:36.686 "trsvcid": "$NVMF_PORT", 00:36:36.686 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:36.686 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:36.686 "hdgst": ${hdgst:-false}, 
00:36:36.686 "ddgst": ${ddgst:-false} 00:36:36.686 }, 00:36:36.686 "method": "bdev_nvme_attach_controller" 00:36:36.686 } 00:36:36.686 EOF 00:36:36.686 )") 00:36:36.686 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:36:36.686 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:36:36.686 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:36:36.686 11:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:36.686 "params": { 00:36:36.686 "name": "Nvme0", 00:36:36.686 "trtype": "tcp", 00:36:36.686 "traddr": "10.0.0.2", 00:36:36.686 "adrfam": "ipv4", 00:36:36.686 "trsvcid": "4420", 00:36:36.686 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:36.686 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:36.686 "hdgst": false, 00:36:36.686 "ddgst": false 00:36:36.686 }, 00:36:36.686 "method": "bdev_nvme_attach_controller" 00:36:36.686 }' 00:36:36.686 [2024-11-02 11:48:36.966012] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:36:36.686 [2024-11-02 11:48:36.966105] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3998198 ] 00:36:36.686 [2024-11-02 11:48:37.038316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:36.686 [2024-11-02 11:48:37.084573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:36.944 Running I/O for 1 seconds... 00:36:38.319 1408.00 IOPS, 88.00 MiB/s 00:36:38.320 Latency(us) 00:36:38.320 [2024-11-02T10:48:38.722Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:38.320 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:36:38.320 Verification LBA range: start 0x0 length 0x400 00:36:38.320 Nvme0n1 : 1.03 1431.02 89.44 0.00 0.00 44052.05 8107.05 37671.06 00:36:38.320 [2024-11-02T10:48:38.722Z] =================================================================================================================== 00:36:38.320 [2024-11-02T10:48:38.722Z] Total : 1431.02 89.44 0.00 0.00 44052.05 8107.05 37671.06 00:36:38.320 11:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:36:38.320 11:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:36:38.320 11:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:36:38.320 11:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:36:38.320 11:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:36:38.320 11:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:38.320 11:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:36:38.320 11:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:38.320 11:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:36:38.320 11:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:38.320 11:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:38.320 rmmod nvme_tcp 00:36:38.320 rmmod nvme_fabrics 00:36:38.320 rmmod nvme_keyring 00:36:38.320 11:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:38.320 11:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:36:38.320 11:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:36:38.320 11:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3997942 ']' 00:36:38.320 11:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3997942 00:36:38.320 11:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 3997942 ']' 00:36:38.320 11:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 3997942 00:36:38.320 11:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@957 -- # uname 00:36:38.320 11:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:38.320 11:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3997942 00:36:38.320 11:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:36:38.320 11:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:36:38.320 11:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3997942' 00:36:38.320 killing process with pid 3997942 00:36:38.320 11:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 3997942 00:36:38.320 11:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 3997942 00:36:38.579 [2024-11-02 11:48:38.791981] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:36:38.579 11:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:38.579 11:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:38.579 11:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:38.579 11:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:36:38.579 11:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:36:38.579 11:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:38.579 11:48:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:36:38.579 11:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:38.579 11:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:38.579 11:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:38.579 11:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:38.579 11:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:40.481 11:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:40.481 11:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:36:40.481 00:36:40.481 real 0m8.455s 00:36:40.481 user 0m16.221s 00:36:40.481 sys 0m3.655s 00:36:40.481 11:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:40.481 11:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:40.481 ************************************ 00:36:40.481 END TEST nvmf_host_management 00:36:40.481 ************************************ 00:36:40.740 11:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:36:40.740 11:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:36:40.740 11:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:40.740 11:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:40.740 ************************************ 00:36:40.740 START TEST nvmf_lvol 00:36:40.740 ************************************ 00:36:40.740 11:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:36:40.740 * Looking for test storage... 
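The teardown traced above walks through killprocess and nvmftestfini: confirm the pid, resolve the process name with ps, refuse to kill a bare sudo wrapper, then kill and wait before unloading nvme-tcp and restoring iptables. A reduced killprocess sketch covering only the Linux path seen in the log; the module, iptables, and namespace cleanup are intentionally left out.

```bash
#!/usr/bin/env bash
# Reduced sketch of the killprocess helper traced above: confirm the pid
# is alive, resolve its command name, refuse to kill a bare sudo wrapper,
# then kill and reap it.
set -euo pipefail

killprocess() {
    local pid=$1 process_name

    [[ -n $pid ]] || return 1
    # Nothing to do if the process already exited.
    kill -0 "$pid" 2> /dev/null || return 0

    # The log runs on Linux, so only the Linux ps form is shown here.
    process_name=$(ps --no-headers -o comm= "$pid")
    if [[ $process_name == sudo ]]; then
        # Killing the sudo wrapper would leave the real app running.
        echo "refusing to kill sudo (pid $pid)" >&2
        return 1
    fi

    echo "killing process with pid $pid"
    kill "$pid"
    # Reap the process if it is a child of this shell, as in the test run.
    wait "$pid" 2> /dev/null || true
}

# Example usage, matching the log: killprocess "$nvmfpid"
```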
00:36:40.740 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:40.740 11:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:36:40.740 11:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:36:40.740 11:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:36:40.740 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:36:40.740 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:40.740 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:40.740 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:40.740 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:36:40.740 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:36:40.740 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:36:40.740 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:36:40.740 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:36:40.740 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:36:40.740 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:36:40.740 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:40.740 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:36:40.740 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:36:40.740 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:40.740 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:40.740 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:36:40.740 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:36:40.740 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:40.740 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:36:40.740 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:36:40.740 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:36:40.740 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:36:40.740 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:40.740 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:36:40.740 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:36:40.740 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:40.740 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:40.740 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:36:40.740 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:40.740 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:36:40.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:40.740 --rc genhtml_branch_coverage=1 00:36:40.740 --rc genhtml_function_coverage=1 00:36:40.740 --rc genhtml_legend=1 00:36:40.740 --rc geninfo_all_blocks=1 00:36:40.740 --rc geninfo_unexecuted_blocks=1 00:36:40.740 00:36:40.740 ' 00:36:40.740 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:36:40.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:40.740 --rc genhtml_branch_coverage=1 00:36:40.740 --rc genhtml_function_coverage=1 00:36:40.740 --rc genhtml_legend=1 00:36:40.740 --rc geninfo_all_blocks=1 00:36:40.740 --rc geninfo_unexecuted_blocks=1 00:36:40.740 00:36:40.740 ' 00:36:40.740 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:36:40.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:40.740 --rc genhtml_branch_coverage=1 00:36:40.740 --rc genhtml_function_coverage=1 00:36:40.740 --rc genhtml_legend=1 00:36:40.740 --rc geninfo_all_blocks=1 00:36:40.740 --rc geninfo_unexecuted_blocks=1 00:36:40.740 00:36:40.740 ' 00:36:40.740 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:40.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:40.740 --rc genhtml_branch_coverage=1 00:36:40.740 --rc genhtml_function_coverage=1 00:36:40.740 --rc genhtml_legend=1 00:36:40.740 --rc geninfo_all_blocks=1 00:36:40.740 --rc geninfo_unexecuted_blocks=1 00:36:40.740 00:36:40.740 ' 00:36:40.740 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:40.740 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:36:40.740 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:40.740 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:40.740 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:40.740 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:40.740 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:40.740 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:40.740 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:40.740 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:40.740 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:40.740 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:40.740 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:40.740 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:40.740 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:40.740 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:40.740 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:40.740 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:40.740 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:40.740 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:36:40.741 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:40.741 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:40.741 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:40.741 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:40.741 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:40.741 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:40.741 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:36:40.741 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:40.741 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:36:40.741 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:40.741 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:40.741 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:40.741 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:40.741 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:40.741 11:48:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:40.741 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:40.741 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:40.741 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:40.741 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:40.741 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:40.741 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:40.741 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:36:40.741 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:36:40.741 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:40.741 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:36:40.741 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:40.741 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:40.741 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:40.741 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:40.741 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:40.741 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:40.741 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:40.741 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:40.741 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:40.741 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:40.741 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:36:40.741 11:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:36:42.685 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:42.685 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:36:42.685 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:42.685 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:42.685 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:42.685 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:36:42.685 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:42.685 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:36:42.685 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:42.685 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:36:42.685 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:36:42.685 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:36:42.685 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:36:42.685 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:36:42.685 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:36:42.685 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:42.685 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:42.685 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:42.685 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:42.685 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:42.685 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:42.685 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:42.685 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:42.685 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:42.685 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:42.685 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:42.685 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:42.685 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:42.685 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:42.685 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:42.685 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:42.685 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:42.685 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:42.685 11:48:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:42.685 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:42.685 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:42.685 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:42.685 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:42.685 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:42.685 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:42.685 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:42.685 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:42.685 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:42.685 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:42.685 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:42.685 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:42.685 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:42.685 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:42.685 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:42.685 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:42.685 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:42.685 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:42.685 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:42.685 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:42.685 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:42.685 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:42.685 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:42.685 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:42.685 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:42.685 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:42.685 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:42.685 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:42.685 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:36:42.685 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:42.685 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:42.685 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:42.685 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:42.685 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:42.685 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:42.685 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:42.685 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:42.685 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:42.685 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:42.685 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:36:42.685 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:42.685 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:42.685 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:42.685 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:42.685 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:42.685 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:42.685 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:42.685 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:42.685 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:42.685 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:42.685 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:42.685 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:42.685 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:42.685 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:42.686 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:42.686 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:42.686 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:42.686 
11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:42.944 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:42.944 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:42.944 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:42.944 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:42.944 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:42.944 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:42.944 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:42.944 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:42.944 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:42.944 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:36:42.944 00:36:42.944 --- 10.0.0.2 ping statistics --- 00:36:42.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:42.944 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:36:42.944 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:42.944 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:42.944 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:36:42.944 00:36:42.944 --- 10.0.0.1 ping statistics --- 00:36:42.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:42.944 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:36:42.944 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:42.944 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:36:42.944 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:42.944 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:42.944 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:42.944 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:42.944 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:42.944 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:42.944 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:42.944 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:36:42.944 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:42.944 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:42.944 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:36:42.944 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=4000342 00:36:42.944 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:36:42.944 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 4000342 00:36:42.944 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 4000342 ']' 00:36:42.944 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:42.944 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:42.944 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:42.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:42.945 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:42.945 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:36:42.945 [2024-11-02 11:48:43.243606] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
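The namespace plumbing recorded above (nvmf_tcp_init in nvmf/common.sh) boils down to a short sequence: move one port of the NIC pair into a private network namespace to act as the target side, address both ends on 10.0.0.0/24, open TCP port 4420 through iptables, verify reachability in both directions, and load nvme-tcp. A minimal sketch of that sequence, condensed from the commands in this run; the interface and namespace names (cvl_0_0, cvl_0_1, cvl_0_0_ns_spdk) and the addresses are specific to this test bed, and the comment tagging added by the ipts helper is omitted.

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP traffic reach the target port
  ping -c 1 10.0.0.2                                                   # root namespace -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target namespace -> initiator
  modprobe nvme-tcp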
00:36:42.945 [2024-11-02 11:48:43.244686] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:36:42.945 [2024-11-02 11:48:43.244751] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:42.945 [2024-11-02 11:48:43.316329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:43.203 [2024-11-02 11:48:43.362723] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:43.203 [2024-11-02 11:48:43.362785] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:43.203 [2024-11-02 11:48:43.362800] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:43.203 [2024-11-02 11:48:43.362812] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:43.203 [2024-11-02 11:48:43.362821] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:43.203 [2024-11-02 11:48:43.364287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:43.203 [2024-11-02 11:48:43.368274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:43.203 [2024-11-02 11:48:43.368284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:43.203 [2024-11-02 11:48:43.452444] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:43.203 [2024-11-02 11:48:43.452623] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:43.203 [2024-11-02 11:48:43.452689] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:43.203 [2024-11-02 11:48:43.452932] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
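With the test bed reachable, nvmfappstart launches the target inside the namespace with a three-core mask and interrupt mode enabled, then waits for the RPC socket before any configuration is issued; that is what produces the EAL, reactor and intr-mode notices above. A condensed sketch of that launch-and-wait step, reusing the paths from this run; the rpc_get_methods polling loop is only an illustrative readiness probe, not necessarily how waitforlisten is implemented.

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  NS=(ip netns exec cvl_0_0_ns_spdk)

  # Start the target on cores 0-2 (-m 0x7) in interrupt mode, inside the target namespace.
  "${NS[@]}" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0x7 &
  nvmfpid=$!

  # Block until the app answers on /var/tmp/spdk.sock (illustrative probe).
  until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
          sleep 0.5
  done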
00:36:43.203 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:43.203 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:36:43.203 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:43.203 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:43.203 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:36:43.203 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:43.203 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:36:43.461 [2024-11-02 11:48:43.764988] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:43.461 11:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:44.028 11:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:36:44.028 11:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:44.286 11:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:36:44.286 11:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:36:44.545 11:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:36:44.802 11:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=8d397a98-32aa-42aa-8e8c-49111d98dc34 00:36:44.802 11:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8d397a98-32aa-42aa-8e8c-49111d98dc34 lvol 20 00:36:45.060 11:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=850ca27a-1910-483e-93b6-2304e7ecfa3d 00:36:45.060 11:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:36:45.318 11:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 850ca27a-1910-483e-93b6-2304e7ecfa3d 00:36:45.576 11:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:45.834 [2024-11-02 11:48:46.085122] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:36:45.834 11:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:46.092 11:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=4000763 00:36:46.092 11:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:36:46.092 11:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:36:47.026 11:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 850ca27a-1910-483e-93b6-2304e7ecfa3d MY_SNAPSHOT 00:36:47.284 11:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=bdedc7d4-6b65-4df9-931f-a545ccd98553 00:36:47.284 11:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 850ca27a-1910-483e-93b6-2304e7ecfa3d 30 00:36:47.850 11:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone bdedc7d4-6b65-4df9-931f-a545ccd98553 MY_CLONE 00:36:48.108 11:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=a2f01616-36aa-4d6c-9cc2-f35364e2fd86 00:36:48.108 11:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate a2f01616-36aa-4d6c-9cc2-f35364e2fd86 00:36:48.674 11:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 4000763 00:36:56.784 Initializing NVMe Controllers 00:36:56.784 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:36:56.784 Controller IO queue size 128, less than required. 00:36:56.784 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:36:56.784 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:36:56.784 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:36:56.784 Initialization complete. Launching workers. 
00:36:56.784 ======================================================== 00:36:56.784 Latency(us) 00:36:56.784 Device Information : IOPS MiB/s Average min max 00:36:56.784 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10727.00 41.90 11935.71 1338.79 83094.13 00:36:56.784 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10484.80 40.96 12210.99 2399.06 82625.47 00:36:56.784 ======================================================== 00:36:56.784 Total : 21211.80 82.86 12071.78 1338.79 83094.13 00:36:56.784 00:36:56.784 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:56.784 11:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 850ca27a-1910-483e-93b6-2304e7ecfa3d 00:36:57.043 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8d397a98-32aa-42aa-8e8c-49111d98dc34 00:36:57.301 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:36:57.301 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:36:57.301 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:36:57.301 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:57.301 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:36:57.301 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:57.301 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:36:57.301 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:57.301 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:57.301 rmmod nvme_tcp 00:36:57.301 rmmod nvme_fabrics 00:36:57.301 rmmod nvme_keyring 00:36:57.301 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:57.301 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:36:57.301 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:36:57.301 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 4000342 ']' 00:36:57.301 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 4000342 00:36:57.301 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 4000342 ']' 00:36:57.301 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 4000342 00:36:57.301 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@957 -- # uname 00:36:57.301 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:57.301 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4000342 00:36:57.301 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:36:57.301 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:36:57.301 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4000342' 00:36:57.301 killing process with pid 4000342 00:36:57.301 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 4000342 00:36:57.301 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 4000342 00:36:57.559 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:57.559 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:57.559 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:57.559 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:36:57.559 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:36:57.559 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:57.559 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:36:57.559 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:57.559 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:57.559 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:57.559 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:57.559 11:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:00.091 11:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:00.091 00:37:00.091 real 0m19.049s 00:37:00.091 user 0m55.804s 00:37:00.091 sys 0m7.802s 00:37:00.091 11:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:00.091 11:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:00.091 ************************************ 00:37:00.091 END TEST nvmf_lvol 00:37:00.091 ************************************ 00:37:00.091 11:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:37:00.091 11:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:37:00.091 11:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:37:00.091 11:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:00.091 ************************************ 00:37:00.091 START TEST nvmf_lvs_grow 00:37:00.091 
************************************ 00:37:00.091 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:37:00.091 * Looking for test storage... 00:37:00.091 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:00.091 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:37:00.091 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:37:00.091 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:37:00.091 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:37:00.091 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:00.091 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:00.091 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:00.091 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:37:00.091 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:37:00.091 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:37:00.091 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:37:00.091 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:37:00.091 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:37:00.091 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:37:00.091 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:00.091 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:37:00.091 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:37:00.091 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:00.091 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:00.091 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:37:00.091 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:37:00.091 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:00.091 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:37:00.091 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:37:00.091 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:37:00.091 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:37:00.091 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:00.091 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:37:00.091 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:37:00.091 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:00.091 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:00.091 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:37:00.091 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:00.091 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:37:00.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:00.091 --rc genhtml_branch_coverage=1 00:37:00.091 --rc genhtml_function_coverage=1 00:37:00.091 --rc genhtml_legend=1 00:37:00.091 --rc geninfo_all_blocks=1 00:37:00.091 --rc geninfo_unexecuted_blocks=1 00:37:00.091 00:37:00.091 ' 00:37:00.091 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:37:00.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:00.091 --rc genhtml_branch_coverage=1 00:37:00.091 --rc genhtml_function_coverage=1 00:37:00.091 --rc genhtml_legend=1 00:37:00.091 --rc geninfo_all_blocks=1 00:37:00.091 --rc geninfo_unexecuted_blocks=1 00:37:00.091 00:37:00.091 ' 00:37:00.091 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:37:00.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:00.091 --rc genhtml_branch_coverage=1 00:37:00.091 --rc genhtml_function_coverage=1 00:37:00.091 --rc genhtml_legend=1 00:37:00.091 --rc geninfo_all_blocks=1 00:37:00.091 --rc geninfo_unexecuted_blocks=1 00:37:00.091 00:37:00.091 ' 00:37:00.091 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:37:00.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:00.091 --rc genhtml_branch_coverage=1 00:37:00.091 --rc genhtml_function_coverage=1 00:37:00.091 --rc genhtml_legend=1 00:37:00.091 --rc geninfo_all_blocks=1 00:37:00.091 --rc geninfo_unexecuted_blocks=1 00:37:00.091 00:37:00.091 ' 00:37:00.091 11:49:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:00.091 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:37:00.091 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:00.091 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:00.091 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:00.091 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:00.091 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:00.091 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:00.091 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:00.091 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:00.091 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:00.091 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:00.091 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:00.091 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:00.091 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:00.091 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:00.092 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:00.092 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:00.092 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:00.092 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:37:00.092 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:00.092 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:00.092 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:00.092 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:00.092 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:00.092 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:00.092 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:37:00.092 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:00.092 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:37:00.092 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:00.092 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:00.092 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:00.092 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:00.092 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
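Before nvmf_lvs_grow starts building its own configuration, the RPC sequence that the nvmf_lvol test above just exercised is worth seeing in one place: two 64 MiB malloc bdevs striped into a RAID0, a logical volume store on top of the raid, a 20 MiB lvol exported over NVMe/TCP, a perf workload against it, and then snapshot, resize, clone and inflate while I/O is in flight, followed by teardown. A trimmed sketch of that sequence; the store and volume UUIDs are generated at runtime, so they are captured into shell variables rather than the literal values seen in the log.

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf

  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512                                   # Malloc0
  $rpc bdev_malloc_create 64 512                                   # Malloc1
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)

  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

  # 10 s of 4 KiB random writes on cores 3-4 while the volume is reshaped underneath.
  $perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
          -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
  perf_pid=$!

  snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
  $rpc bdev_lvol_resize "$lvol" 30
  clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
  $rpc bdev_lvol_inflate "$clone"
  wait $perf_pid

  # Teardown mirrors the setup in reverse order.
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  $rpc bdev_lvol_delete "$lvol"
  $rpc bdev_lvol_delete_lvstore -u "$lvs"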
00:37:00.092 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:00.092 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:00.092 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:00.092 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:00.092 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:00.092 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:00.092 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:37:00.092 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:37:00.092 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:00.092 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:00.092 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:00.092 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:00.092 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:00.092 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:00.092 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:00.092 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:00.092 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:00.092 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:00.092 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:37:00.092 11:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:01.995 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:01.995 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:37:01.995 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:01.995 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:01.995 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:01.995 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:01.995 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:01.995 11:49:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:37:01.995 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:01.995 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:37:01.995 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:37:01.995 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:37:01.995 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:37:01.995 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:37:01.995 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:37:01.995 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:01.995 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:01.995 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:01.995 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:01.995 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:01.995 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:01.995 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:01.995 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:01.995 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:01.995 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:01.995 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:01.995 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:01.995 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:01.995 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:01.995 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:01.995 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:01.995 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:01.995 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:01.995 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
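The array setup above collects the supported NIC PCI IDs (Intel E810 0x1592/0x159b, X722 0x37d2, and a list of Mellanox ConnectX devices) so the following loop can match whatever this rig exposes. As a rough sysfs-based equivalent of that discovery step (not the script's own pci_bus_cache mechanism), assuming the same Intel E810 ID that is found on this machine, 0x8086:0x159b:

    #!/usr/bin/env bash
    # Walk PCI functions in sysfs and report E810 ports plus their net device names.
    for pci in /sys/bus/pci/devices/*; do
      ven=$(cat "$pci/vendor"); dev=$(cat "$pci/device")
      if [ "$ven" = "0x8086" ] && [ "$dev" = "0x159b" ]; then
        echo "Found ${pci##*/} ($ven - $dev)"
        ls "$pci/net" 2>/dev/null    # e.g. cvl_0_0 / cvl_0_1 in this run
      fi
    done

The run below finds exactly two such ports (0000:0a:00.0 and 0000:0a:00.1, both bound to the ice driver), which is what lets is_hw flip to yes and the TCP init path run on real hardware.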
00:37:01.995 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:01.995 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:01.995 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:01.995 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:01.995 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:01.995 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:01.995 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:01.995 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:01.995 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:01.995 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:01.995 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:01.995 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:01.995 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:01.995 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:01.995 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:01.995 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:01.995 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:01.995 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:01.995 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:01.995 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:01.995 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:01.995 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:01.995 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:01.995 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:01.995 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:01.995 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:01.995 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:01.995 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:01.995 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:37:01.995 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:01.995 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:01.995 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:01.995 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:01.995 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:01.995 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:01.995 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:01.995 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:01.995 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:01.995 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:01.995 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:37:01.995 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:01.995 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:01.995 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:01.995 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:01.995 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:01.995 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:01.995 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:01.995 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:01.995 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:01.995 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:01.995 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:01.995 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:01.995 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:01.996 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:01.996 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:01.996 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:01.996 11:49:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:01.996 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:01.996 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:01.996 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:01.996 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:01.996 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:01.996 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:01.996 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:01.996 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:01.996 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:01.996 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:01.996 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.425 ms 00:37:01.996 00:37:01.996 --- 10.0.0.2 ping statistics --- 00:37:01.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:01.996 rtt min/avg/max/mdev = 0.425/0.425/0.425/0.000 ms 00:37:01.996 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:01.996 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:01.996 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:37:01.996 00:37:01.996 --- 10.0.0.1 ping statistics --- 00:37:01.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:01.996 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:37:01.996 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:01.996 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:37:01.996 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:01.996 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:01.996 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:01.996 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:01.996 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:01.996 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:01.996 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:01.996 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:37:01.996 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:01.996 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:01.996 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:01.996 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=4004014 00:37:01.996 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:37:01.996 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 4004014 00:37:01.996 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 4004014 ']' 00:37:01.996 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:01.996 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:37:01.996 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:01.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:01.996 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:37:01.996 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:01.996 [2024-11-02 11:49:02.361532] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
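nvmf_tcp_init, traced above, wires the two E810 ports into a point-to-point test path: one port stays in the root namespace as the initiator side (10.0.0.1 on cvl_0_1), the other is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2 on cvl_0_0), an iptables rule opens TCP/4420 (the ipts wrapper just tags it with an SPDK_NVMF comment), and both directions are ping-verified before nvmf_tgt is launched inside the namespace. Condensed from the commands visible in the trace (interface names are this rig's; run as root):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace -> initiator

Because the target application is then started through NVMF_TARGET_NS_CMD (ip netns exec cvl_0_0_ns_spdk ...), its TCP listener on 10.0.0.2:4420 is reachable from the root namespace only over the physical link between the two ports, which is the point of the setup.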
00:37:01.996 [2024-11-02 11:49:02.362606] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:37:01.996 [2024-11-02 11:49:02.362673] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:02.254 [2024-11-02 11:49:02.435490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:02.254 [2024-11-02 11:49:02.480096] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:02.254 [2024-11-02 11:49:02.480165] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:02.254 [2024-11-02 11:49:02.480179] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:02.254 [2024-11-02 11:49:02.480190] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:02.254 [2024-11-02 11:49:02.480214] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:02.254 [2024-11-02 11:49:02.480838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:02.254 [2024-11-02 11:49:02.563216] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:02.254 [2024-11-02 11:49:02.563584] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:02.254 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:37:02.254 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:37:02.254 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:02.254 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:02.254 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:02.254 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:02.254 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:37:02.513 [2024-11-02 11:49:02.877522] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:02.513 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:37:02.513 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:37:02.513 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:37:02.513 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:02.772 ************************************ 00:37:02.772 START TEST lvs_grow_clean 00:37:02.772 ************************************ 00:37:02.772 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # 
lvs_grow 00:37:02.772 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:37:02.772 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:37:02.772 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:37:02.772 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:37:02.772 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:37:02.772 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:37:02.772 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:02.772 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:02.772 11:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:37:03.030 11:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:37:03.030 11:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:37:03.308 11:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=b486e55f-4985-4752-9396-d17f21d959dd 00:37:03.308 11:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b486e55f-4985-4752-9396-d17f21d959dd 00:37:03.308 11:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:37:03.574 11:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:37:03.574 11:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:37:03.574 11:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b486e55f-4985-4752-9396-d17f21d959dd lvol 150 00:37:03.832 11:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=2e6b2f9c-0cc4-461e-90c0-f0e42acb6f3a 00:37:03.832 11:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:03.832 11:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:37:04.090 [2024-11-02 11:49:04.357365] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:37:04.090 [2024-11-02 11:49:04.357464] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:37:04.090 true 00:37:04.090 11:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b486e55f-4985-4752-9396-d17f21d959dd 00:37:04.090 11:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:37:04.348 11:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:37:04.348 11:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:37:04.606 11:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2e6b2f9c-0cc4-461e-90c0-f0e42acb6f3a 00:37:04.864 11:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:05.122 [2024-11-02 11:49:05.457695] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:05.122 11:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:05.688 11:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=4004448 00:37:05.688 11:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:37:05.688 11:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:05.688 11:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 4004448 /var/tmp/bdevperf.sock 00:37:05.688 11:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 4004448 ']' 00:37:05.688 11:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:37:05.688 11:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:37:05.688 11:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:05.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:37:05.688 11:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:37:05.688 11:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:37:05.688 [2024-11-02 11:49:05.836617] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:37:05.688 [2024-11-02 11:49:05.836715] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4004448 ] 00:37:05.688 [2024-11-02 11:49:05.909467] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:05.688 [2024-11-02 11:49:05.964374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:05.947 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:37:05.947 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0 00:37:05.947 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:37:06.206 Nvme0n1 00:37:06.206 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:37:06.464 [ 00:37:06.464 { 00:37:06.464 "name": "Nvme0n1", 00:37:06.464 "aliases": [ 00:37:06.464 "2e6b2f9c-0cc4-461e-90c0-f0e42acb6f3a" 00:37:06.464 ], 00:37:06.464 "product_name": "NVMe disk", 00:37:06.464 "block_size": 4096, 00:37:06.464 "num_blocks": 38912, 00:37:06.464 "uuid": "2e6b2f9c-0cc4-461e-90c0-f0e42acb6f3a", 00:37:06.464 "numa_id": 0, 00:37:06.464 "assigned_rate_limits": { 00:37:06.464 "rw_ios_per_sec": 0, 00:37:06.464 "rw_mbytes_per_sec": 0, 00:37:06.464 "r_mbytes_per_sec": 0, 00:37:06.464 "w_mbytes_per_sec": 0 00:37:06.464 }, 00:37:06.464 "claimed": false, 00:37:06.464 "zoned": false, 00:37:06.464 "supported_io_types": { 00:37:06.464 "read": true, 00:37:06.464 "write": true, 00:37:06.464 "unmap": true, 00:37:06.464 "flush": true, 00:37:06.464 "reset": true, 00:37:06.464 "nvme_admin": true, 00:37:06.464 "nvme_io": true, 00:37:06.464 "nvme_io_md": false, 00:37:06.464 "write_zeroes": true, 00:37:06.464 "zcopy": false, 00:37:06.464 "get_zone_info": false, 00:37:06.464 "zone_management": false, 00:37:06.464 "zone_append": false, 00:37:06.464 "compare": true, 00:37:06.464 "compare_and_write": true, 00:37:06.464 "abort": true, 00:37:06.464 "seek_hole": false, 00:37:06.464 "seek_data": false, 00:37:06.464 "copy": true, 
00:37:06.464 "nvme_iov_md": false 00:37:06.464 }, 00:37:06.464 "memory_domains": [ 00:37:06.464 { 00:37:06.464 "dma_device_id": "system", 00:37:06.464 "dma_device_type": 1 00:37:06.464 } 00:37:06.464 ], 00:37:06.464 "driver_specific": { 00:37:06.464 "nvme": [ 00:37:06.464 { 00:37:06.464 "trid": { 00:37:06.464 "trtype": "TCP", 00:37:06.464 "adrfam": "IPv4", 00:37:06.464 "traddr": "10.0.0.2", 00:37:06.464 "trsvcid": "4420", 00:37:06.464 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:37:06.464 }, 00:37:06.464 "ctrlr_data": { 00:37:06.464 "cntlid": 1, 00:37:06.465 "vendor_id": "0x8086", 00:37:06.465 "model_number": "SPDK bdev Controller", 00:37:06.465 "serial_number": "SPDK0", 00:37:06.465 "firmware_revision": "25.01", 00:37:06.465 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:06.465 "oacs": { 00:37:06.465 "security": 0, 00:37:06.465 "format": 0, 00:37:06.465 "firmware": 0, 00:37:06.465 "ns_manage": 0 00:37:06.465 }, 00:37:06.465 "multi_ctrlr": true, 00:37:06.465 "ana_reporting": false 00:37:06.465 }, 00:37:06.465 "vs": { 00:37:06.465 "nvme_version": "1.3" 00:37:06.465 }, 00:37:06.465 "ns_data": { 00:37:06.465 "id": 1, 00:37:06.465 "can_share": true 00:37:06.465 } 00:37:06.465 } 00:37:06.465 ], 00:37:06.465 "mp_policy": "active_passive" 00:37:06.465 } 00:37:06.465 } 00:37:06.465 ] 00:37:06.465 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=4004585 00:37:06.465 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:37:06.465 11:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:37:06.465 Running I/O for 10 seconds... 
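With networking up, lvs_grow_clean builds the whole device stack over scripts/rpc.py (abbreviated to rpc.py here): a 200 MiB file becomes an AIO bdev, an lvstore with 4 MiB clusters is created on it (49 data clusters, as reported above), a 150 MiB lvol is carved out and exported as a namespace of nqn.2016-06.io.spdk:cnode0 on 10.0.0.2:4420, and bdevperf, which was started with -z and is only kicked off once bdevperf.py issues perform_tests on /var/tmp/bdevperf.sock, attaches to it as Nvme0. The same sequence condensed, with <lvs_uuid> and <lvol_uuid> standing in for the b486e55f-... and 2e6b2f9c-... IDs printed in the trace:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
    rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
           --md-pages-per-cluster-ratio 300 aio_bdev lvs
    rpc.py bdev_lvol_create -u <lvs_uuid> lvol 150
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol_uuid>
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
           -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0

The AIO file was already truncated to 400 MiB and rescanned before the listener came up, so while the 10-second randwrite run below is in flight the test calls bdev_lvol_grow_lvstore and expects total_data_clusters to double from 49 to 99, which it then asserts from bdev_lvol_get_lvstores.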
00:37:07.841 Latency(us) 00:37:07.841 [2024-11-02T10:49:08.243Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:07.841 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:07.841 Nvme0n1 : 1.00 14727.00 57.53 0.00 0.00 0.00 0.00 0.00 00:37:07.841 [2024-11-02T10:49:08.243Z] =================================================================================================================== 00:37:07.841 [2024-11-02T10:49:08.243Z] Total : 14727.00 57.53 0.00 0.00 0.00 0.00 0.00 00:37:07.841 00:37:08.408 11:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b486e55f-4985-4752-9396-d17f21d959dd 00:37:08.666 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:08.666 Nvme0n1 : 2.00 14473.50 56.54 0.00 0.00 0.00 0.00 0.00 00:37:08.666 [2024-11-02T10:49:09.068Z] =================================================================================================================== 00:37:08.666 [2024-11-02T10:49:09.068Z] Total : 14473.50 56.54 0.00 0.00 0.00 0.00 0.00 00:37:08.666 00:37:08.666 true 00:37:08.666 11:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b486e55f-4985-4752-9396-d17f21d959dd 00:37:08.666 11:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:37:09.232 11:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:37:09.232 11:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:37:09.232 11:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 4004585 00:37:09.490 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:09.490 Nvme0n1 : 3.00 14386.67 56.20 0.00 0.00 0.00 0.00 0.00 00:37:09.490 [2024-11-02T10:49:09.892Z] =================================================================================================================== 00:37:09.490 [2024-11-02T10:49:09.892Z] Total : 14386.67 56.20 0.00 0.00 0.00 0.00 0.00 00:37:09.490 00:37:10.865 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:10.865 Nvme0n1 : 4.00 14356.75 56.08 0.00 0.00 0.00 0.00 0.00 00:37:10.865 [2024-11-02T10:49:11.267Z] =================================================================================================================== 00:37:10.865 [2024-11-02T10:49:11.267Z] Total : 14356.75 56.08 0.00 0.00 0.00 0.00 0.00 00:37:10.865 00:37:11.799 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:11.799 Nvme0n1 : 5.00 14388.80 56.21 0.00 0.00 0.00 0.00 0.00 00:37:11.799 [2024-11-02T10:49:12.201Z] =================================================================================================================== 00:37:11.799 [2024-11-02T10:49:12.201Z] Total : 14388.80 56.21 0.00 0.00 0.00 0.00 0.00 00:37:11.799 00:37:12.734 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:12.734 Nvme0n1 : 6.00 14390.00 56.21 0.00 0.00 0.00 0.00 0.00 00:37:12.734 [2024-11-02T10:49:13.136Z] 
=================================================================================================================== 00:37:12.734 [2024-11-02T10:49:13.136Z] Total : 14390.00 56.21 0.00 0.00 0.00 0.00 0.00 00:37:12.734 00:37:13.669 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:13.669 Nvme0n1 : 7.00 14535.57 56.78 0.00 0.00 0.00 0.00 0.00 00:37:13.669 [2024-11-02T10:49:14.071Z] =================================================================================================================== 00:37:13.669 [2024-11-02T10:49:14.071Z] Total : 14535.57 56.78 0.00 0.00 0.00 0.00 0.00 00:37:13.669 00:37:14.604 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:14.604 Nvme0n1 : 8.00 14557.25 56.86 0.00 0.00 0.00 0.00 0.00 00:37:14.604 [2024-11-02T10:49:15.006Z] =================================================================================================================== 00:37:14.604 [2024-11-02T10:49:15.006Z] Total : 14557.25 56.86 0.00 0.00 0.00 0.00 0.00 00:37:14.604 00:37:15.538 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:15.538 Nvme0n1 : 9.00 14559.67 56.87 0.00 0.00 0.00 0.00 0.00 00:37:15.538 [2024-11-02T10:49:15.940Z] =================================================================================================================== 00:37:15.538 [2024-11-02T10:49:15.940Z] Total : 14559.67 56.87 0.00 0.00 0.00 0.00 0.00 00:37:15.538 00:37:16.915 00:37:16.915 Latency(us) 00:37:16.915 [2024-11-02T10:49:17.317Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:16.915 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:16.915 Nvme0n1 : 10.00 14571.04 56.92 0.00 0.00 8779.79 3155.44 17767.54 00:37:16.915 [2024-11-02T10:49:17.317Z] =================================================================================================================== 00:37:16.915 [2024-11-02T10:49:17.317Z] Total : 14571.04 56.92 0.00 0.00 8779.79 3155.44 17767.54 00:37:16.915 { 00:37:16.915 "results": [ 00:37:16.915 { 00:37:16.915 "job": "Nvme0n1", 00:37:16.915 "core_mask": "0x2", 00:37:16.915 "workload": "randwrite", 00:37:16.915 "status": "finished", 00:37:16.915 "queue_depth": 128, 00:37:16.915 "io_size": 4096, 00:37:16.915 "runtime": 10.002101, 00:37:16.915 "iops": 14571.038624784933, 00:37:16.915 "mibps": 56.918119628066144, 00:37:16.915 "io_failed": 0, 00:37:16.915 "io_timeout": 0, 00:37:16.915 "avg_latency_us": 8779.78709144863, 00:37:16.915 "min_latency_us": 3155.437037037037, 00:37:16.915 "max_latency_us": 17767.53777777778 00:37:16.915 } 00:37:16.915 ], 00:37:16.915 "core_count": 1 00:37:16.915 } 00:37:16.915 11:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 4004448 00:37:16.915 11:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 4004448 ']' 00:37:16.915 11:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 4004448 00:37:16.915 11:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname 00:37:16.915 11:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:37:16.915 11:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps 
--no-headers -o comm= 4004448 00:37:16.915 11:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:37:16.915 11:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:37:16.915 11:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4004448' 00:37:16.915 killing process with pid 4004448 00:37:16.915 11:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 4004448 00:37:16.915 Received shutdown signal, test time was about 10.000000 seconds 00:37:16.915 00:37:16.915 Latency(us) 00:37:16.915 [2024-11-02T10:49:17.317Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:16.915 [2024-11-02T10:49:17.317Z] =================================================================================================================== 00:37:16.915 [2024-11-02T10:49:17.317Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:16.915 11:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 4004448 00:37:16.915 11:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:17.174 11:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:17.432 11:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b486e55f-4985-4752-9396-d17f21d959dd 00:37:17.432 11:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:37:17.690 11:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:37:17.690 11:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:37:17.690 11:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:37:17.948 [2024-11-02 11:49:18.281421] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:37:17.948 11:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b486e55f-4985-4752-9396-d17f21d959dd 00:37:17.948 11:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:37:17.948 11:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b486e55f-4985-4752-9396-d17f21d959dd 00:37:17.948 
11:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:17.948 11:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:17.948 11:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:17.948 11:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:17.948 11:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:17.948 11:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:17.948 11:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:17.948 11:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:37:17.948 11:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b486e55f-4985-4752-9396-d17f21d959dd 00:37:18.206 request: 00:37:18.206 { 00:37:18.206 "uuid": "b486e55f-4985-4752-9396-d17f21d959dd", 00:37:18.206 "method": "bdev_lvol_get_lvstores", 00:37:18.206 "req_id": 1 00:37:18.206 } 00:37:18.206 Got JSON-RPC error response 00:37:18.206 response: 00:37:18.206 { 00:37:18.206 "code": -19, 00:37:18.206 "message": "No such device" 00:37:18.206 } 00:37:18.206 11:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:37:18.206 11:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:18.206 11:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:18.206 11:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:18.206 11:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:37:18.772 aio_bdev 00:37:18.772 11:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 2e6b2f9c-0cc4-461e-90c0-f0e42acb6f3a 00:37:18.772 11:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=2e6b2f9c-0cc4-461e-90c0-f0e42acb6f3a 00:37:18.772 11:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:37:18.772 11:49:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:37:18.772 11:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:37:18.772 11:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:37:18.772 11:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:37:19.029 11:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 2e6b2f9c-0cc4-461e-90c0-f0e42acb6f3a -t 2000 00:37:19.287 [ 00:37:19.287 { 00:37:19.287 "name": "2e6b2f9c-0cc4-461e-90c0-f0e42acb6f3a", 00:37:19.287 "aliases": [ 00:37:19.287 "lvs/lvol" 00:37:19.287 ], 00:37:19.287 "product_name": "Logical Volume", 00:37:19.287 "block_size": 4096, 00:37:19.287 "num_blocks": 38912, 00:37:19.287 "uuid": "2e6b2f9c-0cc4-461e-90c0-f0e42acb6f3a", 00:37:19.287 "assigned_rate_limits": { 00:37:19.287 "rw_ios_per_sec": 0, 00:37:19.287 "rw_mbytes_per_sec": 0, 00:37:19.287 "r_mbytes_per_sec": 0, 00:37:19.287 "w_mbytes_per_sec": 0 00:37:19.287 }, 00:37:19.287 "claimed": false, 00:37:19.287 "zoned": false, 00:37:19.287 "supported_io_types": { 00:37:19.287 "read": true, 00:37:19.287 "write": true, 00:37:19.287 "unmap": true, 00:37:19.287 "flush": false, 00:37:19.287 "reset": true, 00:37:19.287 "nvme_admin": false, 00:37:19.287 "nvme_io": false, 00:37:19.287 "nvme_io_md": false, 00:37:19.287 "write_zeroes": true, 00:37:19.287 "zcopy": false, 00:37:19.287 "get_zone_info": false, 00:37:19.287 "zone_management": false, 00:37:19.287 "zone_append": false, 00:37:19.287 "compare": false, 00:37:19.287 "compare_and_write": false, 00:37:19.287 "abort": false, 00:37:19.287 "seek_hole": true, 00:37:19.287 "seek_data": true, 00:37:19.287 "copy": false, 00:37:19.287 "nvme_iov_md": false 00:37:19.287 }, 00:37:19.287 "driver_specific": { 00:37:19.287 "lvol": { 00:37:19.287 "lvol_store_uuid": "b486e55f-4985-4752-9396-d17f21d959dd", 00:37:19.287 "base_bdev": "aio_bdev", 00:37:19.287 "thin_provision": false, 00:37:19.287 "num_allocated_clusters": 38, 00:37:19.287 "snapshot": false, 00:37:19.287 "clone": false, 00:37:19.287 "esnap_clone": false 00:37:19.287 } 00:37:19.287 } 00:37:19.287 } 00:37:19.287 ] 00:37:19.287 11:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:37:19.287 11:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b486e55f-4985-4752-9396-d17f21d959dd 00:37:19.288 11:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:37:19.546 11:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:37:19.546 11:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b486e55f-4985-4752-9396-d17f21d959dd 00:37:19.546 11:49:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:37:19.804 11:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:37:19.804 11:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 2e6b2f9c-0cc4-461e-90c0-f0e42acb6f3a 00:37:20.063 11:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b486e55f-4985-4752-9396-d17f21d959dd 00:37:20.321 11:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:37:20.580 11:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:20.580 00:37:20.580 real 0m18.034s 00:37:20.580 user 0m17.653s 00:37:20.580 sys 0m1.874s 00:37:20.580 11:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:20.580 11:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:37:20.580 ************************************ 00:37:20.580 END TEST lvs_grow_clean 00:37:20.580 ************************************ 00:37:20.580 11:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:37:20.580 11:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:37:20.580 11:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:37:20.580 11:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:20.838 ************************************ 00:37:20.838 START TEST lvs_grow_dirty 00:37:20.838 ************************************ 00:37:20.838 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:37:20.838 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:37:20.838 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:37:20.838 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:37:20.838 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:37:20.838 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:37:20.838 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:37:20.838 11:49:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:20.838 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:20.838 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:37:21.097 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:37:21.097 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:37:21.356 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=b6e7cef5-357c-454d-bf23-c1b2577e3cc4 00:37:21.356 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b6e7cef5-357c-454d-bf23-c1b2577e3cc4 00:37:21.356 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:37:21.613 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:37:21.613 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:37:21.613 11:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b6e7cef5-357c-454d-bf23-c1b2577e3cc4 lvol 150 00:37:21.871 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=1a668f0c-58ef-4d13-8a25-ecf44a7a0214 00:37:21.871 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:21.871 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:37:22.130 [2024-11-02 11:49:22.385365] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:37:22.130 [2024-11-02 11:49:22.385474] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:37:22.130 true 00:37:22.130 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b6e7cef5-357c-454d-bf23-c1b2577e3cc4 00:37:22.130 11:49:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:37:22.388 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:37:22.388 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:37:22.646 11:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1a668f0c-58ef-4d13-8a25-ecf44a7a0214 00:37:22.904 11:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:23.161 [2024-11-02 11:49:23.489688] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:23.161 11:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:23.419 11:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=4006606 00:37:23.419 11:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:37:23.419 11:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:23.419 11:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 4006606 /var/tmp/bdevperf.sock 00:37:23.419 11:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 4006606 ']' 00:37:23.419 11:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:37:23.419 11:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:37:23.419 11:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:23.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:37:23.419 11:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:37:23.419 11:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:37:23.678 [2024-11-02 11:49:23.837178] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 
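The trace above compresses the whole lvs_grow setup into a handful of RPCs: create a file-backed AIO bdev, build a logical volume store on it with 4 MiB clusters and a generous metadata-page ratio so it can be grown later, carve out a 150 MiB lvol, enlarge the backing file and rescan the AIO bdev, then export the lvol over NVMe/TCP for bdevperf to exercise. A minimal sketch of that sequence, assuming a local SPDK target with rpc.py on PATH, /tmp/aio_bdev as a stand-in backing file, and $lvs/$lvol capturing the UUIDs the create calls print (the literal paths and UUIDs in this log differ):

  truncate -s 200M /tmp/aio_bdev                                 # file that backs the lvstore
  rpc.py bdev_aio_create /tmp/aio_bdev aio_bdev 4096             # expose it as an AIO bdev
  lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)           # reserve md pages for later growth
  lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 150)             # 150 MiB logical volume
  truncate -s 400M /tmp/aio_bdev                                 # enlarge the backing file
  rpc.py bdev_aio_rescan aio_bdev                                # AIO bdev picks up the new size
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

With the namespace exported, bdevperf attaches over TCP (bdev_nvme_attach_controller against /var/tmp/bdevperf.sock) and bdev_lvol_grow_lvstore is issued mid-run, which is why total_data_clusters later jumps from 49 to 99 without interrupting I/O.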
00:37:23.678 [2024-11-02 11:49:23.837306] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4006606 ] 00:37:23.678 [2024-11-02 11:49:23.904408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:23.678 [2024-11-02 11:49:23.955286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:23.678 11:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:37:23.678 11:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:37:23.678 11:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:37:24.244 Nvme0n1 00:37:24.244 11:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:37:24.502 [ 00:37:24.502 { 00:37:24.502 "name": "Nvme0n1", 00:37:24.502 "aliases": [ 00:37:24.502 "1a668f0c-58ef-4d13-8a25-ecf44a7a0214" 00:37:24.502 ], 00:37:24.502 "product_name": "NVMe disk", 00:37:24.502 "block_size": 4096, 00:37:24.502 "num_blocks": 38912, 00:37:24.502 "uuid": "1a668f0c-58ef-4d13-8a25-ecf44a7a0214", 00:37:24.502 "numa_id": 0, 00:37:24.502 "assigned_rate_limits": { 00:37:24.502 "rw_ios_per_sec": 0, 00:37:24.502 "rw_mbytes_per_sec": 0, 00:37:24.502 "r_mbytes_per_sec": 0, 00:37:24.502 "w_mbytes_per_sec": 0 00:37:24.502 }, 00:37:24.502 "claimed": false, 00:37:24.502 "zoned": false, 00:37:24.502 "supported_io_types": { 00:37:24.502 "read": true, 00:37:24.502 "write": true, 00:37:24.502 "unmap": true, 00:37:24.502 "flush": true, 00:37:24.502 "reset": true, 00:37:24.502 "nvme_admin": true, 00:37:24.502 "nvme_io": true, 00:37:24.502 "nvme_io_md": false, 00:37:24.502 "write_zeroes": true, 00:37:24.502 "zcopy": false, 00:37:24.502 "get_zone_info": false, 00:37:24.503 "zone_management": false, 00:37:24.503 "zone_append": false, 00:37:24.503 "compare": true, 00:37:24.503 "compare_and_write": true, 00:37:24.503 "abort": true, 00:37:24.503 "seek_hole": false, 00:37:24.503 "seek_data": false, 00:37:24.503 "copy": true, 00:37:24.503 "nvme_iov_md": false 00:37:24.503 }, 00:37:24.503 "memory_domains": [ 00:37:24.503 { 00:37:24.503 "dma_device_id": "system", 00:37:24.503 "dma_device_type": 1 00:37:24.503 } 00:37:24.503 ], 00:37:24.503 "driver_specific": { 00:37:24.503 "nvme": [ 00:37:24.503 { 00:37:24.503 "trid": { 00:37:24.503 "trtype": "TCP", 00:37:24.503 "adrfam": "IPv4", 00:37:24.503 "traddr": "10.0.0.2", 00:37:24.503 "trsvcid": "4420", 00:37:24.503 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:37:24.503 }, 00:37:24.503 "ctrlr_data": { 00:37:24.503 "cntlid": 1, 00:37:24.503 "vendor_id": "0x8086", 00:37:24.503 "model_number": "SPDK bdev Controller", 00:37:24.503 "serial_number": "SPDK0", 00:37:24.503 "firmware_revision": "25.01", 00:37:24.503 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:24.503 "oacs": { 00:37:24.503 "security": 0, 00:37:24.503 "format": 0, 00:37:24.503 "firmware": 0, 00:37:24.503 "ns_manage": 0 00:37:24.503 }, 
00:37:24.503 "multi_ctrlr": true, 00:37:24.503 "ana_reporting": false 00:37:24.503 }, 00:37:24.503 "vs": { 00:37:24.503 "nvme_version": "1.3" 00:37:24.503 }, 00:37:24.503 "ns_data": { 00:37:24.503 "id": 1, 00:37:24.503 "can_share": true 00:37:24.503 } 00:37:24.503 } 00:37:24.503 ], 00:37:24.503 "mp_policy": "active_passive" 00:37:24.503 } 00:37:24.503 } 00:37:24.503 ] 00:37:24.503 11:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=4006743 00:37:24.503 11:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:37:24.503 11:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:37:24.761 Running I/O for 10 seconds... 00:37:25.696 Latency(us) 00:37:25.696 [2024-11-02T10:49:26.098Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:25.696 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:25.696 Nvme0n1 : 1.00 13823.00 54.00 0.00 0.00 0.00 0.00 0.00 00:37:25.696 [2024-11-02T10:49:26.098Z] =================================================================================================================== 00:37:25.696 [2024-11-02T10:49:26.098Z] Total : 13823.00 54.00 0.00 0.00 0.00 0.00 0.00 00:37:25.696 00:37:26.630 11:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b6e7cef5-357c-454d-bf23-c1b2577e3cc4 00:37:26.630 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:26.630 Nvme0n1 : 2.00 14266.00 55.73 0.00 0.00 0.00 0.00 0.00 00:37:26.630 [2024-11-02T10:49:27.032Z] =================================================================================================================== 00:37:26.630 [2024-11-02T10:49:27.032Z] Total : 14266.00 55.73 0.00 0.00 0.00 0.00 0.00 00:37:26.630 00:37:26.888 true 00:37:26.888 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b6e7cef5-357c-454d-bf23-c1b2577e3cc4 00:37:26.888 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:37:27.146 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:37:27.146 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:37:27.146 11:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 4006743 00:37:27.712 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:27.713 Nvme0n1 : 3.00 14456.00 56.47 0.00 0.00 0.00 0.00 0.00 00:37:27.713 [2024-11-02T10:49:28.115Z] =================================================================================================================== 00:37:27.713 [2024-11-02T10:49:28.115Z] Total : 14456.00 56.47 0.00 0.00 0.00 0.00 0.00 00:37:27.713 00:37:28.677 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:37:28.678 Nvme0n1 : 4.00 14407.75 56.28 0.00 0.00 0.00 0.00 0.00 00:37:28.678 [2024-11-02T10:49:29.080Z] =================================================================================================================== 00:37:28.678 [2024-11-02T10:49:29.080Z] Total : 14407.75 56.28 0.00 0.00 0.00 0.00 0.00 00:37:28.678 00:37:29.637 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:29.637 Nvme0n1 : 5.00 14429.00 56.36 0.00 0.00 0.00 0.00 0.00 00:37:29.637 [2024-11-02T10:49:30.039Z] =================================================================================================================== 00:37:29.637 [2024-11-02T10:49:30.039Z] Total : 14429.00 56.36 0.00 0.00 0.00 0.00 0.00 00:37:29.637 00:37:30.570 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:30.570 Nvme0n1 : 6.00 14412.67 56.30 0.00 0.00 0.00 0.00 0.00 00:37:30.570 [2024-11-02T10:49:30.972Z] =================================================================================================================== 00:37:30.570 [2024-11-02T10:49:30.972Z] Total : 14412.67 56.30 0.00 0.00 0.00 0.00 0.00 00:37:30.570 00:37:31.945 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:31.945 Nvme0n1 : 7.00 14428.43 56.36 0.00 0.00 0.00 0.00 0.00 00:37:31.945 [2024-11-02T10:49:32.347Z] =================================================================================================================== 00:37:31.945 [2024-11-02T10:49:32.347Z] Total : 14428.43 56.36 0.00 0.00 0.00 0.00 0.00 00:37:31.945 00:37:32.880 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:32.880 Nvme0n1 : 8.00 14440.12 56.41 0.00 0.00 0.00 0.00 0.00 00:37:32.880 [2024-11-02T10:49:33.282Z] =================================================================================================================== 00:37:32.880 [2024-11-02T10:49:33.282Z] Total : 14440.12 56.41 0.00 0.00 0.00 0.00 0.00 00:37:32.880 00:37:33.816 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:33.817 Nvme0n1 : 9.00 14540.00 56.80 0.00 0.00 0.00 0.00 0.00 00:37:33.817 [2024-11-02T10:49:34.219Z] =================================================================================================================== 00:37:33.817 [2024-11-02T10:49:34.219Z] Total : 14540.00 56.80 0.00 0.00 0.00 0.00 0.00 00:37:33.817 00:37:34.752 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:34.752 Nvme0n1 : 10.00 14582.70 56.96 0.00 0.00 0.00 0.00 0.00 00:37:34.752 [2024-11-02T10:49:35.154Z] =================================================================================================================== 00:37:34.752 [2024-11-02T10:49:35.154Z] Total : 14582.70 56.96 0.00 0.00 0.00 0.00 0.00 00:37:34.752 00:37:34.752 00:37:34.752 Latency(us) 00:37:34.752 [2024-11-02T10:49:35.154Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:34.752 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:34.752 Nvme0n1 : 10.01 14578.37 56.95 0.00 0.00 8773.82 5121.52 19126.80 00:37:34.752 [2024-11-02T10:49:35.154Z] =================================================================================================================== 00:37:34.752 [2024-11-02T10:49:35.154Z] Total : 14578.37 56.95 0.00 0.00 8773.82 5121.52 19126.80 00:37:34.752 { 00:37:34.752 "results": [ 00:37:34.752 { 00:37:34.752 "job": "Nvme0n1", 00:37:34.752 "core_mask": "0x2", 00:37:34.752 "workload": "randwrite", 
00:37:34.752 "status": "finished", 00:37:34.752 "queue_depth": 128, 00:37:34.752 "io_size": 4096, 00:37:34.752 "runtime": 10.007358, 00:37:34.752 "iops": 14578.373232975177, 00:37:34.752 "mibps": 56.94677044130928, 00:37:34.752 "io_failed": 0, 00:37:34.752 "io_timeout": 0, 00:37:34.752 "avg_latency_us": 8773.816162396228, 00:37:34.752 "min_latency_us": 5121.517037037037, 00:37:34.752 "max_latency_us": 19126.802962962964 00:37:34.752 } 00:37:34.752 ], 00:37:34.752 "core_count": 1 00:37:34.752 } 00:37:34.752 11:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 4006606 00:37:34.752 11:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 4006606 ']' 00:37:34.752 11:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 4006606 00:37:34.752 11:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname 00:37:34.752 11:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:37:34.752 11:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4006606 00:37:34.752 11:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:37:34.752 11:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:37:34.752 11:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4006606' 00:37:34.752 killing process with pid 4006606 00:37:34.752 11:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 4006606 00:37:34.752 Received shutdown signal, test time was about 10.000000 seconds 00:37:34.752 00:37:34.752 Latency(us) 00:37:34.752 [2024-11-02T10:49:35.154Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:34.752 [2024-11-02T10:49:35.154Z] =================================================================================================================== 00:37:34.752 [2024-11-02T10:49:35.154Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:34.752 11:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 4006606 00:37:35.011 11:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:35.269 11:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:35.527 11:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b6e7cef5-357c-454d-bf23-c1b2577e3cc4 00:37:35.527 11:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq 
-r '.[0].free_clusters' 00:37:35.786 11:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:37:35.786 11:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:37:35.786 11:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 4004014 00:37:35.786 11:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 4004014 00:37:35.786 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 4004014 Killed "${NVMF_APP[@]}" "$@" 00:37:35.786 11:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:37:35.787 11:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:37:35.787 11:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:35.787 11:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:35.787 11:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:37:35.787 11:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=4007956 00:37:35.787 11:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:37:35.787 11:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 4007956 00:37:35.787 11:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 4007956 ']' 00:37:35.787 11:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:35.787 11:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:37:35.787 11:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:35.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:35.787 11:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:37:35.787 11:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:37:35.787 [2024-11-02 11:49:36.138526] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:35.787 [2024-11-02 11:49:36.139724] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 
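This is the dirty half of the test: rather than deleting the lvol and lvstore first, the nvmf target that owns them is killed with SIGKILL, so the blobstore metadata on the AIO file is left unflushed and has to be recovered when a fresh target loads it (the "Performing recovery on blobstore" notices just below). A rough sketch of that flow, assuming $nvmfpid holds the old target's pid, the same /tmp/aio_bdev placeholder file as above, and the test harness helper waitforlisten; the netns wrapper and absolute binary paths used in this run are omitted:

  kill -9 "$nvmfpid"                                    # no clean shutdown: lvstore left dirty
  nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &     # start a replacement target
  waitforlisten $! /var/tmp/spdk.sock                   # wait for its RPC socket
  rpc.py bdev_aio_create /tmp/aio_bdev aio_bdev 4096    # re-attach the file; blobstore
                                                        # recovery runs during load
  rpc.py bdev_wait_for_examine                          # let vbdev_lvol claim the lvstore
  rpc.py bdev_get_bdevs -b "$lvol" -t 2000              # the lvol should reappear intact
  rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'   # expected to still be 61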
00:37:35.787 [2024-11-02 11:49:36.139796] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:36.045 [2024-11-02 11:49:36.221759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:36.045 [2024-11-02 11:49:36.269787] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:36.045 [2024-11-02 11:49:36.269848] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:36.045 [2024-11-02 11:49:36.269864] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:36.045 [2024-11-02 11:49:36.269877] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:36.045 [2024-11-02 11:49:36.269888] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:36.045 [2024-11-02 11:49:36.270506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:36.045 [2024-11-02 11:49:36.358735] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:36.045 [2024-11-02 11:49:36.359082] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:36.045 11:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:37:36.045 11:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:37:36.045 11:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:36.045 11:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:36.045 11:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:37:36.045 11:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:36.045 11:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:37:36.304 [2024-11-02 11:49:36.689303] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:37:36.304 [2024-11-02 11:49:36.689457] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:37:36.304 [2024-11-02 11:49:36.689516] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:37:36.562 11:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:37:36.562 11:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 1a668f0c-58ef-4d13-8a25-ecf44a7a0214 00:37:36.562 11:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=1a668f0c-58ef-4d13-8a25-ecf44a7a0214 00:37:36.562 11:49:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:37:36.562 11:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:37:36.562 11:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:37:36.562 11:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:37:36.562 11:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:37:36.820 11:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1a668f0c-58ef-4d13-8a25-ecf44a7a0214 -t 2000 00:37:37.078 [ 00:37:37.078 { 00:37:37.079 "name": "1a668f0c-58ef-4d13-8a25-ecf44a7a0214", 00:37:37.079 "aliases": [ 00:37:37.079 "lvs/lvol" 00:37:37.079 ], 00:37:37.079 "product_name": "Logical Volume", 00:37:37.079 "block_size": 4096, 00:37:37.079 "num_blocks": 38912, 00:37:37.079 "uuid": "1a668f0c-58ef-4d13-8a25-ecf44a7a0214", 00:37:37.079 "assigned_rate_limits": { 00:37:37.079 "rw_ios_per_sec": 0, 00:37:37.079 "rw_mbytes_per_sec": 0, 00:37:37.079 "r_mbytes_per_sec": 0, 00:37:37.079 "w_mbytes_per_sec": 0 00:37:37.079 }, 00:37:37.079 "claimed": false, 00:37:37.079 "zoned": false, 00:37:37.079 "supported_io_types": { 00:37:37.079 "read": true, 00:37:37.079 "write": true, 00:37:37.079 "unmap": true, 00:37:37.079 "flush": false, 00:37:37.079 "reset": true, 00:37:37.079 "nvme_admin": false, 00:37:37.079 "nvme_io": false, 00:37:37.079 "nvme_io_md": false, 00:37:37.079 "write_zeroes": true, 00:37:37.079 "zcopy": false, 00:37:37.079 "get_zone_info": false, 00:37:37.079 "zone_management": false, 00:37:37.079 "zone_append": false, 00:37:37.079 "compare": false, 00:37:37.079 "compare_and_write": false, 00:37:37.079 "abort": false, 00:37:37.079 "seek_hole": true, 00:37:37.079 "seek_data": true, 00:37:37.079 "copy": false, 00:37:37.079 "nvme_iov_md": false 00:37:37.079 }, 00:37:37.079 "driver_specific": { 00:37:37.079 "lvol": { 00:37:37.079 "lvol_store_uuid": "b6e7cef5-357c-454d-bf23-c1b2577e3cc4", 00:37:37.079 "base_bdev": "aio_bdev", 00:37:37.079 "thin_provision": false, 00:37:37.079 "num_allocated_clusters": 38, 00:37:37.079 "snapshot": false, 00:37:37.079 "clone": false, 00:37:37.079 "esnap_clone": false 00:37:37.079 } 00:37:37.079 } 00:37:37.079 } 00:37:37.079 ] 00:37:37.079 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:37:37.079 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b6e7cef5-357c-454d-bf23-c1b2577e3cc4 00:37:37.079 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:37:37.337 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:37:37.337 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b6e7cef5-357c-454d-bf23-c1b2577e3cc4 00:37:37.337 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:37:37.595 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:37:37.595 11:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:37:37.853 [2024-11-02 11:49:38.059122] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:37:37.853 11:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b6e7cef5-357c-454d-bf23-c1b2577e3cc4 00:37:37.853 11:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:37:37.853 11:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b6e7cef5-357c-454d-bf23-c1b2577e3cc4 00:37:37.853 11:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:37.853 11:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:37.853 11:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:37.853 11:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:37.853 11:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:37.853 11:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:37.853 11:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:37.853 11:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:37:37.853 11:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b6e7cef5-357c-454d-bf23-c1b2577e3cc4 00:37:38.112 request: 00:37:38.112 { 00:37:38.112 "uuid": "b6e7cef5-357c-454d-bf23-c1b2577e3cc4", 00:37:38.112 "method": "bdev_lvol_get_lvstores", 00:37:38.112 "req_id": 1 00:37:38.112 } 00:37:38.112 Got JSON-RPC error response 00:37:38.112 response: 00:37:38.112 { 00:37:38.112 "code": -19, 00:37:38.112 "message": "No such device" 
00:37:38.112 } 00:37:38.112 11:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:37:38.112 11:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:38.112 11:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:38.112 11:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:38.112 11:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:37:38.370 aio_bdev 00:37:38.370 11:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 1a668f0c-58ef-4d13-8a25-ecf44a7a0214 00:37:38.370 11:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=1a668f0c-58ef-4d13-8a25-ecf44a7a0214 00:37:38.370 11:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:37:38.370 11:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:37:38.370 11:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:37:38.370 11:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:37:38.370 11:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:37:38.629 11:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1a668f0c-58ef-4d13-8a25-ecf44a7a0214 -t 2000 00:37:38.887 [ 00:37:38.887 { 00:37:38.887 "name": "1a668f0c-58ef-4d13-8a25-ecf44a7a0214", 00:37:38.887 "aliases": [ 00:37:38.887 "lvs/lvol" 00:37:38.887 ], 00:37:38.887 "product_name": "Logical Volume", 00:37:38.887 "block_size": 4096, 00:37:38.887 "num_blocks": 38912, 00:37:38.887 "uuid": "1a668f0c-58ef-4d13-8a25-ecf44a7a0214", 00:37:38.887 "assigned_rate_limits": { 00:37:38.887 "rw_ios_per_sec": 0, 00:37:38.887 "rw_mbytes_per_sec": 0, 00:37:38.887 "r_mbytes_per_sec": 0, 00:37:38.887 "w_mbytes_per_sec": 0 00:37:38.887 }, 00:37:38.887 "claimed": false, 00:37:38.887 "zoned": false, 00:37:38.887 "supported_io_types": { 00:37:38.887 "read": true, 00:37:38.887 "write": true, 00:37:38.887 "unmap": true, 00:37:38.887 "flush": false, 00:37:38.887 "reset": true, 00:37:38.887 "nvme_admin": false, 00:37:38.887 "nvme_io": false, 00:37:38.887 "nvme_io_md": false, 00:37:38.887 "write_zeroes": true, 00:37:38.887 "zcopy": false, 00:37:38.887 "get_zone_info": false, 00:37:38.887 "zone_management": false, 00:37:38.887 "zone_append": false, 00:37:38.887 "compare": false, 00:37:38.887 "compare_and_write": false, 00:37:38.887 "abort": false, 00:37:38.887 "seek_hole": true, 00:37:38.887 "seek_data": true, 00:37:38.887 "copy": false, 
00:37:38.887 "nvme_iov_md": false 00:37:38.887 }, 00:37:38.887 "driver_specific": { 00:37:38.887 "lvol": { 00:37:38.887 "lvol_store_uuid": "b6e7cef5-357c-454d-bf23-c1b2577e3cc4", 00:37:38.887 "base_bdev": "aio_bdev", 00:37:38.887 "thin_provision": false, 00:37:38.887 "num_allocated_clusters": 38, 00:37:38.887 "snapshot": false, 00:37:38.887 "clone": false, 00:37:38.887 "esnap_clone": false 00:37:38.887 } 00:37:38.887 } 00:37:38.887 } 00:37:38.887 ] 00:37:38.887 11:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:37:38.887 11:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b6e7cef5-357c-454d-bf23-c1b2577e3cc4 00:37:38.887 11:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:37:39.145 11:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:37:39.145 11:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b6e7cef5-357c-454d-bf23-c1b2577e3cc4 00:37:39.145 11:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:37:39.403 11:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:37:39.403 11:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1a668f0c-58ef-4d13-8a25-ecf44a7a0214 00:37:39.970 11:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b6e7cef5-357c-454d-bf23-c1b2577e3cc4 00:37:40.228 11:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:37:40.486 11:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:40.487 00:37:40.487 real 0m19.693s 00:37:40.487 user 0m35.174s 00:37:40.487 sys 0m6.172s 00:37:40.487 11:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:40.487 11:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:37:40.487 ************************************ 00:37:40.487 END TEST lvs_grow_dirty 00:37:40.487 ************************************ 00:37:40.487 11:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:37:40.487 11:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:37:40.487 11:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:37:40.487 
11:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:37:40.487 11:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:37:40.487 11:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:37:40.487 11:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:37:40.487 11:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # for n in $shm_files 00:37:40.487 11:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:37:40.487 nvmf_trace.0 00:37:40.487 11:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:37:40.487 11:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:37:40.487 11:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:40.487 11:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:37:40.487 11:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:40.487 11:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:37:40.487 11:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:40.487 11:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:40.487 rmmod nvme_tcp 00:37:40.487 rmmod nvme_fabrics 00:37:40.487 rmmod nvme_keyring 00:37:40.487 11:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:40.487 11:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:37:40.487 11:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:37:40.487 11:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 4007956 ']' 00:37:40.487 11:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 4007956 00:37:40.487 11:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 4007956 ']' 00:37:40.487 11:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 4007956 00:37:40.487 11:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:37:40.487 11:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:37:40.487 11:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4007956 00:37:40.487 11:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:37:40.487 11:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 
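Teardown then mirrors the setup in reverse and, before the target is stopped, salvages the tracepoint shared-memory file it left in /dev/shm so the run can be analysed offline. A condensed sketch with the same placeholder names as above ($output_dir stands in for this job's output directory):

  rpc.py bdev_lvol_delete "$lvol"
  rpc.py bdev_lvol_delete_lvstore -u "$lvs"
  rpc.py bdev_aio_delete aio_bdev
  rm -f /tmp/aio_bdev
  tar -C /dev/shm/ -czf "$output_dir/nvmf_trace.0_shm.tar.gz" nvmf_trace.0   # keep the trace ring buffer
  modprobe -r nvme-tcp nvme-fabrics nvme-keyring        # unload the kernel initiator modules
  kill "$nvmfpid" && wait "$nvmfpid"                    # stop the nvmf target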
00:37:40.487 11:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4007956' 00:37:40.487 killing process with pid 4007956 00:37:40.487 11:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 4007956 00:37:40.487 11:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 4007956 00:37:40.745 11:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:40.745 11:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:40.745 11:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:40.745 11:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:37:40.745 11:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:37:40.745 11:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:40.745 11:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:37:40.745 11:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:40.745 11:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:40.745 11:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:40.745 11:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:40.745 11:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:43.278 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:43.278 00:37:43.278 real 0m43.097s 00:37:43.278 user 0m54.560s 00:37:43.278 sys 0m9.972s 00:37:43.278 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:43.278 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:43.278 ************************************ 00:37:43.278 END TEST nvmf_lvs_grow 00:37:43.278 ************************************ 00:37:43.278 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:37:43.278 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:37:43.278 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:37:43.278 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:43.278 ************************************ 00:37:43.278 START TEST nvmf_bdev_io_wait 00:37:43.278 ************************************ 00:37:43.278 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 
--interrupt-mode 00:37:43.278 * Looking for test storage... 00:37:43.278 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:43.278 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:37:43.278 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:37:43.278 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:37:43.278 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:37:43.278 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:43.278 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:43.278 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:43.278 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:37:43.278 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:37:43.278 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:37:43.278 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:37:43.278 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:37:43.278 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:37:43.278 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:37:43.278 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:43.278 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:37:43.278 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:37:43.278 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:43.278 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:43.278 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:37:43.278 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:37:43.278 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:43.278 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:37:43.278 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:37:43.278 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:37:43.278 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:37:43.278 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:43.278 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:37:43.278 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:37:43.278 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:43.278 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:43.278 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:37:43.278 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:43.278 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:37:43.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:43.278 --rc genhtml_branch_coverage=1 00:37:43.278 --rc genhtml_function_coverage=1 00:37:43.278 --rc genhtml_legend=1 00:37:43.278 --rc geninfo_all_blocks=1 00:37:43.278 --rc geninfo_unexecuted_blocks=1 00:37:43.278 00:37:43.278 ' 00:37:43.278 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:37:43.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:43.278 --rc genhtml_branch_coverage=1 00:37:43.278 --rc genhtml_function_coverage=1 00:37:43.278 --rc genhtml_legend=1 00:37:43.278 --rc geninfo_all_blocks=1 00:37:43.278 --rc geninfo_unexecuted_blocks=1 00:37:43.278 00:37:43.278 ' 00:37:43.278 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:37:43.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:43.278 --rc genhtml_branch_coverage=1 00:37:43.278 --rc genhtml_function_coverage=1 00:37:43.278 --rc genhtml_legend=1 00:37:43.278 --rc geninfo_all_blocks=1 00:37:43.278 --rc geninfo_unexecuted_blocks=1 00:37:43.278 00:37:43.278 ' 00:37:43.279 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:37:43.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:43.279 --rc genhtml_branch_coverage=1 00:37:43.279 --rc genhtml_function_coverage=1 00:37:43.279 --rc genhtml_legend=1 00:37:43.279 --rc geninfo_all_blocks=1 00:37:43.279 --rc 
geninfo_unexecuted_blocks=1 00:37:43.279 00:37:43.279 ' 00:37:43.279 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:43.279 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:37:43.279 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:43.279 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:43.279 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:43.279 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:43.279 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:43.279 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:43.279 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:43.279 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:43.279 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:43.279 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:43.279 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:43.279 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:43.279 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:43.279 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:43.279 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:43.279 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:43.279 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:43.279 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:37:43.279 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:43.279 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:43.279 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:43.279 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:43.279 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:43.279 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:43.279 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:37:43.279 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:43.279 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:37:43.279 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:43.279 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:43.279 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:43.279 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:43.279 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:37:43.279 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:43.279 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:43.279 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:43.279 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:43.279 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:43.279 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:43.279 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:43.279 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:37:43.279 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:43.279 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:43.279 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:43.279 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:43.279 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:43.279 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:43.279 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:43.279 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:43.279 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:43.279 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:43.279 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:37:43.280 11:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
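The empty e810/x722/mlx arrays above are filled with PCI addresses keyed by vendor:device id, and the per-device loop that follows resolves each address to its kernel netdev through sysfs (here two Intel 0x159b ports bound to the ice driver, cvl_0_0 and cvl_0_1). A simplified sketch of that lookup for the two functions found in this run, assuming the same sysfs layout:

# Sketch: resolve a PCI function to its net interface the way the loop below does.
for pci in 0000:0a:00.0 0000:0a:00.1; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface name
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done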
00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:45.183 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:45.183 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:45.183 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:45.183 
11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:45.183 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:45.183 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:45.184 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:45.184 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:45.184 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:45.184 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:45.184 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:45.184 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:45.184 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:45.184 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:45.184 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:45.184 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:37:45.184 00:37:45.184 --- 10.0.0.2 ping statistics --- 00:37:45.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:45.184 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:37:45.184 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:45.184 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:45.184 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:37:45.184 00:37:45.184 --- 10.0.0.1 ping statistics --- 00:37:45.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:45.184 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:37:45.184 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:45.184 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:37:45.184 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:45.184 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:45.184 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:45.184 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:45.184 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:45.184 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:45.184 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:45.184 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:37:45.184 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:45.184 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:45.184 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:45.184 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=4010578 00:37:45.184 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:37:45.184 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 4010578 00:37:45.184 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 4010578 ']' 00:37:45.184 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:45.184 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:37:45.184 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:45.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
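The stretch from the address flushes through the two pings above builds a small two-endpoint topology on a single host: one port of the NIC stays in the default namespace as the initiator side (cvl_0_1, 10.0.0.1), the other is moved into a fresh namespace and carries the target address (cvl_0_0, 10.0.0.2), an iptables rule opens TCP port 4420, and nvmf_tgt is then started inside that namespace in interrupt mode. A condensed sketch of the same setup, assuming the interface names and addresses used in this run, run as root:

# Sketch of the namespace topology created above (the harness tags the iptables
# rule with an SPDK_NVMF comment via its ipts helper; plain iptables is used here).
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator address, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                     # reachability check in both directions
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# Target launched inside the namespace, holding at --wait-for-rpc until configured:
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc &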
00:37:45.184 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:37:45.184 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:45.184 [2024-11-02 11:49:45.487131] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:45.184 [2024-11-02 11:49:45.488222] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:37:45.184 [2024-11-02 11:49:45.488297] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:45.184 [2024-11-02 11:49:45.561388] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:45.443 [2024-11-02 11:49:45.609623] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:45.443 [2024-11-02 11:49:45.609674] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:45.443 [2024-11-02 11:49:45.609702] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:45.443 [2024-11-02 11:49:45.609713] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:45.443 [2024-11-02 11:49:45.609722] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:45.443 [2024-11-02 11:49:45.611226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:45.443 [2024-11-02 11:49:45.611292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:45.443 [2024-11-02 11:49:45.611359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:45.443 [2024-11-02 11:49:45.611362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:45.443 [2024-11-02 11:49:45.611852] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:37:45.443 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:37:45.443 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:37:45.443 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:45.443 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:45.443 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:45.443 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:45.443 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:37:45.443 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:45.443 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:45.443 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:45.443 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:37:45.443 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:45.443 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:45.443 [2024-11-02 11:49:45.813396] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:45.443 [2024-11-02 11:49:45.813609] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:45.443 [2024-11-02 11:49:45.814508] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:37:45.443 [2024-11-02 11:49:45.815337] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
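Once the reactors are up, the test provisions the target over /var/tmp/spdk.sock; rpc_cmd is the harness's wrapper around scripts/rpc.py, so the calls traced just above and below correspond roughly to the sequence sketched here (arguments are the ones visible in the trace; the client path assumes the SPDK repo root):

# Sketch: the same provisioning issued directly with scripts/rpc.py.
./scripts/rpc.py bdev_set_options -p 5 -c 1        # deliberately tiny bdev_io pool/cache, so I/O must wait for buffers
./scripts/rpc.py framework_start_init              # leave the --wait-for-rpc holding state
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420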
00:37:45.443 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:45.443 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:45.443 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:45.443 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:45.443 [2024-11-02 11:49:45.820088] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:45.443 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:45.443 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:45.443 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:45.443 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:45.702 Malloc0 00:37:45.702 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:45.702 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:45.702 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:45.702 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:45.702 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:45.702 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:45.703 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:45.703 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:45.703 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:45.703 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:45.703 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:45.703 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:45.703 [2024-11-02 11:49:45.872210] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:45.703 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:45.703 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=4010609 00:37:45.703 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:37:45.703 11:49:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:37:45.703 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:37:45.703 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:37:45.703 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:45.703 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:45.703 { 00:37:45.703 "params": { 00:37:45.703 "name": "Nvme$subsystem", 00:37:45.703 "trtype": "$TEST_TRANSPORT", 00:37:45.703 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:45.703 "adrfam": "ipv4", 00:37:45.703 "trsvcid": "$NVMF_PORT", 00:37:45.703 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:45.703 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:45.703 "hdgst": ${hdgst:-false}, 00:37:45.703 "ddgst": ${ddgst:-false} 00:37:45.703 }, 00:37:45.703 "method": "bdev_nvme_attach_controller" 00:37:45.703 } 00:37:45.703 EOF 00:37:45.703 )") 00:37:45.703 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=4010611 00:37:45.703 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:37:45.703 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:37:45.703 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:37:45.703 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:37:45.703 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:37:45.703 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:45.703 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=4010614 00:37:45.703 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:45.703 { 00:37:45.703 "params": { 00:37:45.703 "name": "Nvme$subsystem", 00:37:45.703 "trtype": "$TEST_TRANSPORT", 00:37:45.703 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:45.703 "adrfam": "ipv4", 00:37:45.703 "trsvcid": "$NVMF_PORT", 00:37:45.703 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:45.703 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:45.703 "hdgst": ${hdgst:-false}, 00:37:45.703 "ddgst": ${ddgst:-false} 00:37:45.703 }, 00:37:45.703 "method": "bdev_nvme_attach_controller" 00:37:45.703 } 00:37:45.703 EOF 00:37:45.703 )") 00:37:45.703 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:37:45.703 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:37:45.703 
11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:37:45.703 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:37:45.703 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:45.703 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=4010618 00:37:45.703 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:37:45.703 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:45.703 { 00:37:45.703 "params": { 00:37:45.703 "name": "Nvme$subsystem", 00:37:45.703 "trtype": "$TEST_TRANSPORT", 00:37:45.703 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:45.703 "adrfam": "ipv4", 00:37:45.703 "trsvcid": "$NVMF_PORT", 00:37:45.703 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:45.703 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:45.703 "hdgst": ${hdgst:-false}, 00:37:45.703 "ddgst": ${ddgst:-false} 00:37:45.703 }, 00:37:45.703 "method": "bdev_nvme_attach_controller" 00:37:45.703 } 00:37:45.703 EOF 00:37:45.703 )") 00:37:45.703 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:37:45.703 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:37:45.703 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:37:45.703 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:37:45.703 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:37:45.703 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:45.703 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:45.703 { 00:37:45.703 "params": { 00:37:45.703 "name": "Nvme$subsystem", 00:37:45.703 "trtype": "$TEST_TRANSPORT", 00:37:45.703 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:45.703 "adrfam": "ipv4", 00:37:45.703 "trsvcid": "$NVMF_PORT", 00:37:45.703 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:45.703 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:45.703 "hdgst": ${hdgst:-false}, 00:37:45.703 "ddgst": ${ddgst:-false} 00:37:45.703 }, 00:37:45.703 "method": "bdev_nvme_attach_controller" 00:37:45.703 } 00:37:45.703 EOF 00:37:45.703 )") 00:37:45.703 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:37:45.703 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:37:45.703 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:37:45.703 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:37:45.703 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 4010609 00:37:45.703 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:37:45.703 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:45.703 "params": { 00:37:45.703 "name": "Nvme1", 00:37:45.703 "trtype": "tcp", 00:37:45.703 "traddr": "10.0.0.2", 00:37:45.703 "adrfam": "ipv4", 00:37:45.703 "trsvcid": "4420", 00:37:45.703 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:45.703 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:45.703 "hdgst": false, 00:37:45.703 "ddgst": false 00:37:45.703 }, 00:37:45.703 "method": "bdev_nvme_attach_controller" 00:37:45.703 }' 00:37:45.703 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:37:45.703 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:37:45.703 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:45.703 "params": { 00:37:45.703 "name": "Nvme1", 00:37:45.703 "trtype": "tcp", 00:37:45.703 "traddr": "10.0.0.2", 00:37:45.703 "adrfam": "ipv4", 00:37:45.703 "trsvcid": "4420", 00:37:45.703 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:45.703 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:45.703 "hdgst": false, 00:37:45.703 "ddgst": false 00:37:45.703 }, 00:37:45.703 "method": "bdev_nvme_attach_controller" 00:37:45.703 }' 00:37:45.703 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:37:45.703 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:37:45.703 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:45.703 "params": { 00:37:45.703 "name": "Nvme1", 00:37:45.703 "trtype": "tcp", 00:37:45.703 "traddr": "10.0.0.2", 00:37:45.703 "adrfam": "ipv4", 00:37:45.703 "trsvcid": "4420", 00:37:45.703 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:45.703 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:45.703 "hdgst": false, 00:37:45.703 "ddgst": false 00:37:45.703 }, 00:37:45.703 "method": "bdev_nvme_attach_controller" 00:37:45.703 }' 00:37:45.703 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:37:45.703 11:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:45.703 "params": { 00:37:45.703 "name": "Nvme1", 00:37:45.703 "trtype": "tcp", 00:37:45.703 "traddr": "10.0.0.2", 00:37:45.703 "adrfam": "ipv4", 00:37:45.703 "trsvcid": "4420", 00:37:45.703 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:45.703 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:45.703 "hdgst": false, 00:37:45.703 "ddgst": false 00:37:45.703 }, 00:37:45.703 "method": "bdev_nvme_attach_controller" 00:37:45.703 }' 00:37:45.704 [2024-11-02 11:49:45.922181] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:37:45.704 [2024-11-02 11:49:45.922181] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 
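Each gen_nvmf_target_json block above is fed to a bdevperf instance through --json /dev/fd/63; all four attach the same Nvme1 controller over TCP and then drive a different workload (write, read, flush, unmap) at queue depth 128 with 4 KiB I/O for one second. A standalone equivalent of the write job, sketched with an explicit config file instead of process substitution; the wrapper object follows the usual SPDK JSON-config layout and is illustrative rather than copied from the log:

# Sketch: reproduce the WRITE job (-m 0x10 -i 1) by hand against the target above.
cat > /tmp/nvme1.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
./build/examples/bdevperf -m 0x10 -i 1 --json /tmp/nvme1.json -q 128 -o 4096 -w write -t 1 -s 256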
00:37:45.704 [2024-11-02 11:49:45.922297] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 
00:37:45.704 [2024-11-02 11:49:45.922298] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 
00:37:45.704 [2024-11-02 11:49:45.923893] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 
00:37:45.704 [2024-11-02 11:49:45.923891] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 
00:37:45.704 [2024-11-02 11:49:45.923970] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 
00:37:45.704 [2024-11-02 11:49:45.923969] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 
00:37:45.962 [2024-11-02 11:49:46.106780] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:45.962 [2024-11-02 11:49:46.148615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:37:45.962 [2024-11-02 11:49:46.204802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:45.962 [2024-11-02 11:49:46.249309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:37:45.962 [2024-11-02 11:49:46.279941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:45.962 [2024-11-02 11:49:46.317729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:37:45.962 [2024-11-02 11:49:46.353386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:46.222 [2024-11-02 11:49:46.391791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:37:46.222 Running I/O for 1 seconds... 00:37:46.222 Running I/O for 1 seconds... 00:37:46.484 Running I/O for 1 seconds... 00:37:46.484 Running I/O for 1 seconds... 
00:37:47.449 10633.00 IOPS, 41.54 MiB/s 00:37:47.449 Latency(us) 00:37:47.449 [2024-11-02T10:49:47.851Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:47.449 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:37:47.449 Nvme1n1 : 1.01 10685.63 41.74 0.00 0.00 11932.52 2038.90 14369.37 00:37:47.449 [2024-11-02T10:49:47.851Z] =================================================================================================================== 00:37:47.449 [2024-11-02T10:49:47.851Z] Total : 10685.63 41.74 0.00 0.00 11932.52 2038.90 14369.37 00:37:47.449 7716.00 IOPS, 30.14 MiB/s 00:37:47.449 Latency(us) 00:37:47.449 [2024-11-02T10:49:47.851Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:47.449 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:37:47.449 Nvme1n1 : 1.01 7759.94 30.31 0.00 0.00 16400.50 4830.25 20486.07 00:37:47.449 [2024-11-02T10:49:47.851Z] =================================================================================================================== 00:37:47.449 [2024-11-02T10:49:47.851Z] Total : 7759.94 30.31 0.00 0.00 16400.50 4830.25 20486.07 00:37:47.449 7515.00 IOPS, 29.36 MiB/s 00:37:47.449 Latency(us) 00:37:47.449 [2024-11-02T10:49:47.851Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:47.449 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:37:47.449 Nvme1n1 : 1.01 7594.31 29.67 0.00 0.00 16788.07 3179.71 24660.95 00:37:47.449 [2024-11-02T10:49:47.851Z] =================================================================================================================== 00:37:47.449 [2024-11-02T10:49:47.851Z] Total : 7594.31 29.67 0.00 0.00 16788.07 3179.71 24660.95 00:37:47.449 11:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 4010611 00:37:47.449 148800.00 IOPS, 581.25 MiB/s 00:37:47.449 Latency(us) 00:37:47.449 [2024-11-02T10:49:47.851Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:47.449 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:37:47.449 Nvme1n1 : 1.00 148525.50 580.18 0.00 0.00 857.22 297.34 1844.72 00:37:47.449 [2024-11-02T10:49:47.851Z] =================================================================================================================== 00:37:47.449 [2024-11-02T10:49:47.851Z] Total : 148525.50 580.18 0.00 0.00 857.22 297.34 1844.72 00:37:47.449 11:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 4010614 00:37:47.449 11:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 4010618 00:37:47.449 11:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:47.449 11:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:47.449 11:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:47.449 11:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:47.449 11:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:37:47.449 11:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:37:47.449 11:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:47.449 11:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:37:47.449 11:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:47.449 11:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:37:47.449 11:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:47.449 11:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:47.449 rmmod nvme_tcp 00:37:47.449 rmmod nvme_fabrics 00:37:47.449 rmmod nvme_keyring 00:37:47.708 11:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:47.708 11:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:37:47.708 11:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:37:47.708 11:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 4010578 ']' 00:37:47.708 11:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 4010578 00:37:47.708 11:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 4010578 ']' 00:37:47.708 11:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 4010578 00:37:47.708 11:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:37:47.708 11:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:37:47.708 11:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4010578 00:37:47.708 11:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:37:47.708 11:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:37:47.708 11:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4010578' 00:37:47.708 killing process with pid 4010578 00:37:47.708 11:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 4010578 00:37:47.708 11:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 4010578 00:37:47.708 11:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:47.708 11:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:47.708 11:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:47.708 11:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:37:47.708 11:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 
00:37:47.709 11:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:47.709 11:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:37:47.709 11:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:47.709 11:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:47.709 11:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:47.709 11:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:47.709 11:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:50.242 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:50.242 00:37:50.242 real 0m6.958s 00:37:50.242 user 0m13.540s 00:37:50.242 sys 0m4.082s 00:37:50.242 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:50.242 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:50.242 ************************************ 00:37:50.242 END TEST nvmf_bdev_io_wait 00:37:50.242 ************************************ 00:37:50.242 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:37:50.242 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:37:50.242 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:37:50.242 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:50.242 ************************************ 00:37:50.242 START TEST nvmf_queue_depth 00:37:50.242 ************************************ 00:37:50.242 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:37:50.242 * Looking for test storage... 
00:37:50.242 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:50.242 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:37:50.242 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:37:50.242 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:37:50.242 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:37:50.242 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:50.242 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:50.242 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:50.242 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:37:50.242 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:37:50.242 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:37:50.242 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:37:50.242 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:37:50.242 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:37:50.242 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:37:50.242 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:50.242 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:37:50.242 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:37:50.242 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:50.242 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:50.242 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:37:50.242 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:37:50.242 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:50.242 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:37:50.242 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:37:50.242 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:37:50.242 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:37:50.242 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:50.242 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:37:50.242 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:37:50.242 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:50.242 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:50.242 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:37:50.242 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:50.242 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:37:50.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:50.242 --rc genhtml_branch_coverage=1 00:37:50.242 --rc genhtml_function_coverage=1 00:37:50.242 --rc genhtml_legend=1 00:37:50.242 --rc geninfo_all_blocks=1 00:37:50.242 --rc geninfo_unexecuted_blocks=1 00:37:50.242 00:37:50.242 ' 00:37:50.242 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:37:50.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:50.242 --rc genhtml_branch_coverage=1 00:37:50.242 --rc genhtml_function_coverage=1 00:37:50.243 --rc genhtml_legend=1 00:37:50.243 --rc geninfo_all_blocks=1 00:37:50.243 --rc geninfo_unexecuted_blocks=1 00:37:50.243 00:37:50.243 ' 00:37:50.243 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:37:50.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:50.243 --rc genhtml_branch_coverage=1 00:37:50.243 --rc genhtml_function_coverage=1 00:37:50.243 --rc genhtml_legend=1 00:37:50.243 --rc geninfo_all_blocks=1 00:37:50.243 --rc geninfo_unexecuted_blocks=1 00:37:50.243 00:37:50.243 ' 00:37:50.243 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:37:50.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:50.243 --rc genhtml_branch_coverage=1 00:37:50.243 --rc genhtml_function_coverage=1 00:37:50.243 --rc genhtml_legend=1 00:37:50.243 --rc geninfo_all_blocks=1 00:37:50.243 --rc 
geninfo_unexecuted_blocks=1 00:37:50.243 00:37:50.243 ' 00:37:50.243 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:50.243 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:37:50.243 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:50.243 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:50.243 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:50.243 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:50.243 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:50.243 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:50.243 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:50.243 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:50.243 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:50.243 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:50.243 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:50.243 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:50.243 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:50.243 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:50.243 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:50.243 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:50.243 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:50.243 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:37:50.243 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:50.243 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:50.243 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:50.243 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:50.243 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:50.243 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:50.243 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:37:50.243 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:50.243 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:37:50.243 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:50.243 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:50.243 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:50.243 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:50.243 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:37:50.243 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:50.243 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:50.243 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:50.243 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:50.243 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:50.243 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:37:50.243 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:37:50.243 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:37:50.243 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:37:50.243 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:50.243 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:50.243 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:50.243 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:50.243 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:50.243 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:50.243 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:50.243 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:50.243 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:50.243 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:50.243 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:37:50.243 11:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:37:52.146 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:52.146 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:37:52.146 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:52.146 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:52.146 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:52.146 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:37:52.146 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:52.146 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:37:52.146 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:52.146 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:37:52.146 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:37:52.146 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:37:52.146 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:37:52.146 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:37:52.146 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:37:52.146 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:52.146 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:52.146 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:52.146 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:52.146 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:52.146 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:52.146 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:52.146 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:52.146 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:52.146 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:52.146 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:52.146 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:52.146 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:52.146 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:52.146 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:52.146 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:52.146 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:52.146 11:49:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:52.146 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:52.146 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:52.146 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:52.146 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:52.146 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:52.146 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:52.146 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:52.146 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:52.146 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:52.146 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:52.146 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:52.146 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:52.146 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:52.146 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:52.146 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:52.146 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:52.146 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:52.146 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:52.146 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:52.146 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:52.146 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:52.146 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:52.146 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:52.146 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:52.146 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:52.146 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:52.146 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 
00:37:52.146 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:52.146 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:52.146 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:52.146 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:52.146 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:52.146 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:52.146 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:52.146 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:52.146 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:52.146 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:52.146 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:52.146 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:52.146 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:52.146 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:37:52.146 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:52.146 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:52.146 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:52.146 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:52.146 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:52.146 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:52.146 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:52.146 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:52.146 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:52.146 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:52.147 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:52.147 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:52.147 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:52.147 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:52.147 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:52.147 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:52.147 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:52.147 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:52.147 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:52.147 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:52.147 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:52.147 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:52.147 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:52.147 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:52.147 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:52.147 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:52.147 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:52.147 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.385 ms 00:37:52.147 00:37:52.147 --- 10.0.0.2 ping statistics --- 00:37:52.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:52.147 rtt min/avg/max/mdev = 0.385/0.385/0.385/0.000 ms 00:37:52.147 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:52.147 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:52.147 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:37:52.147 00:37:52.147 --- 10.0.0.1 ping statistics --- 00:37:52.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:52.147 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:37:52.147 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:52.147 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:37:52.147 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:52.147 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:52.147 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:52.147 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:52.147 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:52.147 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:52.147 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:52.147 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:37:52.147 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:52.147 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:52.147 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:37:52.147 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=4012826 00:37:52.147 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:37:52.147 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 4012826 00:37:52.147 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 4012826 ']' 00:37:52.147 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:52.147 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:37:52.147 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:52.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
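By this point the trace has split the two E810 ports between a dedicated network namespace for the target and the host side for the initiator, verified connectivity in both directions with ping, and launched nvmf_tgt inside that namespace. Pulled out of the xtrace noise, the sequence amounts to the following (interface names, addresses and the shm id are simply what this run picked, not fixed values):

  ip netns add cvl_0_0_ns_spdk                               # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # move one port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator port stays in the host netns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2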
00:37:52.147 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:37:52.147 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:37:52.147 [2024-11-02 11:49:52.530717] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:52.147 [2024-11-02 11:49:52.531777] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:37:52.147 [2024-11-02 11:49:52.531842] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:52.406 [2024-11-02 11:49:52.607601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:52.406 [2024-11-02 11:49:52.651448] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:52.406 [2024-11-02 11:49:52.651510] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:52.406 [2024-11-02 11:49:52.651540] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:52.406 [2024-11-02 11:49:52.651552] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:52.406 [2024-11-02 11:49:52.651562] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:52.406 [2024-11-02 11:49:52.652148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:52.406 [2024-11-02 11:49:52.733773] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:52.406 [2024-11-02 11:49:52.734084] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
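The notices above confirm the target came up the way this interrupt-mode variant expects: a single reactor (core 1, from -m 0x2) and both the app thread and the nvmf poll group switched to interrupt mode rather than busy polling. A hedged way to double-check the same thing by hand against a running target is the framework_get_reactors RPC, which lists the reactors the app owns (newer SPDK versions also flag whether each one is currently in interrupt mode):

  ./scripts/rpc.py -s /var/tmp/spdk.sock framework_get_reactors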
00:37:52.406 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:37:52.406 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:37:52.406 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:52.406 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:52.406 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:37:52.406 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:52.406 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:52.406 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:52.406 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:37:52.406 [2024-11-02 11:49:52.788719] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:52.406 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:52.406 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:52.406 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:52.406 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:37:52.665 Malloc0 00:37:52.665 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:52.665 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:52.665 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:52.665 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:37:52.665 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:52.665 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:52.665 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:52.665 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:37:52.665 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:52.665 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:52.665 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 
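The rpc_cmd calls traced above provision the target end to end: a TCP transport, a 64 MiB Malloc bdev with 512-byte blocks, a subsystem, its namespace, and a listener on the namespace-side address. Restated as plain scripts/rpc.py invocations (a hedged restatement of this trace, issued against the target's default /var/tmp/spdk.sock):

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420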
00:37:52.665 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:37:52.665 [2024-11-02 11:49:52.844892] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:52.665 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:52.665 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=4012845 00:37:52.665 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:52.665 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 4012845 /var/tmp/bdevperf.sock 00:37:52.665 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 4012845 ']' 00:37:52.665 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:37:52.665 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:37:52.665 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:37:52.665 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:52.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:37:52.665 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:37:52.665 11:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:37:52.665 [2024-11-02 11:49:52.895954] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 
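With the listener up, the test launches bdevperf in its RPC-driven mode, and the lines that follow attach the remote namespace as a bdev and start the run. Condensed from the trace (paths shortened; every option value is the one this run uses):

  # queue depth 1024, 4 KiB I/Os, verify workload, 10 s, wait for RPC before starting
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10
  # attach the target's namespace over NVMe/TCP; it shows up as bdev NVMe0n1
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # start the configured job and collect results
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The figures reported further down are roughly consistent with Little's law: at queue depth 1024 and about 6881 IOPS, the expected average latency is 1024 / 6881 ≈ 0.149 s, in line with the ~148 ms average the run prints.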
00:37:52.665 [2024-11-02 11:49:52.896028] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4012845 ] 00:37:52.665 [2024-11-02 11:49:52.966398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:52.665 [2024-11-02 11:49:53.014321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:52.924 11:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:37:52.924 11:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:37:52.924 11:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:37:52.924 11:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:52.924 11:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:37:52.924 NVMe0n1 00:37:52.924 11:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:52.924 11:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:37:52.924 Running I/O for 10 seconds... 00:37:55.235 6158.00 IOPS, 24.05 MiB/s [2024-11-02T10:49:56.572Z] 6681.00 IOPS, 26.10 MiB/s [2024-11-02T10:49:57.572Z] 6826.67 IOPS, 26.67 MiB/s [2024-11-02T10:49:58.508Z] 6773.25 IOPS, 26.46 MiB/s [2024-11-02T10:49:59.444Z] 6762.80 IOPS, 26.42 MiB/s [2024-11-02T10:50:00.379Z] 6827.17 IOPS, 26.67 MiB/s [2024-11-02T10:50:01.753Z] 6875.86 IOPS, 26.86 MiB/s [2024-11-02T10:50:02.689Z] 6877.25 IOPS, 26.86 MiB/s [2024-11-02T10:50:03.624Z] 6843.33 IOPS, 26.73 MiB/s [2024-11-02T10:50:03.624Z] 6861.70 IOPS, 26.80 MiB/s 00:38:03.222 Latency(us) 00:38:03.222 [2024-11-02T10:50:03.624Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:03.222 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:38:03.222 Verification LBA range: start 0x0 length 0x4000 00:38:03.222 NVMe0n1 : 10.12 6881.27 26.88 0.00 0.00 148086.67 30680.56 85439.53 00:38:03.222 [2024-11-02T10:50:03.624Z] =================================================================================================================== 00:38:03.222 [2024-11-02T10:50:03.624Z] Total : 6881.27 26.88 0.00 0.00 148086.67 30680.56 85439.53 00:38:03.222 { 00:38:03.222 "results": [ 00:38:03.222 { 00:38:03.222 "job": "NVMe0n1", 00:38:03.222 "core_mask": "0x1", 00:38:03.222 "workload": "verify", 00:38:03.222 "status": "finished", 00:38:03.222 "verify_range": { 00:38:03.222 "start": 0, 00:38:03.222 "length": 16384 00:38:03.222 }, 00:38:03.222 "queue_depth": 1024, 00:38:03.222 "io_size": 4096, 00:38:03.222 "runtime": 10.120367, 00:38:03.222 "iops": 6881.272191018369, 00:38:03.222 "mibps": 26.879969496165504, 00:38:03.222 "io_failed": 0, 00:38:03.222 "io_timeout": 0, 00:38:03.222 "avg_latency_us": 148086.67299473967, 00:38:03.222 "min_latency_us": 30680.557037037037, 00:38:03.222 "max_latency_us": 85439.52592592593 00:38:03.222 } 00:38:03.222 
], 00:38:03.222 "core_count": 1 00:38:03.222 } 00:38:03.222 11:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 4012845 00:38:03.222 11:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 4012845 ']' 00:38:03.222 11:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 4012845 00:38:03.222 11:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:38:03.222 11:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:38:03.222 11:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4012845 00:38:03.222 11:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:38:03.222 11:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:38:03.222 11:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4012845' 00:38:03.222 killing process with pid 4012845 00:38:03.222 11:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 4012845 00:38:03.222 Received shutdown signal, test time was about 10.000000 seconds 00:38:03.222 00:38:03.222 Latency(us) 00:38:03.222 [2024-11-02T10:50:03.624Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:03.222 [2024-11-02T10:50:03.624Z] =================================================================================================================== 00:38:03.222 [2024-11-02T10:50:03.624Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:03.222 11:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 4012845 00:38:03.479 11:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:38:03.479 11:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:38:03.479 11:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:03.479 11:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:38:03.479 11:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:03.479 11:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:38:03.479 11:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:03.479 11:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:03.479 rmmod nvme_tcp 00:38:03.479 rmmod nvme_fabrics 00:38:03.479 rmmod nvme_keyring 00:38:03.479 11:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:03.479 11:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:38:03.479 11:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:38:03.479 11:50:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 4012826 ']' 00:38:03.479 11:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 4012826 00:38:03.479 11:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 4012826 ']' 00:38:03.479 11:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 4012826 00:38:03.479 11:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:38:03.479 11:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:38:03.479 11:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4012826 00:38:03.479 11:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:38:03.479 11:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:38:03.479 11:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4012826' 00:38:03.479 killing process with pid 4012826 00:38:03.479 11:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 4012826 00:38:03.479 11:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 4012826 00:38:03.737 11:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:03.737 11:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:03.737 11:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:03.737 11:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:38:03.737 11:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:38:03.737 11:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:03.737 11:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:38:03.737 11:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:03.737 11:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:03.737 11:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:03.737 11:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:03.737 11:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:06.271 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:06.271 00:38:06.271 real 0m15.871s 00:38:06.271 user 0m19.109s 00:38:06.271 sys 0m4.477s 00:38:06.271 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:38:06.271 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:06.271 ************************************ 00:38:06.271 END TEST nvmf_queue_depth 00:38:06.271 ************************************ 00:38:06.271 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:38:06.271 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:38:06.271 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:38:06.271 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:06.271 ************************************ 00:38:06.271 START TEST nvmf_target_multipath 00:38:06.271 ************************************ 00:38:06.271 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:38:06.271 * Looking for test storage... 00:38:06.271 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:06.271 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:38:06.271 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:38:06.271 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:38:06.271 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:38:06.271 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:06.271 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:06.271 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:06.271 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:38:06.271 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:38:06.272 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:38:06.272 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:38:06.272 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:38:06.272 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:38:06.272 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:38:06.272 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:06.272 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:38:06.272 11:50:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:38:06.272 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:06.272 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:06.272 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:38:06.272 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:38:06.272 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:06.272 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:38:06.272 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:38:06.272 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:38:06.272 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:38:06.272 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:06.272 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:38:06.272 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:38:06.272 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:06.272 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:06.272 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:38:06.272 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:06.272 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:38:06.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:06.272 --rc genhtml_branch_coverage=1 00:38:06.272 --rc genhtml_function_coverage=1 00:38:06.272 --rc genhtml_legend=1 00:38:06.272 --rc geninfo_all_blocks=1 00:38:06.272 --rc geninfo_unexecuted_blocks=1 00:38:06.272 00:38:06.272 ' 00:38:06.272 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:38:06.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:06.272 --rc genhtml_branch_coverage=1 00:38:06.272 --rc genhtml_function_coverage=1 00:38:06.272 --rc genhtml_legend=1 00:38:06.272 --rc geninfo_all_blocks=1 00:38:06.272 --rc geninfo_unexecuted_blocks=1 00:38:06.272 00:38:06.272 ' 00:38:06.272 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:38:06.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:06.272 --rc genhtml_branch_coverage=1 00:38:06.272 --rc genhtml_function_coverage=1 00:38:06.272 --rc genhtml_legend=1 00:38:06.272 --rc geninfo_all_blocks=1 00:38:06.272 --rc 
geninfo_unexecuted_blocks=1 00:38:06.272 00:38:06.272 ' 00:38:06.272 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:38:06.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:06.272 --rc genhtml_branch_coverage=1 00:38:06.272 --rc genhtml_function_coverage=1 00:38:06.272 --rc genhtml_legend=1 00:38:06.272 --rc geninfo_all_blocks=1 00:38:06.272 --rc geninfo_unexecuted_blocks=1 00:38:06.272 00:38:06.272 ' 00:38:06.272 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:06.272 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:38:06.272 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:06.272 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:06.272 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:06.272 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:06.272 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:06.272 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:06.272 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:06.272 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:06.272 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:06.272 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:06.272 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:06.272 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:06.272 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:06.272 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:06.272 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:06.272 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:06.272 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:06.272 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:38:06.272 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 
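For the multipath test the shared common.sh is sourced again, and the initiator identity is regenerated with nvme-cli: nvme gen-hostnqn prints a fresh uuid-based host NQN, and NVME_HOST bundles it with the matching host ID so later nvme connect calls can present a stable identity. A minimal illustration of those two pieces (the UUID is whatever the tool generates, not necessarily the one in this log):

  nvme gen-hostnqn
  # e.g. nqn.2014-08.org.nvmexpress:uuid:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
  # the harness then reuses it as: --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID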
00:38:06.272 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:06.272 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:06.272 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:06.272 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:06.272 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:06.272 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:38:06.272 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:06.272 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:38:06.272 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:06.272 11:50:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:06.272 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:06.272 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:06.272 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:06.272 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:06.272 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:06.272 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:06.272 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:06.272 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:06.272 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:06.272 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:06.273 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:38:06.273 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:06.273 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:38:06.273 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:06.273 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:06.273 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:06.273 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:06.273 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:06.273 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:06.273 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:06.273 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:06.273 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:06.273 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:06.273 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:38:06.273 11:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 
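multipath.sh keeps the same 64 MiB / 512-byte Malloc backing and a single subsystem NQN (nqn.2016-06.io.spdk:cnode1), while common.sh has already reserved three TCP service ports (4420/4421/4422), which is what a multipath test needs in order to expose one subsystem through more than one listener. Whether the initiator actually aggregates those paths depends on the kernel's native NVMe multipathing being enabled; a hedged, purely illustrative check on the initiator side (not a step this script is shown running here) is:

  cat /sys/module/nvme_core/parameters/multipath
  # prints Y when native NVMe multipath is enabled in the running kernel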
00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:08.176 11:50:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:08.176 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:08.176 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:08.176 11:50:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:08.176 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:08.176 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:08.176 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:08.177 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:08.177 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:08.177 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:08.177 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:08.177 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:08.177 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:08.177 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:08.177 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:38:08.177 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.314 ms 00:38:08.177 00:38:08.177 --- 10.0.0.2 ping statistics --- 00:38:08.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:08.177 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:38:08.177 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:08.177 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:08.177 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:38:08.177 00:38:08.177 --- 10.0.0.1 ping statistics --- 00:38:08.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:08.177 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:38:08.177 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:08.177 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:38:08.177 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:08.177 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:08.177 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:08.177 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:08.177 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:08.177 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:08.177 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:08.177 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:38:08.177 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:38:08.177 only one NIC for nvmf test 00:38:08.177 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:38:08.177 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:08.177 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:38:08.177 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:08.177 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:38:08.177 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:08.177 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:08.177 rmmod nvme_tcp 00:38:08.177 rmmod nvme_fabrics 00:38:08.177 rmmod nvme_keyring 00:38:08.177 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:08.177 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:38:08.177 11:50:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:38:08.177 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:38:08.177 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:08.177 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:08.177 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:08.177 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:38:08.177 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:38:08.177 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:08.177 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:38:08.177 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:08.177 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:08.177 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:08.177 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:08.177 11:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:10.081 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:10.081 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:38:10.081 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:38:10.081 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:10.081 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:38:10.081 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:10.081 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:38:10.081 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:10.081 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:10.081 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:10.081 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:38:10.081 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:38:10.081 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:38:10.081 11:50:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:10.081 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:10.081 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:10.081 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:38:10.081 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:38:10.081 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:10.081 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:38:10.340 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:10.340 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:10.340 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:10.340 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:10.340 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:10.340 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:10.340 00:38:10.340 real 0m4.395s 00:38:10.340 user 0m0.873s 00:38:10.340 sys 0m1.518s 00:38:10.340 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:38:10.340 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:38:10.340 ************************************ 00:38:10.340 END TEST nvmf_target_multipath 00:38:10.340 ************************************ 00:38:10.340 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:38:10.340 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:38:10.340 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:38:10.340 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:10.340 ************************************ 00:38:10.340 START TEST nvmf_zcopy 00:38:10.340 ************************************ 00:38:10.340 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:38:10.340 * Looking for test storage... 
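Before bailing out for lack of a second NIC, the multipath run above already exercised the full nvmf_tcp_init bring-up, and the zcopy test below repeats the same sequence: one ice port (cvl_0_0) is moved into a private network namespace to act as the target, its peer (cvl_0_1) stays in the host namespace as the initiator, both ends get 10.0.0.x/24 addresses, the NVMe/TCP port is opened in iptables, and reachability is ping-verified in both directions. Collected from the trace, the bring-up is roughly:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side, host namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side, test namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1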
00:38:10.340 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:10.340 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:38:10.340 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:38:10.340 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:38:10.340 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:38:10.340 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:10.340 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:10.340 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:10.340 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:38:10.340 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:38:10.340 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:38:10.340 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:38:10.340 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:38:10.340 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:38:10.340 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:38:10.340 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:10.340 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:38:10.340 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:38:10.340 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:10.340 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:10.340 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:38:10.340 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:38:10.340 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:10.340 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:38:10.340 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:38:10.340 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:38:10.340 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:38:10.340 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:10.340 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:38:10.340 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:38:10.340 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:10.340 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:10.340 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:38:10.340 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:10.340 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:38:10.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:10.340 --rc genhtml_branch_coverage=1 00:38:10.340 --rc genhtml_function_coverage=1 00:38:10.340 --rc genhtml_legend=1 00:38:10.340 --rc geninfo_all_blocks=1 00:38:10.340 --rc geninfo_unexecuted_blocks=1 00:38:10.340 00:38:10.340 ' 00:38:10.340 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:38:10.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:10.340 --rc genhtml_branch_coverage=1 00:38:10.340 --rc genhtml_function_coverage=1 00:38:10.340 --rc genhtml_legend=1 00:38:10.340 --rc geninfo_all_blocks=1 00:38:10.340 --rc geninfo_unexecuted_blocks=1 00:38:10.340 00:38:10.340 ' 00:38:10.340 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:38:10.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:10.340 --rc genhtml_branch_coverage=1 00:38:10.340 --rc genhtml_function_coverage=1 00:38:10.340 --rc genhtml_legend=1 00:38:10.340 --rc geninfo_all_blocks=1 00:38:10.340 --rc geninfo_unexecuted_blocks=1 00:38:10.340 00:38:10.340 ' 00:38:10.340 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:38:10.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:10.340 --rc genhtml_branch_coverage=1 00:38:10.341 --rc genhtml_function_coverage=1 00:38:10.341 --rc genhtml_legend=1 00:38:10.341 --rc geninfo_all_blocks=1 00:38:10.341 --rc geninfo_unexecuted_blocks=1 00:38:10.341 00:38:10.341 ' 00:38:10.341 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:10.341 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:38:10.341 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:10.341 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:10.341 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:10.341 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:10.341 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:10.341 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:10.341 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:10.341 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:10.341 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:10.341 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:10.341 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:10.341 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:10.341 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:10.341 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:10.341 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:10.341 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:10.341 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:10.341 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:38:10.341 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:10.341 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:10.341 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:10.341 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:10.341 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:10.341 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:10.341 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:38:10.341 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:10.341 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:38:10.341 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:10.341 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:10.341 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:10.341 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:10.341 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:10.341 11:50:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:10.341 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:10.341 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:10.341 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:10.341 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:10.341 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:38:10.341 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:10.341 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:10.341 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:10.341 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:10.341 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:10.341 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:10.341 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:10.341 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:10.341 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:10.341 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:10.341 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:38:10.341 11:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:12.874 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:12.874 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:38:12.874 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:12.874 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:12.874 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:12.874 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:12.874 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:12.874 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:38:12.874 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:12.874 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:38:12.874 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:38:12.874 11:50:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:38:12.874 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:38:12.874 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:38:12.874 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:38:12.874 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:12.874 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:12.874 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:12.874 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:12.874 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:12.874 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:12.874 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:12.874 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:12.874 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:12.874 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:12.874 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:12.874 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:12.874 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:12.874 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:12.874 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:12.874 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:12.874 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:12.874 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:12.874 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:12.874 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:12.874 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:12.874 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:12.874 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:12.874 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:38:12.874 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:12.874 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:12.874 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:12.874 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:12.874 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:12.874 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:12.874 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:12.874 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:12.874 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:12.874 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:12.874 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:12.874 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:12.874 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:12.874 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:12.874 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:12.874 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:12.874 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:12.874 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:12.874 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:12.874 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:12.874 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:12.874 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:12.874 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:12.874 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:12.874 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:12.874 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:12.874 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:12.874 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:12.874 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:12.874 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:12.874 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:12.875 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:12.875 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:12.875 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:12.875 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:38:12.875 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:12.875 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:12.875 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:12.875 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:12.875 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:12.875 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:12.875 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:12.875 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:12.875 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:12.875 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:12.875 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:12.875 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:12.875 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:12.875 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:12.875 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:12.875 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:12.875 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:12.875 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:12.875 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:12.875 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:12.875 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:12.875 11:50:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:12.875 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:12.875 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:12.875 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:12.875 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:12.875 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:12.875 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:38:12.875 00:38:12.875 --- 10.0.0.2 ping statistics --- 00:38:12.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:12.875 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:38:12.875 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:12.875 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:12.875 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:38:12.875 00:38:12.875 --- 10.0.0.1 ping statistics --- 00:38:12.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:12.875 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:38:12.875 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:12.875 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:38:12.875 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:12.875 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:12.875 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:12.875 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:12.875 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:12.875 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:12.875 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:12.875 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:38:12.875 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:12.875 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:12.875 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:12.875 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=4018018 00:38:12.875 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF --interrupt-mode -m 0x2 00:38:12.875 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 4018018 00:38:12.875 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 4018018 ']' 00:38:12.875 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:12.875 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:12.875 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:12.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:12.875 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:12.875 11:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:12.875 [2024-11-02 11:50:12.887287] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:12.875 [2024-11-02 11:50:12.888396] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:38:12.875 [2024-11-02 11:50:12.888469] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:12.875 [2024-11-02 11:50:12.960448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:12.875 [2024-11-02 11:50:13.004908] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:12.875 [2024-11-02 11:50:13.004959] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:12.875 [2024-11-02 11:50:13.004987] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:12.875 [2024-11-02 11:50:13.004998] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:12.875 [2024-11-02 11:50:13.005007] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:12.875 [2024-11-02 11:50:13.005586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:12.875 [2024-11-02 11:50:13.087041] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:12.875 [2024-11-02 11:50:13.087368] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
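nvmfappstart above launches the target inside the test namespace and then blocks in waitforlisten until the app (pid 4018018 here) is answering RPCs. The launch command is taken verbatim from the trace; the wait loop below is only a simplified stand-in for the real waitforlisten helper in autotest_common.sh, assuming the default RPC socket /var/tmp/spdk.sock:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
    nvmfpid=$!
    # poll the RPC socket; give up if the target process dies before it starts listening
    until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done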
00:38:12.875 11:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:12.875 11:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0 00:38:12.875 11:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:12.875 11:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:12.875 11:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:12.875 11:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:12.875 11:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:38:12.875 11:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:38:12.875 11:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:12.875 11:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:12.875 [2024-11-02 11:50:13.146182] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:12.875 11:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:12.875 11:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:38:12.875 11:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:12.875 11:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:12.875 11:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:12.875 11:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:12.875 11:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:12.875 11:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:12.875 [2024-11-02 11:50:13.162410] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:12.875 11:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:12.875 11:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:12.875 11:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:12.875 11:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:12.875 11:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:12.876 11:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:38:12.876 11:50:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:12.876 11:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:12.876 malloc0 00:38:12.876 11:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:12.876 11:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:38:12.876 11:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:12.876 11:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:12.876 11:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:12.876 11:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:38:12.876 11:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:38:12.876 11:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:38:12.876 11:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:38:12.876 11:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:12.876 11:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:12.876 { 00:38:12.876 "params": { 00:38:12.876 "name": "Nvme$subsystem", 00:38:12.876 "trtype": "$TEST_TRANSPORT", 00:38:12.876 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:12.876 "adrfam": "ipv4", 00:38:12.876 "trsvcid": "$NVMF_PORT", 00:38:12.876 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:12.876 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:12.876 "hdgst": ${hdgst:-false}, 00:38:12.876 "ddgst": ${ddgst:-false} 00:38:12.876 }, 00:38:12.876 "method": "bdev_nvme_attach_controller" 00:38:12.876 } 00:38:12.876 EOF 00:38:12.876 )") 00:38:12.876 11:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:38:12.876 11:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:38:12.876 11:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:38:12.876 11:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:12.876 "params": { 00:38:12.876 "name": "Nvme1", 00:38:12.876 "trtype": "tcp", 00:38:12.876 "traddr": "10.0.0.2", 00:38:12.876 "adrfam": "ipv4", 00:38:12.876 "trsvcid": "4420", 00:38:12.876 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:12.876 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:12.876 "hdgst": false, 00:38:12.876 "ddgst": false 00:38:12.876 }, 00:38:12.876 "method": "bdev_nvme_attach_controller" 00:38:12.876 }' 00:38:12.876 [2024-11-02 11:50:13.239411] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 
00:38:12.876 [2024-11-02 11:50:13.239490] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4018039 ] 00:38:13.134 [2024-11-02 11:50:13.316825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:13.134 [2024-11-02 11:50:13.366615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:13.393 Running I/O for 10 seconds... 00:38:15.260 5284.00 IOPS, 41.28 MiB/s [2024-11-02T10:50:17.038Z] 5334.00 IOPS, 41.67 MiB/s [2024-11-02T10:50:17.973Z] 5356.00 IOPS, 41.84 MiB/s [2024-11-02T10:50:18.907Z] 5379.75 IOPS, 42.03 MiB/s [2024-11-02T10:50:19.849Z] 5386.80 IOPS, 42.08 MiB/s [2024-11-02T10:50:20.785Z] 5391.67 IOPS, 42.12 MiB/s [2024-11-02T10:50:21.718Z] 5403.71 IOPS, 42.22 MiB/s [2024-11-02T10:50:22.651Z] 5404.00 IOPS, 42.22 MiB/s [2024-11-02T10:50:24.025Z] 5405.33 IOPS, 42.23 MiB/s [2024-11-02T10:50:24.025Z] 5405.60 IOPS, 42.23 MiB/s 00:38:23.623 Latency(us) 00:38:23.623 [2024-11-02T10:50:24.025Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:23.623 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:38:23.623 Verification LBA range: start 0x0 length 0x1000 00:38:23.623 Nvme1n1 : 10.02 5408.36 42.25 0.00 0.00 23602.42 2985.53 31263.10 00:38:23.623 [2024-11-02T10:50:24.025Z] =================================================================================================================== 00:38:23.623 [2024-11-02T10:50:24.025Z] Total : 5408.36 42.25 0.00 0.00 23602.42 2985.53 31263.10 00:38:23.623 11:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=4019232 00:38:23.623 11:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:38:23.623 11:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:23.623 11:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:38:23.623 11:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:38:23.623 11:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:38:23.623 11:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:38:23.623 11:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:23.623 11:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:23.623 { 00:38:23.623 "params": { 00:38:23.623 "name": "Nvme$subsystem", 00:38:23.623 "trtype": "$TEST_TRANSPORT", 00:38:23.623 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:23.623 "adrfam": "ipv4", 00:38:23.623 "trsvcid": "$NVMF_PORT", 00:38:23.623 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:23.624 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:23.624 "hdgst": ${hdgst:-false}, 00:38:23.624 "ddgst": ${ddgst:-false} 00:38:23.624 }, 00:38:23.624 "method": "bdev_nvme_attach_controller" 00:38:23.624 } 00:38:23.624 EOF 00:38:23.624 )") 00:38:23.624 11:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:38:23.624 
[2024-11-02 11:50:23.874108] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.624 [2024-11-02 11:50:23.874152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.624 11:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:38:23.624 11:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:38:23.624 11:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:23.624 "params": { 00:38:23.624 "name": "Nvme1", 00:38:23.624 "trtype": "tcp", 00:38:23.624 "traddr": "10.0.0.2", 00:38:23.624 "adrfam": "ipv4", 00:38:23.624 "trsvcid": "4420", 00:38:23.624 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:23.624 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:23.624 "hdgst": false, 00:38:23.624 "ddgst": false 00:38:23.624 }, 00:38:23.624 "method": "bdev_nvme_attach_controller" 00:38:23.624 }' 00:38:23.624 [2024-11-02 11:50:23.882040] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.624 [2024-11-02 11:50:23.882066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.624 [2024-11-02 11:50:23.890039] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.624 [2024-11-02 11:50:23.890072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.624 [2024-11-02 11:50:23.898038] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.624 [2024-11-02 11:50:23.898062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.624 [2024-11-02 11:50:23.906039] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.624 [2024-11-02 11:50:23.906063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.624 [2024-11-02 11:50:23.914037] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.624 [2024-11-02 11:50:23.914062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.624 [2024-11-02 11:50:23.917115] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 
00:38:23.624 [2024-11-02 11:50:23.917184] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4019232 ] 00:38:23.624 [2024-11-02 11:50:23.922038] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.624 [2024-11-02 11:50:23.922061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.624 [2024-11-02 11:50:23.930037] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.624 [2024-11-02 11:50:23.930060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.624 [2024-11-02 11:50:23.938037] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.624 [2024-11-02 11:50:23.938060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.624 [2024-11-02 11:50:23.946038] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.624 [2024-11-02 11:50:23.946063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.624 [2024-11-02 11:50:23.954037] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.624 [2024-11-02 11:50:23.954061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.624 [2024-11-02 11:50:23.962037] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.624 [2024-11-02 11:50:23.962060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.624 [2024-11-02 11:50:23.970037] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.624 [2024-11-02 11:50:23.970060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.624 [2024-11-02 11:50:23.978037] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.624 [2024-11-02 11:50:23.978060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.624 [2024-11-02 11:50:23.986037] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.624 [2024-11-02 11:50:23.986061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.624 [2024-11-02 11:50:23.994001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:23.624 [2024-11-02 11:50:23.994037] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.624 [2024-11-02 11:50:23.994059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.624 [2024-11-02 11:50:24.002075] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.624 [2024-11-02 11:50:24.002111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.624 [2024-11-02 11:50:24.010069] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.624 [2024-11-02 11:50:24.010109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.624 [2024-11-02 11:50:24.018038] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.624 [2024-11-02 11:50:24.018062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:38:23.882 [2024-11-02 11:50:24.026042] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.882 [2024-11-02 11:50:24.026076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.882 [2024-11-02 11:50:24.034038] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.882 [2024-11-02 11:50:24.034062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.882 [2024-11-02 11:50:24.042038] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.882 [2024-11-02 11:50:24.042061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.882 [2024-11-02 11:50:24.047288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:23.882 [2024-11-02 11:50:24.050040] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.882 [2024-11-02 11:50:24.050065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.882 [2024-11-02 11:50:24.058039] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.882 [2024-11-02 11:50:24.058063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.882 [2024-11-02 11:50:24.066064] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.882 [2024-11-02 11:50:24.066099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.882 [2024-11-02 11:50:24.074067] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.882 [2024-11-02 11:50:24.074104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.882 [2024-11-02 11:50:24.082076] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.882 [2024-11-02 11:50:24.082116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.882 [2024-11-02 11:50:24.090068] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.882 [2024-11-02 11:50:24.090118] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.883 [2024-11-02 11:50:24.098080] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.883 [2024-11-02 11:50:24.098119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.883 [2024-11-02 11:50:24.106068] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.883 [2024-11-02 11:50:24.106106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.883 [2024-11-02 11:50:24.114059] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.883 [2024-11-02 11:50:24.114094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.883 [2024-11-02 11:50:24.122049] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.883 [2024-11-02 11:50:24.122079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.883 [2024-11-02 11:50:24.130069] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.883 [2024-11-02 11:50:24.130109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.883 [2024-11-02 
11:50:24.138075] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.883 [2024-11-02 11:50:24.138115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.883 [2024-11-02 11:50:24.146038] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.883 [2024-11-02 11:50:24.146062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.883 [2024-11-02 11:50:24.154039] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.883 [2024-11-02 11:50:24.154064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.883 [2024-11-02 11:50:24.162047] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.883 [2024-11-02 11:50:24.162077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.883 [2024-11-02 11:50:24.170045] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.883 [2024-11-02 11:50:24.170081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.883 [2024-11-02 11:50:24.178044] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.883 [2024-11-02 11:50:24.178070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.883 [2024-11-02 11:50:24.186044] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.883 [2024-11-02 11:50:24.186071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.883 [2024-11-02 11:50:24.194044] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.883 [2024-11-02 11:50:24.194072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.883 [2024-11-02 11:50:24.202039] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.883 [2024-11-02 11:50:24.202065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.883 [2024-11-02 11:50:24.210038] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.883 [2024-11-02 11:50:24.210062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.883 [2024-11-02 11:50:24.218038] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.883 [2024-11-02 11:50:24.218062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.883 [2024-11-02 11:50:24.226038] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.883 [2024-11-02 11:50:24.226062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.883 [2024-11-02 11:50:24.234037] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.883 [2024-11-02 11:50:24.234060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.883 [2024-11-02 11:50:24.242044] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.883 [2024-11-02 11:50:24.242070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.883 [2024-11-02 11:50:24.250038] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.883 [2024-11-02 11:50:24.250062] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.883 [2024-11-02 11:50:24.258040] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.883 [2024-11-02 11:50:24.258066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.883 [2024-11-02 11:50:24.266038] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.883 [2024-11-02 11:50:24.266062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.883 [2024-11-02 11:50:24.274038] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.883 [2024-11-02 11:50:24.274062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:23.883 [2024-11-02 11:50:24.282042] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:23.883 [2024-11-02 11:50:24.282068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.142 [2024-11-02 11:50:24.290048] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.142 [2024-11-02 11:50:24.290078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.142 [2024-11-02 11:50:24.298040] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.142 [2024-11-02 11:50:24.298065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.142 [2024-11-02 11:50:24.306039] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.142 [2024-11-02 11:50:24.306063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.142 [2024-11-02 11:50:24.314037] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.142 [2024-11-02 11:50:24.314061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.142 [2024-11-02 11:50:24.322038] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.142 [2024-11-02 11:50:24.322072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.142 [2024-11-02 11:50:24.330040] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.142 [2024-11-02 11:50:24.330066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.142 [2024-11-02 11:50:24.338047] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.142 [2024-11-02 11:50:24.338076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.142 [2024-11-02 11:50:24.346041] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.142 [2024-11-02 11:50:24.346068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.142 Running I/O for 5 seconds... 
00:38:24.142 [2024-11-02 11:50:24.363170] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.142 [2024-11-02 11:50:24.363203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.142 [2024-11-02 11:50:24.375191] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.142 [2024-11-02 11:50:24.375222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.142 [2024-11-02 11:50:24.392743] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.142 [2024-11-02 11:50:24.392774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.142 [2024-11-02 11:50:24.404232] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.142 [2024-11-02 11:50:24.404288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.142 [2024-11-02 11:50:24.419882] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.142 [2024-11-02 11:50:24.419911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.142 [2024-11-02 11:50:24.434560] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.142 [2024-11-02 11:50:24.434586] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.142 [2024-11-02 11:50:24.445269] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.142 [2024-11-02 11:50:24.445299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.142 [2024-11-02 11:50:24.458500] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.142 [2024-11-02 11:50:24.458525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.142 [2024-11-02 11:50:24.469473] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.142 [2024-11-02 11:50:24.469500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.142 [2024-11-02 11:50:24.482780] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.142 [2024-11-02 11:50:24.482809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.142 [2024-11-02 11:50:24.500138] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.142 [2024-11-02 11:50:24.500168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.143 [2024-11-02 11:50:24.511583] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.143 [2024-11-02 11:50:24.511627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.143 [2024-11-02 11:50:24.528103] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.143 [2024-11-02 11:50:24.528133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.143 [2024-11-02 11:50:24.543494] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.143 [2024-11-02 11:50:24.543522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.402 [2024-11-02 11:50:24.553857] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.402 
[2024-11-02 11:50:24.553888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.402 [2024-11-02 11:50:24.566997] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.402 [2024-11-02 11:50:24.567036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.402 [2024-11-02 11:50:24.578922] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.402 [2024-11-02 11:50:24.578953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.402 [2024-11-02 11:50:24.590710] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.402 [2024-11-02 11:50:24.590740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.402 [2024-11-02 11:50:24.602744] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.402 [2024-11-02 11:50:24.602774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.402 [2024-11-02 11:50:24.614712] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.402 [2024-11-02 11:50:24.614741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.402 [2024-11-02 11:50:24.626566] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.402 [2024-11-02 11:50:24.626609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.402 [2024-11-02 11:50:24.637646] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.402 [2024-11-02 11:50:24.637676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.402 [2024-11-02 11:50:24.650990] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.402 [2024-11-02 11:50:24.651021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.402 [2024-11-02 11:50:24.662968] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.402 [2024-11-02 11:50:24.662997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.402 [2024-11-02 11:50:24.674980] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.402 [2024-11-02 11:50:24.675010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.402 [2024-11-02 11:50:24.691639] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.402 [2024-11-02 11:50:24.691669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.402 [2024-11-02 11:50:24.702365] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.402 [2024-11-02 11:50:24.702392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.402 [2024-11-02 11:50:24.720769] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.402 [2024-11-02 11:50:24.720799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.402 [2024-11-02 11:50:24.734046] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.402 [2024-11-02 11:50:24.734076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.402 [2024-11-02 11:50:24.744759] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.402 [2024-11-02 11:50:24.744789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.402 [2024-11-02 11:50:24.757598] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.402 [2024-11-02 11:50:24.757627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.402 [2024-11-02 11:50:24.769519] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.402 [2024-11-02 11:50:24.769546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.402 [2024-11-02 11:50:24.781718] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.402 [2024-11-02 11:50:24.781749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.402 [2024-11-02 11:50:24.793703] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.402 [2024-11-02 11:50:24.793734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.661 [2024-11-02 11:50:24.806008] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.661 [2024-11-02 11:50:24.806039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.661 [2024-11-02 11:50:24.817826] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.661 [2024-11-02 11:50:24.817856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.661 [2024-11-02 11:50:24.829825] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.661 [2024-11-02 11:50:24.829856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.661 [2024-11-02 11:50:24.841968] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.661 [2024-11-02 11:50:24.841998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.661 [2024-11-02 11:50:24.854423] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.661 [2024-11-02 11:50:24.854450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.661 [2024-11-02 11:50:24.866682] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.661 [2024-11-02 11:50:24.866712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.661 [2024-11-02 11:50:24.877200] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.661 [2024-11-02 11:50:24.877230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.661 [2024-11-02 11:50:24.889672] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.661 [2024-11-02 11:50:24.889702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.661 [2024-11-02 11:50:24.901506] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.661 [2024-11-02 11:50:24.901545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.661 [2024-11-02 11:50:24.913382] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.661 [2024-11-02 11:50:24.913408] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.661 [2024-11-02 11:50:24.925766] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.661 [2024-11-02 11:50:24.925797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.661 [2024-11-02 11:50:24.937312] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.661 [2024-11-02 11:50:24.937339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.661 [2024-11-02 11:50:24.949737] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.661 [2024-11-02 11:50:24.949767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.661 [2024-11-02 11:50:24.961988] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.661 [2024-11-02 11:50:24.962017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.661 [2024-11-02 11:50:24.974316] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.661 [2024-11-02 11:50:24.974342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.661 [2024-11-02 11:50:24.986091] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.661 [2024-11-02 11:50:24.986121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.661 [2024-11-02 11:50:24.997579] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.661 [2024-11-02 11:50:24.997608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.661 [2024-11-02 11:50:25.009529] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.661 [2024-11-02 11:50:25.009571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.661 [2024-11-02 11:50:25.021616] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.661 [2024-11-02 11:50:25.021647] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.661 [2024-11-02 11:50:25.033624] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.661 [2024-11-02 11:50:25.033653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.661 [2024-11-02 11:50:25.046011] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.661 [2024-11-02 11:50:25.046041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.661 [2024-11-02 11:50:25.057999] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.661 [2024-11-02 11:50:25.058029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.920 [2024-11-02 11:50:25.071237] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.920 [2024-11-02 11:50:25.071276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.920 [2024-11-02 11:50:25.083208] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.920 [2024-11-02 11:50:25.083238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.920 [2024-11-02 11:50:25.100505] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.920 [2024-11-02 11:50:25.100532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.920 [2024-11-02 11:50:25.111503] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.920 [2024-11-02 11:50:25.111531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.920 [2024-11-02 11:50:25.124590] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.920 [2024-11-02 11:50:25.124619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.920 [2024-11-02 11:50:25.139501] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.920 [2024-11-02 11:50:25.139543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.920 [2024-11-02 11:50:25.150099] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.920 [2024-11-02 11:50:25.150128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.920 [2024-11-02 11:50:25.162115] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.920 [2024-11-02 11:50:25.162144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.920 [2024-11-02 11:50:25.174090] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.920 [2024-11-02 11:50:25.174121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.920 [2024-11-02 11:50:25.186317] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.920 [2024-11-02 11:50:25.186343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.920 [2024-11-02 11:50:25.198064] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.920 [2024-11-02 11:50:25.198094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.920 [2024-11-02 11:50:25.210005] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.920 [2024-11-02 11:50:25.210034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.920 [2024-11-02 11:50:25.222156] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.920 [2024-11-02 11:50:25.222183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.920 [2024-11-02 11:50:25.233962] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.920 [2024-11-02 11:50:25.233993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.920 [2024-11-02 11:50:25.245810] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.920 [2024-11-02 11:50:25.245840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.920 [2024-11-02 11:50:25.257676] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.920 [2024-11-02 11:50:25.257705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.920 [2024-11-02 11:50:25.269469] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.920 [2024-11-02 11:50:25.269495] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.920 [2024-11-02 11:50:25.281657] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.920 [2024-11-02 11:50:25.281688] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.920 [2024-11-02 11:50:25.294032] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.920 [2024-11-02 11:50:25.294062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.920 [2024-11-02 11:50:25.306464] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.920 [2024-11-02 11:50:25.306490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.920 [2024-11-02 11:50:25.317382] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.920 [2024-11-02 11:50:25.317417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.179 [2024-11-02 11:50:25.330498] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.179 [2024-11-02 11:50:25.330526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.179 [2024-11-02 11:50:25.342582] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.179 [2024-11-02 11:50:25.342611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.179 10506.00 IOPS, 82.08 MiB/s [2024-11-02T10:50:25.581Z] [2024-11-02 11:50:25.354906] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.179 [2024-11-02 11:50:25.354937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.179 [2024-11-02 11:50:25.367130] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.179 [2024-11-02 11:50:25.367159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.179 [2024-11-02 11:50:25.384073] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.179 [2024-11-02 11:50:25.384103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.179 [2024-11-02 11:50:25.395528] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.179 [2024-11-02 11:50:25.395570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.179 [2024-11-02 11:50:25.408879] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.179 [2024-11-02 11:50:25.408909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.179 [2024-11-02 11:50:25.422740] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.179 [2024-11-02 11:50:25.422770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.179 [2024-11-02 11:50:25.433330] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.179 [2024-11-02 11:50:25.433358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.179 [2024-11-02 11:50:25.446602] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.180 [2024-11-02 11:50:25.446628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.180 [2024-11-02 
11:50:25.458146] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.180 [2024-11-02 11:50:25.458176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.180 [2024-11-02 11:50:25.471496] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.180 [2024-11-02 11:50:25.471523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.180 [2024-11-02 11:50:25.483601] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.180 [2024-11-02 11:50:25.483630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.180 [2024-11-02 11:50:25.495964] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.180 [2024-11-02 11:50:25.496003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.180 [2024-11-02 11:50:25.508268] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.180 [2024-11-02 11:50:25.508312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.180 [2024-11-02 11:50:25.524521] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.180 [2024-11-02 11:50:25.524565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.180 [2024-11-02 11:50:25.535316] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.180 [2024-11-02 11:50:25.535343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.180 [2024-11-02 11:50:25.548363] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.180 [2024-11-02 11:50:25.548390] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.180 [2024-11-02 11:50:25.562840] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.180 [2024-11-02 11:50:25.562870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.180 [2024-11-02 11:50:25.573671] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.180 [2024-11-02 11:50:25.573700] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.439 [2024-11-02 11:50:25.587671] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.439 [2024-11-02 11:50:25.587702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.439 [2024-11-02 11:50:25.600207] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.439 [2024-11-02 11:50:25.600237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.439 [2024-11-02 11:50:25.617056] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.439 [2024-11-02 11:50:25.617086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.439 [2024-11-02 11:50:25.628833] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.439 [2024-11-02 11:50:25.628863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.439 [2024-11-02 11:50:25.641514] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.439 [2024-11-02 11:50:25.641553] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.439 [2024-11-02 11:50:25.653447] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.439 [2024-11-02 11:50:25.653472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.439 [2024-11-02 11:50:25.667451] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.439 [2024-11-02 11:50:25.667478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.439 [2024-11-02 11:50:25.677438] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.439 [2024-11-02 11:50:25.677467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.439 [2024-11-02 11:50:25.690386] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.439 [2024-11-02 11:50:25.690413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.439 [2024-11-02 11:50:25.707931] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.439 [2024-11-02 11:50:25.707960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.439 [2024-11-02 11:50:25.721747] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.439 [2024-11-02 11:50:25.721777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.439 [2024-11-02 11:50:25.732215] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.439 [2024-11-02 11:50:25.732245] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.439 [2024-11-02 11:50:25.745230] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.439 [2024-11-02 11:50:25.745278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.439 [2024-11-02 11:50:25.759969] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.439 [2024-11-02 11:50:25.759998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.439 [2024-11-02 11:50:25.776664] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.439 [2024-11-02 11:50:25.776694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.439 [2024-11-02 11:50:25.787486] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.439 [2024-11-02 11:50:25.787513] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.439 [2024-11-02 11:50:25.800990] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.439 [2024-11-02 11:50:25.801020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.439 [2024-11-02 11:50:25.815168] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.439 [2024-11-02 11:50:25.815198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.439 [2024-11-02 11:50:25.825467] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.439 [2024-11-02 11:50:25.825494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.439 [2024-11-02 11:50:25.838791] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.439 [2024-11-02 11:50:25.838822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.698 [2024-11-02 11:50:25.849730] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.698 [2024-11-02 11:50:25.849760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.698 [2024-11-02 11:50:25.862098] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.698 [2024-11-02 11:50:25.862128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.698 [2024-11-02 11:50:25.874151] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.698 [2024-11-02 11:50:25.874180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.698 [2024-11-02 11:50:25.886177] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.698 [2024-11-02 11:50:25.886208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.698 [2024-11-02 11:50:25.897857] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.698 [2024-11-02 11:50:25.897887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.698 [2024-11-02 11:50:25.910249] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.698 [2024-11-02 11:50:25.910287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.698 [2024-11-02 11:50:25.921677] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.698 [2024-11-02 11:50:25.921708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.698 [2024-11-02 11:50:25.933692] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.698 [2024-11-02 11:50:25.933722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.698 [2024-11-02 11:50:25.945454] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.698 [2024-11-02 11:50:25.945481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.698 [2024-11-02 11:50:25.957155] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.698 [2024-11-02 11:50:25.957185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.698 [2024-11-02 11:50:25.968978] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.698 [2024-11-02 11:50:25.969008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.698 [2024-11-02 11:50:25.982762] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.698 [2024-11-02 11:50:25.982801] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.698 [2024-11-02 11:50:25.993406] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.698 [2024-11-02 11:50:25.993433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.698 [2024-11-02 11:50:26.006364] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.698 [2024-11-02 11:50:26.006391] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.698 [2024-11-02 11:50:26.018435] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.698 [2024-11-02 11:50:26.018461] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.698 [2024-11-02 11:50:26.030599] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.698 [2024-11-02 11:50:26.030630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.698 [2024-11-02 11:50:26.048291] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.698 [2024-11-02 11:50:26.048334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.698 [2024-11-02 11:50:26.063071] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.698 [2024-11-02 11:50:26.063100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.698 [2024-11-02 11:50:26.073564] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.698 [2024-11-02 11:50:26.073602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.698 [2024-11-02 11:50:26.086342] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.698 [2024-11-02 11:50:26.086369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.698 [2024-11-02 11:50:26.098453] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.698 [2024-11-02 11:50:26.098480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.986 [2024-11-02 11:50:26.116279] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.986 [2024-11-02 11:50:26.116323] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.986 [2024-11-02 11:50:26.128198] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.986 [2024-11-02 11:50:26.128231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.986 [2024-11-02 11:50:26.143978] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.986 [2024-11-02 11:50:26.144008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.986 [2024-11-02 11:50:26.159529] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.986 [2024-11-02 11:50:26.159573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.986 [2024-11-02 11:50:26.170085] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.986 [2024-11-02 11:50:26.170115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.986 [2024-11-02 11:50:26.183178] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.986 [2024-11-02 11:50:26.183208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.986 [2024-11-02 11:50:26.195179] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.986 [2024-11-02 11:50:26.195209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.986 [2024-11-02 11:50:26.207318] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.986 [2024-11-02 11:50:26.207345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.986 [2024-11-02 11:50:26.219323] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.986 [2024-11-02 11:50:26.219353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.986 [2024-11-02 11:50:26.231243] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.986 [2024-11-02 11:50:26.231309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.986 [2024-11-02 11:50:26.243573] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.986 [2024-11-02 11:50:26.243615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.986 [2024-11-02 11:50:26.255157] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.986 [2024-11-02 11:50:26.255187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.986 [2024-11-02 11:50:26.267080] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.986 [2024-11-02 11:50:26.267110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.986 [2024-11-02 11:50:26.278762] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.986 [2024-11-02 11:50:26.278793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.986 [2024-11-02 11:50:26.290091] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.986 [2024-11-02 11:50:26.290121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.986 [2024-11-02 11:50:26.301864] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.986 [2024-11-02 11:50:26.301895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.986 [2024-11-02 11:50:26.314021] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.986 [2024-11-02 11:50:26.314051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.986 [2024-11-02 11:50:26.326481] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.986 [2024-11-02 11:50:26.326512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.986 [2024-11-02 11:50:26.337068] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.986 [2024-11-02 11:50:26.337097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.986 [2024-11-02 11:50:26.350230] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.986 [2024-11-02 11:50:26.350271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.986 10519.00 IOPS, 82.18 MiB/s [2024-11-02T10:50:26.388Z] [2024-11-02 11:50:26.362283] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.986 [2024-11-02 11:50:26.362310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.269 [2024-11-02 11:50:26.373865] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:38:26.269 [2024-11-02 11:50:26.373892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.269 [2024-11-02 11:50:26.385658] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.269 [2024-11-02 11:50:26.385685] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.269 [2024-11-02 11:50:26.398649] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.269 [2024-11-02 11:50:26.398691] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.269 [2024-11-02 11:50:26.408868] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.269 [2024-11-02 11:50:26.408899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.269 [2024-11-02 11:50:26.421348] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.269 [2024-11-02 11:50:26.421375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.269 [2024-11-02 11:50:26.436138] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.269 [2024-11-02 11:50:26.436167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.269 [2024-11-02 11:50:26.451383] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.269 [2024-11-02 11:50:26.451411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.269 [2024-11-02 11:50:26.461292] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.269 [2024-11-02 11:50:26.461319] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.269 [2024-11-02 11:50:26.473972] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.269 [2024-11-02 11:50:26.473999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.269 [2024-11-02 11:50:26.485786] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.269 [2024-11-02 11:50:26.485816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.269 [2024-11-02 11:50:26.497940] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.269 [2024-11-02 11:50:26.497972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.269 [2024-11-02 11:50:26.509991] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.269 [2024-11-02 11:50:26.510020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.269 [2024-11-02 11:50:26.522265] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.269 [2024-11-02 11:50:26.522295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.269 [2024-11-02 11:50:26.534660] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.269 [2024-11-02 11:50:26.534690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.269 [2024-11-02 11:50:26.546063] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.269 [2024-11-02 11:50:26.546092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.269 [2024-11-02 11:50:26.557786] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.269 [2024-11-02 11:50:26.557822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.269 [2024-11-02 11:50:26.569855] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.269 [2024-11-02 11:50:26.569885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.269 [2024-11-02 11:50:26.581819] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.269 [2024-11-02 11:50:26.581849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.269 [2024-11-02 11:50:26.594118] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.269 [2024-11-02 11:50:26.594149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.269 [2024-11-02 11:50:26.606470] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.269 [2024-11-02 11:50:26.606496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.269 [2024-11-02 11:50:26.617374] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.269 [2024-11-02 11:50:26.617401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.269 [2024-11-02 11:50:26.630603] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.269 [2024-11-02 11:50:26.630647] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.269 [2024-11-02 11:50:26.641605] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.269 [2024-11-02 11:50:26.641650] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.269 [2024-11-02 11:50:26.654955] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.269 [2024-11-02 11:50:26.654985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.269 [2024-11-02 11:50:26.667060] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.269 [2024-11-02 11:50:26.667091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.528 [2024-11-02 11:50:26.679149] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.528 [2024-11-02 11:50:26.679179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.528 [2024-11-02 11:50:26.695638] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.528 [2024-11-02 11:50:26.695668] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.528 [2024-11-02 11:50:26.706387] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.528 [2024-11-02 11:50:26.706413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.528 [2024-11-02 11:50:26.719096] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.528 [2024-11-02 11:50:26.719125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.528 [2024-11-02 11:50:26.730975] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.528 [2024-11-02 11:50:26.731005] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.528 [2024-11-02 11:50:26.742655] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.528 [2024-11-02 11:50:26.742686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.528 [2024-11-02 11:50:26.759340] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.528 [2024-11-02 11:50:26.759367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.528 [2024-11-02 11:50:26.769918] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.528 [2024-11-02 11:50:26.769950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.528 [2024-11-02 11:50:26.782713] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.529 [2024-11-02 11:50:26.782743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.529 [2024-11-02 11:50:26.793837] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.529 [2024-11-02 11:50:26.793867] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.529 [2024-11-02 11:50:26.806946] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.529 [2024-11-02 11:50:26.806976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.529 [2024-11-02 11:50:26.824242] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.529 [2024-11-02 11:50:26.824280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.529 [2024-11-02 11:50:26.835573] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.529 [2024-11-02 11:50:26.835603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.529 [2024-11-02 11:50:26.848285] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.529 [2024-11-02 11:50:26.848329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.529 [2024-11-02 11:50:26.863531] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.529 [2024-11-02 11:50:26.863573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.529 [2024-11-02 11:50:26.874081] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.529 [2024-11-02 11:50:26.874111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.529 [2024-11-02 11:50:26.887146] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.529 [2024-11-02 11:50:26.887175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.529 [2024-11-02 11:50:26.904025] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.529 [2024-11-02 11:50:26.904055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.529 [2024-11-02 11:50:26.917624] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.529 [2024-11-02 11:50:26.917654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.529 [2024-11-02 11:50:26.927863] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.529 [2024-11-02 11:50:26.927904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.788 [2024-11-02 11:50:26.944374] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.788 [2024-11-02 11:50:26.944403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.788 [2024-11-02 11:50:26.957841] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.788 [2024-11-02 11:50:26.957871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.788 [2024-11-02 11:50:26.968388] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.788 [2024-11-02 11:50:26.968414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.788 [2024-11-02 11:50:26.980950] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.788 [2024-11-02 11:50:26.980980] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.788 [2024-11-02 11:50:26.993061] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.788 [2024-11-02 11:50:26.993090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.788 [2024-11-02 11:50:27.006919] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.788 [2024-11-02 11:50:27.006949] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.788 [2024-11-02 11:50:27.017765] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.788 [2024-11-02 11:50:27.017794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.788 [2024-11-02 11:50:27.030726] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.788 [2024-11-02 11:50:27.030755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.788 [2024-11-02 11:50:27.042631] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.788 [2024-11-02 11:50:27.042660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.788 [2024-11-02 11:50:27.053136] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.788 [2024-11-02 11:50:27.053166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.788 [2024-11-02 11:50:27.065966] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.788 [2024-11-02 11:50:27.065996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.788 [2024-11-02 11:50:27.077752] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.788 [2024-11-02 11:50:27.077782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.788 [2024-11-02 11:50:27.089766] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.788 [2024-11-02 11:50:27.089796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.788 [2024-11-02 11:50:27.101664] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.788 [2024-11-02 11:50:27.101694] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.788 [2024-11-02 11:50:27.113432] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.788 [2024-11-02 11:50:27.113458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.788 [2024-11-02 11:50:27.129282] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.788 [2024-11-02 11:50:27.129312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.788 [2024-11-02 11:50:27.140417] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.788 [2024-11-02 11:50:27.140444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.788 [2024-11-02 11:50:27.155240] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.788 [2024-11-02 11:50:27.155281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.788 [2024-11-02 11:50:27.165485] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.788 [2024-11-02 11:50:27.165519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.788 [2024-11-02 11:50:27.178533] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.788 [2024-11-02 11:50:27.178574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.047 [2024-11-02 11:50:27.191046] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.047 [2024-11-02 11:50:27.191076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.047 [2024-11-02 11:50:27.209244] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.047 [2024-11-02 11:50:27.209282] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.047 [2024-11-02 11:50:27.219966] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.047 [2024-11-02 11:50:27.219996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.047 [2024-11-02 11:50:27.232294] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.047 [2024-11-02 11:50:27.232337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.047 [2024-11-02 11:50:27.246577] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.047 [2024-11-02 11:50:27.246602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.047 [2024-11-02 11:50:27.257434] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.047 [2024-11-02 11:50:27.257460] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.047 [2024-11-02 11:50:27.270323] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.047 [2024-11-02 11:50:27.270351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.047 [2024-11-02 11:50:27.281963] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.047 [2024-11-02 11:50:27.281993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.047 [2024-11-02 11:50:27.294199] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.047 [2024-11-02 11:50:27.294229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.047 [2024-11-02 11:50:27.305867] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.047 [2024-11-02 11:50:27.305897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.047 [2024-11-02 11:50:27.317399] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.047 [2024-11-02 11:50:27.317426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.047 [2024-11-02 11:50:27.329308] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.047 [2024-11-02 11:50:27.329335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.047 [2024-11-02 11:50:27.341287] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.047 [2024-11-02 11:50:27.341330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.047 [2024-11-02 11:50:27.353175] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.047 [2024-11-02 11:50:27.353205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.047 10570.00 IOPS, 82.58 MiB/s [2024-11-02T10:50:27.449Z] [2024-11-02 11:50:27.364752] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.047 [2024-11-02 11:50:27.364782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.047 [2024-11-02 11:50:27.378821] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.047 [2024-11-02 11:50:27.378851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.047 [2024-11-02 11:50:27.388183] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.047 [2024-11-02 11:50:27.388213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.047 [2024-11-02 11:50:27.401247] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.047 [2024-11-02 11:50:27.401308] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.047 [2024-11-02 11:50:27.413105] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.047 [2024-11-02 11:50:27.413135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.047 [2024-11-02 11:50:27.425527] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.047 [2024-11-02 11:50:27.425570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.047 [2024-11-02 11:50:27.437148] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.047 [2024-11-02 11:50:27.437178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.306 [2024-11-02 11:50:27.449537] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.306 [2024-11-02 11:50:27.449592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.306 [2024-11-02 11:50:27.461766] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:38:27.306 [2024-11-02 11:50:27.461797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.306 [2024-11-02 11:50:27.473640] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.306 [2024-11-02 11:50:27.473669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.306 [2024-11-02 11:50:27.485666] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.306 [2024-11-02 11:50:27.485696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.306 [2024-11-02 11:50:27.497776] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.306 [2024-11-02 11:50:27.497808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.306 [2024-11-02 11:50:27.509713] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.306 [2024-11-02 11:50:27.509744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.306 [2024-11-02 11:50:27.521665] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.306 [2024-11-02 11:50:27.521696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.306 [2024-11-02 11:50:27.533554] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.306 [2024-11-02 11:50:27.533600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.306 [2024-11-02 11:50:27.545758] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.306 [2024-11-02 11:50:27.545788] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.306 [2024-11-02 11:50:27.557630] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.306 [2024-11-02 11:50:27.557660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.306 [2024-11-02 11:50:27.569341] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.306 [2024-11-02 11:50:27.569368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.306 [2024-11-02 11:50:27.582866] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.306 [2024-11-02 11:50:27.582897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.306 [2024-11-02 11:50:27.592548] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.306 [2024-11-02 11:50:27.592602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.306 [2024-11-02 11:50:27.605717] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.306 [2024-11-02 11:50:27.605748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.306 [2024-11-02 11:50:27.617406] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.306 [2024-11-02 11:50:27.617434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.306 [2024-11-02 11:50:27.630061] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.306 [2024-11-02 11:50:27.630101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.306 [2024-11-02 11:50:27.642160] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.306 [2024-11-02 11:50:27.642190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.306 [2024-11-02 11:50:27.654409] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.306 [2024-11-02 11:50:27.654435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.306 [2024-11-02 11:50:27.666114] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.306 [2024-11-02 11:50:27.666144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.306 [2024-11-02 11:50:27.677876] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.306 [2024-11-02 11:50:27.677905] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.306 [2024-11-02 11:50:27.689890] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.306 [2024-11-02 11:50:27.689920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.306 [2024-11-02 11:50:27.702204] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.306 [2024-11-02 11:50:27.702234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.566 [2024-11-02 11:50:27.714899] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.566 [2024-11-02 11:50:27.714930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.566 [2024-11-02 11:50:27.726880] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.566 [2024-11-02 11:50:27.726909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.566 [2024-11-02 11:50:27.739202] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.566 [2024-11-02 11:50:27.739231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.566 [2024-11-02 11:50:27.755950] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.566 [2024-11-02 11:50:27.755981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.566 [2024-11-02 11:50:27.770429] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.566 [2024-11-02 11:50:27.770455] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.566 [2024-11-02 11:50:27.781162] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.566 [2024-11-02 11:50:27.781192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.566 [2024-11-02 11:50:27.794658] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.566 [2024-11-02 11:50:27.794688] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.566 [2024-11-02 11:50:27.804995] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.566 [2024-11-02 11:50:27.805024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.566 [2024-11-02 11:50:27.818223] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.566 [2024-11-02 11:50:27.818253] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.566 [2024-11-02 11:50:27.830549] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.566 [2024-11-02 11:50:27.830574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.566 [2024-11-02 11:50:27.841799] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.566 [2024-11-02 11:50:27.841830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.566 [2024-11-02 11:50:27.855206] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.566 [2024-11-02 11:50:27.855236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.566 [2024-11-02 11:50:27.866770] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.566 [2024-11-02 11:50:27.866800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.566 [2024-11-02 11:50:27.878666] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.566 [2024-11-02 11:50:27.878695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.566 [2024-11-02 11:50:27.889304] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.566 [2024-11-02 11:50:27.889331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.566 [2024-11-02 11:50:27.902303] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.566 [2024-11-02 11:50:27.902345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.566 [2024-11-02 11:50:27.913909] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.566 [2024-11-02 11:50:27.913938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.566 [2024-11-02 11:50:27.925770] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.566 [2024-11-02 11:50:27.925800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.566 [2024-11-02 11:50:27.937346] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.566 [2024-11-02 11:50:27.937371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.566 [2024-11-02 11:50:27.949556] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.566 [2024-11-02 11:50:27.949580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.566 [2024-11-02 11:50:27.961669] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.566 [2024-11-02 11:50:27.961699] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.825 [2024-11-02 11:50:27.974745] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.825 [2024-11-02 11:50:27.974776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.825 [2024-11-02 11:50:27.986388] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.825 [2024-11-02 11:50:27.986416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.825 [2024-11-02 11:50:27.999358] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.825 [2024-11-02 11:50:27.999383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.825 [2024-11-02 11:50:28.011008] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.825 [2024-11-02 11:50:28.011038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.825 [2024-11-02 11:50:28.023019] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.825 [2024-11-02 11:50:28.023048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.825 [2024-11-02 11:50:28.035561] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.825 [2024-11-02 11:50:28.035602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.825 [2024-11-02 11:50:28.047768] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.825 [2024-11-02 11:50:28.047798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.825 [2024-11-02 11:50:28.059683] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.825 [2024-11-02 11:50:28.059713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.825 [2024-11-02 11:50:28.071758] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.825 [2024-11-02 11:50:28.071788] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.825 [2024-11-02 11:50:28.083672] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.825 [2024-11-02 11:50:28.083702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.825 [2024-11-02 11:50:28.095971] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.825 [2024-11-02 11:50:28.096001] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.825 [2024-11-02 11:50:28.112785] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.825 [2024-11-02 11:50:28.112814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.825 [2024-11-02 11:50:28.124434] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.825 [2024-11-02 11:50:28.124460] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.825 [2024-11-02 11:50:28.136993] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.825 [2024-11-02 11:50:28.137022] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.825 [2024-11-02 11:50:28.150882] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.825 [2024-11-02 11:50:28.150912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.825 [2024-11-02 11:50:28.161346] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.825 [2024-11-02 11:50:28.161372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.825 [2024-11-02 11:50:28.174400] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.825 [2024-11-02 11:50:28.174426] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.825 [2024-11-02 11:50:28.192040] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.825 [2024-11-02 11:50:28.192069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.825 [2024-11-02 11:50:28.203596] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.825 [2024-11-02 11:50:28.203638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.825 [2024-11-02 11:50:28.219677] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.825 [2024-11-02 11:50:28.219707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:28.084 [2024-11-02 11:50:28.234527] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:28.084 [2024-11-02 11:50:28.234554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:28.084 [2024-11-02 11:50:28.244867] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:28.084 [2024-11-02 11:50:28.244897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:28.084 [2024-11-02 11:50:28.258273] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:28.084 [2024-11-02 11:50:28.258316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:28.084 [2024-11-02 11:50:28.269715] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:28.084 [2024-11-02 11:50:28.269745] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:28.084 [2024-11-02 11:50:28.281584] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:28.084 [2024-11-02 11:50:28.281614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:28.084 [2024-11-02 11:50:28.293492] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:28.084 [2024-11-02 11:50:28.293519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:28.084 [2024-11-02 11:50:28.305720] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:28.084 [2024-11-02 11:50:28.305749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:28.084 [2024-11-02 11:50:28.318126] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:28.084 [2024-11-02 11:50:28.318156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:28.084 [2024-11-02 11:50:28.330326] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:28.084 [2024-11-02 11:50:28.330353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:28.084 [2024-11-02 11:50:28.342242] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:28.084 [2024-11-02 11:50:28.342279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:28.084 [2024-11-02 11:50:28.354344] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:28.084 [2024-11-02 11:50:28.354371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:28.084 10569.75 IOPS, 82.58 MiB/s [2024-11-02T10:50:28.486Z] [2024-11-02 
11:50:28.366345] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:28.084 [2024-11-02 11:50:28.366371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:28.084 [2024-11-02 11:50:28.378384] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:28.084 [2024-11-02 11:50:28.378409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:28.084 [2024-11-02 11:50:28.390442] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:28.084 [2024-11-02 11:50:28.390467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:28.084 [2024-11-02 11:50:28.402404] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:28.084 [2024-11-02 11:50:28.402430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:28.084 [2024-11-02 11:50:28.419554] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:28.084 [2024-11-02 11:50:28.419580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:28.084 [2024-11-02 11:50:28.430794] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:28.084 [2024-11-02 11:50:28.430824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:28.085 [2024-11-02 11:50:28.446354] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:28.085 [2024-11-02 11:50:28.446380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:28.085 [2024-11-02 11:50:28.457599] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:28.085 [2024-11-02 11:50:28.457628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:28.085 [2024-11-02 11:50:28.469942] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:28.085 [2024-11-02 11:50:28.469970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:28.085 [2024-11-02 11:50:28.481195] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:28.085 [2024-11-02 11:50:28.481225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:28.343 [2024-11-02 11:50:28.493746] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:28.343 [2024-11-02 11:50:28.493776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:28.343 [2024-11-02 11:50:28.506311] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:28.343 [2024-11-02 11:50:28.506338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:28.343 [2024-11-02 11:50:28.518362] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:28.343 [2024-11-02 11:50:28.518389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:28.343 [2024-11-02 11:50:28.535954] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:28.343 [2024-11-02 11:50:28.535983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:28.343 [2024-11-02 11:50:28.550807] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:28.343 [2024-11-02 11:50:28.550837] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:28.343 [2024-11-02 11:50:28.561476] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:28.343 [2024-11-02 11:50:28.561503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:28.343 [2024-11-02 11:50:28.574697] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:28.343 [2024-11-02 11:50:28.574736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:28.343 [2024-11-02 11:50:28.587637] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:28.343 [2024-11-02 11:50:28.587667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:28.344 [2024-11-02 11:50:28.604374] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:28.344 [2024-11-02 11:50:28.604401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:28.344 [2024-11-02 11:50:28.616078] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:28.344 [2024-11-02 11:50:28.616109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:28.344 [2024-11-02 11:50:28.631378] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:28.344 [2024-11-02 11:50:28.631407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:28.344 [2024-11-02 11:50:28.642171] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:28.344 [2024-11-02 11:50:28.642202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:28.344 [2024-11-02 11:50:28.655232] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:28.344 [2024-11-02 11:50:28.655272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:28.344 [2024-11-02 11:50:28.667737] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:28.344 [2024-11-02 11:50:28.667767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:28.344 [2024-11-02 11:50:28.679912] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:28.344 [2024-11-02 11:50:28.679943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:28.344 [2024-11-02 11:50:28.692361] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:28.344 [2024-11-02 11:50:28.692387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:28.344 [2024-11-02 11:50:28.706927] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:28.344 [2024-11-02 11:50:28.706958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:28.344 [2024-11-02 11:50:28.717894] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:28.344 [2024-11-02 11:50:28.717925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:28.344 [2024-11-02 11:50:28.731174] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:28.344 [2024-11-02 11:50:28.731204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:28.344 [2024-11-02 11:50:28.743664] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:28.344 [2024-11-02 11:50:28.743694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same subsystem.c:2124 "Requested NSID 1 already in use" / nvmf_rpc.c:1517 "Unable to add namespace" pair repeats for every nvmf_subsystem_add_ns retry issued between 11:50:28.758 and 11:50:29.374 while the zcopy I/O job runs ...]
00:38:29.121 10568.60 IOPS, 82.57 MiB/s [2024-11-02T10:50:29.523Z]
00:38:29.121 Latency(us)
00:38:29.121 Device Information                                                   : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:38:29.121 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:38:29.121 Nvme1n1                                                              :       5.01   10569.47      82.57       0.00     0.00   12092.29    3203.98   19418.07
00:38:29.121 ===================================================================================================================
00:38:29.121 Total                                                                :              10569.47      82.57       0.00     0.00   12092.29    3203.98   19418.07
[... the same add-namespace error pair continues from 11:50:29.382 through 11:50:29.510 as the remaining retries drain ...]
00:38:29.122 [2024-11-02 11:50:29.518085]
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:29.122 [2024-11-02 11:50:29.518127] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:29.381 [2024-11-02 11:50:29.526119] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:29.381 [2024-11-02 11:50:29.526165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:29.381 [2024-11-02 11:50:29.534055] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:29.381 [2024-11-02 11:50:29.534085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:29.381 [2024-11-02 11:50:29.542039] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:29.381 [2024-11-02 11:50:29.542064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:29.381 [2024-11-02 11:50:29.550037] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:29.381 [2024-11-02 11:50:29.550061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:29.381 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (4019232) - No such process 00:38:29.381 11:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 4019232 00:38:29.381 11:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:29.381 11:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:29.381 11:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:29.381 11:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:29.381 11:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:38:29.381 11:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:29.381 11:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:29.381 delay0 00:38:29.381 11:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:29.381 11:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:38:29.381 11:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:29.381 11:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:29.381 11:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:29.381 11:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:38:29.382 [2024-11-02 11:50:29.633701] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:38:37.496 
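At this point the zcopy test swaps the namespace backing for a deliberately slow delay bdev and runs the abort example against it. A rough standalone equivalent of the rpc_cmd calls above (rpc_cmd wraps scripts/rpc.py; the default /var/tmp/spdk.sock socket and the existing malloc0 bdev are assumed):

  # Replace the NSID 1 backing with a delay bdev so outstanding I/O stays slow enough to abort.
  ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  ./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # 5 seconds of 50/50 randrw at queue depth 64 against the delayed namespace.
  ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The abort tool's own summary of submitted and successful aborts follows below.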
Initializing NVMe Controllers 00:38:37.496 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:37.496 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:38:37.496 Initialization complete. Launching workers. 00:38:37.496 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 217, failed: 23548 00:38:37.496 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 23615, failed to submit 150 00:38:37.496 success 23548, unsuccessful 67, failed 0 00:38:37.496 11:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:38:37.496 11:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:38:37.496 11:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:37.496 11:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:38:37.496 11:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:37.496 11:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:38:37.496 11:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:37.496 11:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:37.496 rmmod nvme_tcp 00:38:37.496 rmmod nvme_fabrics 00:38:37.496 rmmod nvme_keyring 00:38:37.496 11:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:37.496 11:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:38:37.496 11:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:38:37.496 11:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 4018018 ']' 00:38:37.496 11:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 4018018 00:38:37.496 11:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' -z 4018018 ']' 00:38:37.496 11:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 4018018 00:38:37.496 11:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@957 -- # uname 00:38:37.496 11:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:38:37.496 11:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4018018 00:38:37.496 11:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:38:37.496 11:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:38:37.496 11:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4018018' 00:38:37.496 killing process with pid 4018018 00:38:37.496 11:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 4018018 00:38:37.496 11:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 
4018018 00:38:37.496 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:37.496 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:37.496 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:37.496 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:38:37.496 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:38:37.496 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:37.496 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:38:37.496 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:37.496 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:37.496 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:37.496 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:37.496 11:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:38.872 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:38.872 00:38:38.872 real 0m28.628s 00:38:38.872 user 0m39.976s 00:38:38.872 sys 0m10.831s 00:38:38.872 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:38:38.872 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:38.872 ************************************ 00:38:38.872 END TEST nvmf_zcopy 00:38:38.872 ************************************ 00:38:38.872 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:38:38.872 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:38:38.872 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:38:38.872 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:38.872 ************************************ 00:38:38.872 START TEST nvmf_nmic 00:38:38.872 ************************************ 00:38:38.872 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:38:38.872 * Looking for test storage... 
00:38:38.872 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:38.872 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:38:38.872 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:38:38.872 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:38:39.132 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:38:39.132 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:39.132 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:39.132 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:39.132 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:38:39.132 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:38:39.132 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:38:39.132 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:38:39.132 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:38:39.132 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:38:39.132 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:38:39.132 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:39.132 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:38:39.132 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:38:39.132 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:39.132 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:39.132 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:38:39.132 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:38:39.132 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:39.132 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:38:39.132 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:38:39.132 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:38:39.132 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:38:39.132 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:39.132 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:38:39.132 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:38:39.132 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:39.132 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:39.132 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:38:39.132 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:39.132 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:38:39.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:39.132 --rc genhtml_branch_coverage=1 00:38:39.132 --rc genhtml_function_coverage=1 00:38:39.132 --rc genhtml_legend=1 00:38:39.132 --rc geninfo_all_blocks=1 00:38:39.132 --rc geninfo_unexecuted_blocks=1 00:38:39.132 00:38:39.132 ' 00:38:39.132 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:38:39.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:39.132 --rc genhtml_branch_coverage=1 00:38:39.132 --rc genhtml_function_coverage=1 00:38:39.132 --rc genhtml_legend=1 00:38:39.132 --rc geninfo_all_blocks=1 00:38:39.132 --rc geninfo_unexecuted_blocks=1 00:38:39.132 00:38:39.132 ' 00:38:39.132 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:38:39.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:39.132 --rc genhtml_branch_coverage=1 00:38:39.132 --rc genhtml_function_coverage=1 00:38:39.132 --rc genhtml_legend=1 00:38:39.132 --rc geninfo_all_blocks=1 00:38:39.132 --rc geninfo_unexecuted_blocks=1 00:38:39.132 00:38:39.132 ' 00:38:39.132 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:38:39.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:39.132 --rc genhtml_branch_coverage=1 00:38:39.132 --rc genhtml_function_coverage=1 00:38:39.132 --rc genhtml_legend=1 00:38:39.132 --rc geninfo_all_blocks=1 00:38:39.132 --rc geninfo_unexecuted_blocks=1 00:38:39.132 00:38:39.132 ' 00:38:39.132 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:39.132 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:38:39.132 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:39.132 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:39.132 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:39.132 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:39.132 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:39.132 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:39.132 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:39.132 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:39.132 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:39.132 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:39.132 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:39.132 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:39.132 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:39.132 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:39.132 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:39.132 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:39.132 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:39.132 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:38:39.132 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:39.132 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:39.132 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:39.132 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:39.132 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:39.132 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:39.132 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:38:39.132 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:39.132 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:38:39.132 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:39.132 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:39.132 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:39.132 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:39.132 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:39.132 11:50:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:39.132 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:39.132 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:39.133 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:39.133 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:39.133 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:39.133 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:39.133 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:38:39.133 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:39.133 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:39.133 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:39.133 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:39.133 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:39.133 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:39.133 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:39.133 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:39.133 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:39.133 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:39.133 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:38:39.133 11:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:41.035 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:41.035 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:38:41.035 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:41.035 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:41.035 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:41.035 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:41.035 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:41.035 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:38:41.035 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:41.035 11:50:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:38:41.035 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:41.036 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:41.036 11:50:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:41.036 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:41.036 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:41.036 
11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:41.036 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
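Taken together, the per-port plumbing in this stretch of the trace amounts to roughly the following. This is a consolidated sketch assuming the same e810 port names (cvl_0_0, cvl_0_1) and 10.0.0.x addresses used in this run; the harness version also tags the iptables rule with an SPDK_NVMF comment and does more error handling.

  ip netns add cvl_0_0_ns_spdk                  # target-side network namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move one port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator port stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                            # initiator -> target reachability check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1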
00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:41.036 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:41.036 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:38:41.036 00:38:41.036 --- 10.0.0.2 ping statistics --- 00:38:41.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:41.036 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:38:41.036 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:41.295 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:41.295 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.081 ms 00:38:41.295 00:38:41.295 --- 10.0.0.1 ping statistics --- 00:38:41.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:41.295 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:38:41.295 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:41.295 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:38:41.295 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:41.295 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:41.295 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:41.295 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:41.295 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:41.295 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:41.295 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:41.295 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:38:41.295 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:41.295 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:41.295 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:41.295 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=4022726 00:38:41.295 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:38:41.295 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 4022726 00:38:41.295 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 4022726 ']' 00:38:41.295 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:41.295 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:41.295 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:41.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:41.295 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:41.295 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:41.295 [2024-11-02 11:50:41.512608] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:41.295 [2024-11-02 11:50:41.513699] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:38:41.295 [2024-11-02 11:50:41.513756] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:41.295 [2024-11-02 11:50:41.593200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:41.296 [2024-11-02 11:50:41.645121] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:41.296 [2024-11-02 11:50:41.645186] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:41.296 [2024-11-02 11:50:41.645203] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:41.296 [2024-11-02 11:50:41.645216] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:41.296 [2024-11-02 11:50:41.645228] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:41.296 [2024-11-02 11:50:41.646864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:41.296 [2024-11-02 11:50:41.646917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:41.296 [2024-11-02 11:50:41.647032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:41.296 [2024-11-02 11:50:41.647035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:41.554 [2024-11-02 11:50:41.737468] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:41.554 [2024-11-02 11:50:41.737650] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:41.554 [2024-11-02 11:50:41.737956] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
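The target itself is launched inside that namespace in interrupt mode. A minimal standalone equivalent of the nvmfappstart/waitforlisten pair, with paths relative to the SPDK checkout, might look like this; the polling loop is an assumption rather than the harness implementation, which waits on the same default /var/tmp/spdk.sock socket.

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
  nvmfpid=$!
  # Block until the target answers on its RPC socket before sending any configuration.
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done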
00:38:41.554 [2024-11-02 11:50:41.738583] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:41.554 [2024-11-02 11:50:41.738841] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:38:41.554 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:41.554 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:38:41.554 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:41.554 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:41.554 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:41.554 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:41.554 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:41.554 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:41.555 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:41.555 [2024-11-02 11:50:41.791715] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:41.555 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:41.555 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:41.555 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:41.555 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:41.555 Malloc0 00:38:41.555 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:41.555 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:38:41.555 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:41.555 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:41.555 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:41.555 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:41.555 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:41.555 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:41.555 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:41.555 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:41.555 
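The provisioning steps traced above map directly onto plain rpc.py calls (rpc_cmd is the harness wrapper around scripts/rpc.py). A condensed sketch, assuming the default RPC socket and the same sizes and NQN as this run:

  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192            # same transport options the test passes
  $rpc bdev_malloc_create 64 512 -b Malloc0               # 64 MiB malloc bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # NSID auto-assigned
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The test then intentionally tries to expose the same Malloc0 through a second subsystem (cnode2); the "bdev Malloc0 already claimed" failure below is the expected result of test case1.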
11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:41.555 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:41.555 [2024-11-02 11:50:41.855914] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:41.555 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:41.555 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:38:41.555 test case1: single bdev can't be used in multiple subsystems 00:38:41.555 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:38:41.555 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:41.555 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:41.555 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:41.555 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:38:41.555 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:41.555 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:41.555 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:41.555 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:38:41.555 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:38:41.555 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:41.555 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:41.555 [2024-11-02 11:50:41.879637] bdev.c:8192:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:38:41.555 [2024-11-02 11:50:41.879666] subsystem.c:2151:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:38:41.555 [2024-11-02 11:50:41.879680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.555 request: 00:38:41.555 { 00:38:41.555 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:38:41.555 "namespace": { 00:38:41.555 "bdev_name": "Malloc0", 00:38:41.555 "no_auto_visible": false 00:38:41.555 }, 00:38:41.555 "method": "nvmf_subsystem_add_ns", 00:38:41.555 "req_id": 1 00:38:41.555 } 00:38:41.555 Got JSON-RPC error response 00:38:41.555 response: 00:38:41.555 { 00:38:41.555 "code": -32602, 00:38:41.555 "message": "Invalid parameters" 00:38:41.555 } 00:38:41.555 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:38:41.555 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:38:41.555 11:50:41 
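Test case 1 exercises the negative path: Malloc0 is already claimed (exclusive_write) by cnode1, so adding it to cnode2 fails inside spdk_nvmf_subsystem_add_ns_ext and the RPC returns -32602. The shell around it records the failure and, as the next lines show, treats it as the expected outcome; a sketch consistent with the trace (not the verbatim target/nmic.sh):

nmic_status=0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 || nmic_status=1   # must fail: bdev already claimed by cnode1
if [ "$nmic_status" -eq 0 ]; then
    echo "Adding namespace passed - failure expected."    # sharing one bdev across subsystems would be a bug
    exit 1
fi
echo " Adding namespace failed - expected result."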
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:38:41.555 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:38:41.555 Adding namespace failed - expected result. 00:38:41.555 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:38:41.555 test case2: host connect to nvmf target in multiple paths 00:38:41.555 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:38:41.555 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:41.555 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:41.555 [2024-11-02 11:50:41.887746] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:38:41.555 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:41.555 11:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:38:41.813 11:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:38:41.813 11:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:38:41.813 11:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:38:41.813 11:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:38:41.813 11:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:38:41.813 11:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 00:38:44.338 11:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:38:44.338 11:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:38:44.338 11:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:38:44.338 11:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:38:44.338 11:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:38:44.338 11:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:38:44.338 11:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:38:44.338 [global] 00:38:44.338 thread=1 00:38:44.338 invalidate=1 
00:38:44.338 rw=write 00:38:44.338 time_based=1 00:38:44.338 runtime=1 00:38:44.338 ioengine=libaio 00:38:44.338 direct=1 00:38:44.338 bs=4096 00:38:44.338 iodepth=1 00:38:44.338 norandommap=0 00:38:44.338 numjobs=1 00:38:44.338 00:38:44.338 verify_dump=1 00:38:44.338 verify_backlog=512 00:38:44.338 verify_state_save=0 00:38:44.338 do_verify=1 00:38:44.338 verify=crc32c-intel 00:38:44.338 [job0] 00:38:44.338 filename=/dev/nvme0n1 00:38:44.338 Could not set queue depth (nvme0n1) 00:38:44.338 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:38:44.338 fio-3.35 00:38:44.338 Starting 1 thread 00:38:45.273 00:38:45.273 job0: (groupid=0, jobs=1): err= 0: pid=4023108: Sat Nov 2 11:50:45 2024 00:38:45.273 read: IOPS=195, BW=781KiB/s (800kB/s)(804KiB/1029msec) 00:38:45.273 slat (nsec): min=7372, max=34681, avg=10831.14, stdev=6665.20 00:38:45.273 clat (usec): min=276, max=41011, avg=4338.41, stdev=12197.25 00:38:45.273 lat (usec): min=285, max=41036, avg=4349.24, stdev=12202.70 00:38:45.273 clat percentiles (usec): 00:38:45.273 | 1.00th=[ 277], 5.00th=[ 281], 10.00th=[ 281], 20.00th=[ 285], 00:38:45.273 | 30.00th=[ 289], 40.00th=[ 293], 50.00th=[ 293], 60.00th=[ 297], 00:38:45.273 | 70.00th=[ 302], 80.00th=[ 306], 90.00th=[ 429], 95.00th=[41157], 00:38:45.273 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:38:45.273 | 99.99th=[41157] 00:38:45.273 write: IOPS=497, BW=1990KiB/s (2038kB/s)(2048KiB/1029msec); 0 zone resets 00:38:45.273 slat (usec): min=9, max=28556, avg=76.72, stdev=1261.11 00:38:45.273 clat (usec): min=185, max=345, avg=219.03, stdev=18.59 00:38:45.273 lat (usec): min=196, max=28901, avg=295.75, stdev=1266.88 00:38:45.273 clat percentiles (usec): 00:38:45.273 | 1.00th=[ 188], 5.00th=[ 192], 10.00th=[ 198], 20.00th=[ 210], 00:38:45.273 | 30.00th=[ 215], 40.00th=[ 217], 50.00th=[ 219], 60.00th=[ 221], 00:38:45.273 | 70.00th=[ 223], 80.00th=[ 227], 90.00th=[ 235], 95.00th=[ 249], 00:38:45.273 | 99.00th=[ 285], 99.50th=[ 338], 99.90th=[ 347], 99.95th=[ 347], 00:38:45.273 | 99.99th=[ 347] 00:38:45.273 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:38:45.273 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:38:45.273 lat (usec) : 250=68.44%, 500=28.75% 00:38:45.273 lat (msec) : 50=2.81% 00:38:45.273 cpu : usr=1.07%, sys=1.36%, ctx=716, majf=0, minf=1 00:38:45.273 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:45.273 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:45.273 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:45.273 issued rwts: total=201,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:45.273 latency : target=0, window=0, percentile=100.00%, depth=1 00:38:45.273 00:38:45.273 Run status group 0 (all jobs): 00:38:45.273 READ: bw=781KiB/s (800kB/s), 781KiB/s-781KiB/s (800kB/s-800kB/s), io=804KiB (823kB), run=1029-1029msec 00:38:45.273 WRITE: bw=1990KiB/s (2038kB/s), 1990KiB/s-1990KiB/s (2038kB/s-2038kB/s), io=2048KiB (2097kB), run=1029-1029msec 00:38:45.273 00:38:45.273 Disk stats (read/write): 00:38:45.273 nvme0n1: ios=249/512, merge=0/0, ticks=1021/95, in_queue=1116, util=98.70% 00:38:45.273 11:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:38:45.532 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:38:45.532 11:50:45 
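Test case 2 is the positive multipath check: the same host NQN connects to cnode1 over both portals (4420 and the newly added 4421), waitforserial polls lsblk until a device with serial SPDKISFASTANDAWESOME appears, and a short verified 4 KiB write pass runs against it before both controllers are disconnected. The host-side steps reduce to roughly the following (an illustrative sketch; the hostnqn/hostid values are the ones generated earlier in this run, and paths are shortened):

HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55

# two paths to the same subsystem
nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421

# wait for the namespace to surface, then run the verified write job via fio-wrapper
until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -ge 1 ]; do sleep 2; done
scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v

# tear both paths down again
nvme disconnect -n nqn.2016-06.io.spdk:cnode1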
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:38:45.532 11:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:38:45.532 11:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:38:45.532 11:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:38:45.532 11:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:38:45.532 11:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:38:45.532 11:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:38:45.532 11:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:38:45.532 11:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:38:45.532 11:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:45.532 11:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:38:45.532 11:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:45.532 11:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:38:45.532 11:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:45.532 11:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:45.532 rmmod nvme_tcp 00:38:45.532 rmmod nvme_fabrics 00:38:45.532 rmmod nvme_keyring 00:38:45.532 11:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:45.532 11:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:38:45.532 11:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:38:45.532 11:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 4022726 ']' 00:38:45.532 11:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 4022726 00:38:45.532 11:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 4022726 ']' 00:38:45.532 11:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 4022726 00:38:45.532 11:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:38:45.532 11:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:38:45.532 11:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4022726 00:38:45.532 11:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:38:45.532 11:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:38:45.532 11:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@970 -- # 
echo 'killing process with pid 4022726' 00:38:45.532 killing process with pid 4022726 00:38:45.532 11:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@971 -- # kill 4022726 00:38:45.532 11:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@976 -- # wait 4022726 00:38:45.791 11:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:45.791 11:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:45.791 11:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:45.791 11:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:38:45.791 11:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:38:45.791 11:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:38:45.791 11:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:45.791 11:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:45.791 11:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:45.791 11:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:45.791 11:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:45.791 11:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:48.328 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:48.328 00:38:48.328 real 0m8.935s 00:38:48.328 user 0m16.807s 00:38:48.328 sys 0m3.327s 00:38:48.328 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:38:48.328 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:48.328 ************************************ 00:38:48.328 END TEST nvmf_nmic 00:38:48.328 ************************************ 00:38:48.328 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:38:48.328 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:38:48.328 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:38:48.328 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:48.328 ************************************ 00:38:48.328 START TEST nvmf_fio_target 00:38:48.328 ************************************ 00:38:48.328 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:38:48.328 * Looking for test storage... 
00:38:48.328 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:48.328 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:38:48.328 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:38:48.328 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:38:48.328 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:38:48.328 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:48.328 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:48.328 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:48.328 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:38:48.328 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:38:48.328 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:38:48.328 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:38:48.328 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:38:48.328 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:38:48.328 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:38:48.328 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:48.328 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:38:48.328 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:38:48.328 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:48.328 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:48.328 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:38:48.328 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:38:48.328 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:48.329 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:38:48.329 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:38:48.329 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:38:48.329 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:38:48.329 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:48.329 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:38:48.329 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:38:48.329 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:48.329 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:48.329 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:38:48.329 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:48.329 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:38:48.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:48.329 --rc genhtml_branch_coverage=1 00:38:48.329 --rc genhtml_function_coverage=1 00:38:48.329 --rc genhtml_legend=1 00:38:48.329 --rc geninfo_all_blocks=1 00:38:48.329 --rc geninfo_unexecuted_blocks=1 00:38:48.329 00:38:48.329 ' 00:38:48.329 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:38:48.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:48.329 --rc genhtml_branch_coverage=1 00:38:48.329 --rc genhtml_function_coverage=1 00:38:48.329 --rc genhtml_legend=1 00:38:48.329 --rc geninfo_all_blocks=1 00:38:48.329 --rc geninfo_unexecuted_blocks=1 00:38:48.329 00:38:48.329 ' 00:38:48.329 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:38:48.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:48.329 --rc genhtml_branch_coverage=1 00:38:48.329 --rc genhtml_function_coverage=1 00:38:48.329 --rc genhtml_legend=1 00:38:48.329 --rc geninfo_all_blocks=1 00:38:48.329 --rc geninfo_unexecuted_blocks=1 00:38:48.329 00:38:48.329 ' 00:38:48.329 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:38:48.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:48.329 --rc genhtml_branch_coverage=1 00:38:48.329 --rc genhtml_function_coverage=1 00:38:48.329 --rc genhtml_legend=1 00:38:48.329 --rc geninfo_all_blocks=1 00:38:48.329 --rc geninfo_unexecuted_blocks=1 00:38:48.329 
00:38:48.329 ' 00:38:48.329 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:48.329 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:38:48.329 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:48.329 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:48.329 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:48.329 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:48.329 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:48.329 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:48.329 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:48.329 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:48.329 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:48.329 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:48.329 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:48.329 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:48.329 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:48.329 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:48.329 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:48.329 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:48.329 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:48.329 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:38:48.329 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:48.329 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:48.329 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:48.329 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:48.329 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:48.329 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:48.329 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:38:48.329 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:48.329 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:38:48.329 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:48.329 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:48.329 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:48.329 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:48.329 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:38:48.329 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:48.329 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:48.329 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:48.329 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:48.329 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:48.329 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:48.329 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:48.329 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:48.329 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:38:48.329 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:48.329 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:48.329 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:48.329 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:48.329 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:48.329 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:48.329 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:48.330 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:48.330 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:48.330 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:48.330 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:38:48.330 11:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:50.233 11:50:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:50.233 11:50:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:50.233 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:50.233 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:50.233 Found net 
devices under 0000:0a:00.0: cvl_0_0 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:50.233 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:50.233 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:50.234 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:50.234 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:50.234 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:50.234 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:50.234 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:50.234 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:50.234 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:50.234 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:50.234 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:50.234 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:50.234 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:50.234 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:38:50.234 00:38:50.234 --- 10.0.0.2 ping statistics --- 00:38:50.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:50.234 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:38:50.234 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:50.234 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:50.234 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:38:50.234 00:38:50.234 --- 10.0.0.1 ping statistics --- 00:38:50.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:50.234 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:38:50.234 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:50.234 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:38:50.234 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:50.234 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:50.234 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:50.234 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:50.234 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:50.234 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:50.234 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:50.234 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:38:50.234 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:50.234 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:50.234 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:38:50.234 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=4025183 00:38:50.234 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:38:50.234 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 4025183 00:38:50.234 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 4025183 ']' 00:38:50.234 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:50.234 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:50.234 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:50.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
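Before the fio_target test starts its target, nvmftestinit has carved the two e810 ports into a point-to-point topology: cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace and given 10.0.0.2/24 (target side), cvl_0_1 stays in the default namespace with 10.0.0.1/24 (initiator side), an iptables rule admits TCP port 4420, and connectivity is ping-verified in both directions. The target is then launched inside that namespace with a 4-core mask and interrupt mode enabled. Condensed from the trace (illustrative; the real logic lives in test/nvmf/common.sh and paths are shortened):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side (default netns)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

# start the target inside the namespace: shm id 0, all trace groups, interrupt mode, cores 0-3
ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &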
00:38:50.234 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:50.234 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:38:50.234 [2024-11-02 11:50:50.465998] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:50.234 [2024-11-02 11:50:50.467173] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:38:50.234 [2024-11-02 11:50:50.467241] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:50.234 [2024-11-02 11:50:50.547233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:50.234 [2024-11-02 11:50:50.595166] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:50.234 [2024-11-02 11:50:50.595220] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:50.234 [2024-11-02 11:50:50.595264] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:50.234 [2024-11-02 11:50:50.595276] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:50.234 [2024-11-02 11:50:50.595286] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:50.234 [2024-11-02 11:50:50.597011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:50.234 [2024-11-02 11:50:50.597035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:50.234 [2024-11-02 11:50:50.597095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:50.234 [2024-11-02 11:50:50.597098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:50.493 [2024-11-02 11:50:50.682962] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:50.493 [2024-11-02 11:50:50.683177] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:50.493 [2024-11-02 11:50:50.683460] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:38:50.493 [2024-11-02 11:50:50.684024] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:50.493 [2024-11-02 11:50:50.684292] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
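The EAL banner and reactor/poll-group notices above confirm that all four reactors came up and that every nvmf_tgt thread was switched to interrupt mode rather than busy polling. If one wanted to double-check that state out of band, the running target can be queried over its RPC socket; this is an assumed follow-up step, not part of the traced script:

# Assumed verification (not in the trace): list the target's reactors and the threads scheduled on them
scripts/rpc.py framework_get_reactors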
00:38:50.493 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:50.493 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:38:50.493 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:50.493 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:50.493 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:38:50.493 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:50.493 11:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:38:50.751 [2024-11-02 11:50:51.005908] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:50.751 11:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:38:51.010 11:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:38:51.010 11:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:38:51.268 11:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:38:51.268 11:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:38:51.527 11:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:38:51.527 11:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:38:52.094 11:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:38:52.094 11:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:38:52.353 11:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:38:52.611 11:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:38:52.611 11:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:38:52.869 11:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:38:52.869 11:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:38:53.127 11:50:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:38:53.127 11:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:38:53.385 11:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:38:53.643 11:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:38:53.643 11:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:53.901 11:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:38:53.901 11:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:38:54.159 11:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:54.417 [2024-11-02 11:50:54.766004] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:54.417 11:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:38:54.674 11:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:38:55.240 11:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:38:55.240 11:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:38:55.240 11:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:38:55.240 11:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:38:55.240 11:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:38:55.240 11:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:38:55.240 11:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:38:57.137 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:38:57.137 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o 
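The fio_target setup above builds a richer namespace mix than the nmic test: two standalone malloc bdevs, a two-member RAID-0 volume (raid0, 64 KiB strip) and a three-member concat volume (concat0), all attached to cnode1 behind the 10.0.0.2:4420 listener, so the connected host sees four namespaces. Flattened into rpc.py calls, the traced target/fio.sh steps look roughly like this (a readable sketch; the real script captures the auto-generated MallocN names into shell variables and interleaves the listener/namespace calls slightly differently):

for i in 0 1 2 3 4 5 6; do rpc.py bdev_malloc_create 64 512; done                  # Malloc0..Malloc6, 64 MiB each
rpc.py bdev_raid_create -n raid0   -z 64 -r 0      -b 'Malloc2 Malloc3'            # striped volume
rpc.py bdev_raid_create -n concat0 -z 64 -r concat -b 'Malloc4 Malloc5 Malloc6'    # concatenated volume
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
for bdev in Malloc0 Malloc1 raid0 concat0; do
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 $bdev
done
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420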
NAME,SERIAL 00:38:57.137 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:38:57.137 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:38:57.137 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:38:57.137 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # return 0 00:38:57.137 11:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:38:57.137 [global] 00:38:57.137 thread=1 00:38:57.137 invalidate=1 00:38:57.137 rw=write 00:38:57.137 time_based=1 00:38:57.137 runtime=1 00:38:57.137 ioengine=libaio 00:38:57.137 direct=1 00:38:57.137 bs=4096 00:38:57.137 iodepth=1 00:38:57.137 norandommap=0 00:38:57.137 numjobs=1 00:38:57.137 00:38:57.137 verify_dump=1 00:38:57.137 verify_backlog=512 00:38:57.137 verify_state_save=0 00:38:57.137 do_verify=1 00:38:57.137 verify=crc32c-intel 00:38:57.137 [job0] 00:38:57.137 filename=/dev/nvme0n1 00:38:57.137 [job1] 00:38:57.137 filename=/dev/nvme0n2 00:38:57.137 [job2] 00:38:57.137 filename=/dev/nvme0n3 00:38:57.137 [job3] 00:38:57.137 filename=/dev/nvme0n4 00:38:57.395 Could not set queue depth (nvme0n1) 00:38:57.395 Could not set queue depth (nvme0n2) 00:38:57.395 Could not set queue depth (nvme0n3) 00:38:57.395 Could not set queue depth (nvme0n4) 00:38:57.395 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:38:57.395 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:38:57.395 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:38:57.395 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:38:57.395 fio-3.35 00:38:57.395 Starting 4 threads 00:38:58.853 00:38:58.853 job0: (groupid=0, jobs=1): err= 0: pid=4026247: Sat Nov 2 11:50:58 2024 00:38:58.853 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:38:58.853 slat (nsec): min=5818, max=71566, avg=18822.33, stdev=9464.07 00:38:58.853 clat (usec): min=286, max=939, avg=497.47, stdev=108.66 00:38:58.853 lat (usec): min=295, max=970, avg=516.29, stdev=111.33 00:38:58.853 clat percentiles (usec): 00:38:58.853 | 1.00th=[ 306], 5.00th=[ 338], 10.00th=[ 367], 20.00th=[ 412], 00:38:58.853 | 30.00th=[ 437], 40.00th=[ 469], 50.00th=[ 490], 60.00th=[ 510], 00:38:58.853 | 70.00th=[ 529], 80.00th=[ 570], 90.00th=[ 635], 95.00th=[ 693], 00:38:58.853 | 99.00th=[ 865], 99.50th=[ 898], 99.90th=[ 938], 99.95th=[ 938], 00:38:58.853 | 99.99th=[ 938] 00:38:58.853 write: IOPS=1288, BW=5155KiB/s (5279kB/s)(5160KiB/1001msec); 0 zone resets 00:38:58.853 slat (nsec): min=9615, max=87063, avg=23606.25, stdev=10123.64 00:38:58.853 clat (usec): min=188, max=2076, avg=331.10, stdev=101.77 00:38:58.853 lat (usec): min=204, max=2090, avg=354.71, stdev=102.75 00:38:58.853 clat percentiles (usec): 00:38:58.853 | 1.00th=[ 208], 5.00th=[ 235], 10.00th=[ 247], 20.00th=[ 260], 00:38:58.853 | 30.00th=[ 277], 40.00th=[ 293], 50.00th=[ 310], 60.00th=[ 330], 00:38:58.853 | 70.00th=[ 363], 80.00th=[ 392], 90.00th=[ 433], 95.00th=[ 474], 00:38:58.853 | 99.00th=[ 578], 
99.50th=[ 676], 99.90th=[ 1287], 99.95th=[ 2073], 00:38:58.853 | 99.99th=[ 2073] 00:38:58.853 bw ( KiB/s): min= 4710, max= 4710, per=30.66%, avg=4710.00, stdev= 0.00, samples=1 00:38:58.853 iops : min= 1177, max= 1177, avg=1177.00, stdev= 0.00, samples=1 00:38:58.853 lat (usec) : 250=7.00%, 500=71.05%, 750=20.35%, 1000=1.43% 00:38:58.853 lat (msec) : 2=0.13%, 4=0.04% 00:38:58.853 cpu : usr=4.50%, sys=5.80%, ctx=2315, majf=0, minf=1 00:38:58.853 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:58.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:58.853 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:58.853 issued rwts: total=1024,1290,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:58.853 latency : target=0, window=0, percentile=100.00%, depth=1 00:38:58.853 job1: (groupid=0, jobs=1): err= 0: pid=4026248: Sat Nov 2 11:50:58 2024 00:38:58.853 read: IOPS=20, BW=83.9KiB/s (85.9kB/s)(84.0KiB/1001msec) 00:38:58.853 slat (nsec): min=14499, max=39751, avg=21383.19, stdev=6164.00 00:38:58.853 clat (usec): min=40501, max=42013, avg=41709.09, stdev=493.46 00:38:58.853 lat (usec): min=40515, max=42036, avg=41730.48, stdev=495.24 00:38:58.853 clat percentiles (usec): 00:38:58.853 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:38:58.853 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:38:58.853 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:38:58.853 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:38:58.853 | 99.99th=[42206] 00:38:58.853 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:38:58.853 slat (nsec): min=6830, max=45070, avg=14694.64, stdev=5774.21 00:38:58.853 clat (usec): min=188, max=326, avg=223.75, stdev=19.28 00:38:58.853 lat (usec): min=198, max=341, avg=238.44, stdev=19.69 00:38:58.853 clat percentiles (usec): 00:38:58.853 | 1.00th=[ 192], 5.00th=[ 198], 10.00th=[ 202], 20.00th=[ 206], 00:38:58.853 | 30.00th=[ 212], 40.00th=[ 217], 50.00th=[ 223], 60.00th=[ 229], 00:38:58.853 | 70.00th=[ 233], 80.00th=[ 237], 90.00th=[ 247], 95.00th=[ 258], 00:38:58.853 | 99.00th=[ 293], 99.50th=[ 297], 99.90th=[ 326], 99.95th=[ 326], 00:38:58.853 | 99.99th=[ 326] 00:38:58.853 bw ( KiB/s): min= 4087, max= 4087, per=26.61%, avg=4087.00, stdev= 0.00, samples=1 00:38:58.853 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:38:58.853 lat (usec) : 250=88.37%, 500=7.69% 00:38:58.853 lat (msec) : 50=3.94% 00:38:58.853 cpu : usr=0.50%, sys=0.60%, ctx=534, majf=0, minf=1 00:38:58.853 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:58.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:58.853 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:58.853 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:58.853 latency : target=0, window=0, percentile=100.00%, depth=1 00:38:58.853 job2: (groupid=0, jobs=1): err= 0: pid=4026249: Sat Nov 2 11:50:58 2024 00:38:58.853 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:38:58.853 slat (nsec): min=6400, max=50413, avg=14636.46, stdev=7047.84 00:38:58.853 clat (usec): min=298, max=41190, avg=1592.77, stdev=6850.09 00:38:58.853 lat (usec): min=307, max=41221, avg=1607.41, stdev=6851.60 00:38:58.853 clat percentiles (usec): 00:38:58.853 | 1.00th=[ 306], 5.00th=[ 318], 10.00th=[ 326], 20.00th=[ 334], 00:38:58.853 | 30.00th=[ 347], 
40.00th=[ 355], 50.00th=[ 367], 60.00th=[ 383], 00:38:58.853 | 70.00th=[ 412], 80.00th=[ 482], 90.00th=[ 586], 95.00th=[ 742], 00:38:58.853 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:38:58.853 | 99.99th=[41157] 00:38:58.853 write: IOPS=658, BW=2633KiB/s (2697kB/s)(2636KiB/1001msec); 0 zone resets 00:38:58.853 slat (usec): min=7, max=1115, avg=19.94, stdev=43.38 00:38:58.853 clat (usec): min=193, max=450, avg=241.05, stdev=31.26 00:38:58.853 lat (usec): min=206, max=1396, avg=260.99, stdev=55.44 00:38:58.853 clat percentiles (usec): 00:38:58.853 | 1.00th=[ 196], 5.00th=[ 202], 10.00th=[ 206], 20.00th=[ 217], 00:38:58.853 | 30.00th=[ 225], 40.00th=[ 233], 50.00th=[ 239], 60.00th=[ 245], 00:38:58.853 | 70.00th=[ 253], 80.00th=[ 260], 90.00th=[ 273], 95.00th=[ 285], 00:38:58.853 | 99.00th=[ 371], 99.50th=[ 388], 99.90th=[ 453], 99.95th=[ 453], 00:38:58.853 | 99.99th=[ 453] 00:38:58.853 bw ( KiB/s): min= 4087, max= 4087, per=26.61%, avg=4087.00, stdev= 0.00, samples=1 00:38:58.853 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:38:58.853 lat (usec) : 250=38.17%, 500=54.40%, 750=5.47%, 1000=0.68% 00:38:58.853 lat (msec) : 50=1.28% 00:38:58.853 cpu : usr=1.20%, sys=2.30%, ctx=1173, majf=0, minf=1 00:38:58.853 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:58.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:58.853 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:58.854 issued rwts: total=512,659,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:58.854 latency : target=0, window=0, percentile=100.00%, depth=1 00:38:58.854 job3: (groupid=0, jobs=1): err= 0: pid=4026250: Sat Nov 2 11:50:58 2024 00:38:58.854 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:38:58.854 slat (nsec): min=5879, max=69357, avg=22664.26, stdev=10765.37 00:38:58.854 clat (usec): min=292, max=831, avg=462.60, stdev=70.19 00:38:58.854 lat (usec): min=303, max=849, avg=485.26, stdev=72.74 00:38:58.854 clat percentiles (usec): 00:38:58.854 | 1.00th=[ 310], 5.00th=[ 343], 10.00th=[ 367], 20.00th=[ 404], 00:38:58.854 | 30.00th=[ 433], 40.00th=[ 453], 50.00th=[ 469], 60.00th=[ 486], 00:38:58.854 | 70.00th=[ 494], 80.00th=[ 510], 90.00th=[ 545], 95.00th=[ 570], 00:38:58.854 | 99.00th=[ 652], 99.50th=[ 660], 99.90th=[ 791], 99.95th=[ 832], 00:38:58.854 | 99.99th=[ 832] 00:38:58.854 write: IOPS=1381, BW=5526KiB/s (5659kB/s)(5532KiB/1001msec); 0 zone resets 00:38:58.854 slat (nsec): min=8227, max=78313, avg=25598.39, stdev=11310.35 00:38:58.854 clat (usec): min=218, max=2008, avg=327.37, stdev=87.60 00:38:58.854 lat (usec): min=230, max=2027, avg=352.96, stdev=88.54 00:38:58.854 clat percentiles (usec): 00:38:58.854 | 1.00th=[ 235], 5.00th=[ 247], 10.00th=[ 255], 20.00th=[ 277], 00:38:58.854 | 30.00th=[ 293], 40.00th=[ 302], 50.00th=[ 314], 60.00th=[ 330], 00:38:58.854 | 70.00th=[ 351], 80.00th=[ 367], 90.00th=[ 396], 95.00th=[ 429], 00:38:58.854 | 99.00th=[ 570], 99.50th=[ 676], 99.90th=[ 1532], 99.95th=[ 2008], 00:38:58.854 | 99.99th=[ 2008] 00:38:58.854 bw ( KiB/s): min= 5080, max= 5080, per=33.07%, avg=5080.00, stdev= 0.00, samples=1 00:38:58.854 iops : min= 1270, max= 1270, avg=1270.00, stdev= 0.00, samples=1 00:38:58.854 lat (usec) : 250=3.91%, 500=83.84%, 750=11.97%, 1000=0.17% 00:38:58.854 lat (msec) : 2=0.08%, 4=0.04% 00:38:58.854 cpu : usr=3.60%, sys=5.80%, ctx=2408, majf=0, minf=1 00:38:58.854 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:58.854 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:58.854 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:58.854 issued rwts: total=1024,1383,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:58.854 latency : target=0, window=0, percentile=100.00%, depth=1 00:38:58.854 00:38:58.854 Run status group 0 (all jobs): 00:38:58.854 READ: bw=10.1MiB/s (10.6MB/s), 83.9KiB/s-4092KiB/s (85.9kB/s-4190kB/s), io=10.1MiB (10.6MB), run=1001-1001msec 00:38:58.854 WRITE: bw=15.0MiB/s (15.7MB/s), 2046KiB/s-5526KiB/s (2095kB/s-5659kB/s), io=15.0MiB (15.7MB), run=1001-1001msec 00:38:58.854 00:38:58.854 Disk stats (read/write): 00:38:58.854 nvme0n1: ios=966/1024, merge=0/0, ticks=483/314, in_queue=797, util=86.97% 00:38:58.854 nvme0n2: ios=46/512, merge=0/0, ticks=733/109, in_queue=842, util=87.28% 00:38:58.854 nvme0n3: ios=301/512, merge=0/0, ticks=967/114, in_queue=1081, util=98.01% 00:38:58.854 nvme0n4: ios=960/1024, merge=0/0, ticks=433/338, in_queue=771, util=89.57% 00:38:58.854 11:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:38:58.854 [global] 00:38:58.854 thread=1 00:38:58.854 invalidate=1 00:38:58.854 rw=randwrite 00:38:58.854 time_based=1 00:38:58.854 runtime=1 00:38:58.854 ioengine=libaio 00:38:58.854 direct=1 00:38:58.854 bs=4096 00:38:58.854 iodepth=1 00:38:58.854 norandommap=0 00:38:58.854 numjobs=1 00:38:58.854 00:38:58.854 verify_dump=1 00:38:58.854 verify_backlog=512 00:38:58.854 verify_state_save=0 00:38:58.854 do_verify=1 00:38:58.854 verify=crc32c-intel 00:38:58.854 [job0] 00:38:58.854 filename=/dev/nvme0n1 00:38:58.854 [job1] 00:38:58.854 filename=/dev/nvme0n2 00:38:58.854 [job2] 00:38:58.854 filename=/dev/nvme0n3 00:38:58.854 [job3] 00:38:58.854 filename=/dev/nvme0n4 00:38:58.854 Could not set queue depth (nvme0n1) 00:38:58.854 Could not set queue depth (nvme0n2) 00:38:58.854 Could not set queue depth (nvme0n3) 00:38:58.854 Could not set queue depth (nvme0n4) 00:38:58.854 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:38:58.854 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:38:58.854 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:38:58.854 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:38:58.854 fio-3.35 00:38:58.854 Starting 4 threads 00:39:00.229 00:39:00.229 job0: (groupid=0, jobs=1): err= 0: pid=4026482: Sat Nov 2 11:51:00 2024 00:39:00.229 read: IOPS=25, BW=102KiB/s (105kB/s)(104KiB/1019msec) 00:39:00.229 slat (nsec): min=8418, max=42049, avg=18618.81, stdev=9867.77 00:39:00.229 clat (usec): min=444, max=41546, avg=33722.72, stdev=15370.90 00:39:00.229 lat (usec): min=453, max=41563, avg=33741.34, stdev=15375.23 00:39:00.229 clat percentiles (usec): 00:39:00.229 | 1.00th=[ 445], 5.00th=[ 578], 10.00th=[ 586], 20.00th=[40633], 00:39:00.229 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:39:00.229 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:39:00.229 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:39:00.229 | 99.99th=[41681] 00:39:00.229 write: IOPS=502, BW=2010KiB/s (2058kB/s)(2048KiB/1019msec); 0 zone resets 00:39:00.229 slat (nsec): min=8256, max=64163, 
avg=18251.52, stdev=6743.14 00:39:00.229 clat (usec): min=196, max=421, avg=251.86, stdev=33.88 00:39:00.229 lat (usec): min=215, max=454, avg=270.11, stdev=35.91 00:39:00.229 clat percentiles (usec): 00:39:00.229 | 1.00th=[ 210], 5.00th=[ 223], 10.00th=[ 227], 20.00th=[ 233], 00:39:00.229 | 30.00th=[ 237], 40.00th=[ 241], 50.00th=[ 243], 60.00th=[ 247], 00:39:00.229 | 70.00th=[ 253], 80.00th=[ 262], 90.00th=[ 285], 95.00th=[ 314], 00:39:00.229 | 99.00th=[ 404], 99.50th=[ 416], 99.90th=[ 420], 99.95th=[ 420], 00:39:00.229 | 99.99th=[ 420] 00:39:00.229 bw ( KiB/s): min= 4096, max= 4096, per=34.17%, avg=4096.00, stdev= 0.00, samples=1 00:39:00.229 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:00.229 lat (usec) : 250=62.83%, 500=32.53%, 750=0.56% 00:39:00.229 lat (msec) : 20=0.19%, 50=3.90% 00:39:00.229 cpu : usr=0.29%, sys=1.67%, ctx=538, majf=0, minf=1 00:39:00.229 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:00.229 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:00.229 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:00.229 issued rwts: total=26,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:00.229 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:00.229 job1: (groupid=0, jobs=1): err= 0: pid=4026483: Sat Nov 2 11:51:00 2024 00:39:00.229 read: IOPS=513, BW=2056KiB/s (2105kB/s)(2072KiB/1008msec) 00:39:00.229 slat (nsec): min=5890, max=43552, avg=11307.45, stdev=6242.55 00:39:00.229 clat (usec): min=278, max=42026, avg=1411.98, stdev=6401.83 00:39:00.229 lat (usec): min=285, max=42042, avg=1423.29, stdev=6403.81 00:39:00.229 clat percentiles (usec): 00:39:00.229 | 1.00th=[ 306], 5.00th=[ 326], 10.00th=[ 338], 20.00th=[ 347], 00:39:00.229 | 30.00th=[ 359], 40.00th=[ 367], 50.00th=[ 375], 60.00th=[ 388], 00:39:00.229 | 70.00th=[ 400], 80.00th=[ 408], 90.00th=[ 457], 95.00th=[ 537], 00:39:00.230 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:39:00.230 | 99.99th=[42206] 00:39:00.230 write: IOPS=1015, BW=4063KiB/s (4161kB/s)(4096KiB/1008msec); 0 zone resets 00:39:00.230 slat (nsec): min=6674, max=77198, avg=16553.34, stdev=9194.37 00:39:00.230 clat (usec): min=175, max=1214, avg=241.38, stdev=54.16 00:39:00.230 lat (usec): min=197, max=1230, avg=257.93, stdev=55.78 00:39:00.230 clat percentiles (usec): 00:39:00.230 | 1.00th=[ 194], 5.00th=[ 202], 10.00th=[ 204], 20.00th=[ 212], 00:39:00.230 | 30.00th=[ 217], 40.00th=[ 223], 50.00th=[ 229], 60.00th=[ 237], 00:39:00.230 | 70.00th=[ 247], 80.00th=[ 265], 90.00th=[ 285], 95.00th=[ 330], 00:39:00.230 | 99.00th=[ 388], 99.50th=[ 416], 99.90th=[ 873], 99.95th=[ 1221], 00:39:00.230 | 99.99th=[ 1221] 00:39:00.230 bw ( KiB/s): min= 4096, max= 4096, per=34.17%, avg=4096.00, stdev= 0.00, samples=2 00:39:00.230 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:39:00.230 lat (usec) : 250=48.18%, 500=49.55%, 750=1.10%, 1000=0.19% 00:39:00.230 lat (msec) : 2=0.13%, 50=0.84% 00:39:00.230 cpu : usr=1.29%, sys=2.68%, ctx=1543, majf=0, minf=1 00:39:00.230 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:00.230 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:00.230 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:00.230 issued rwts: total=518,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:00.230 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:00.230 job2: (groupid=0, jobs=1): err= 0: 
pid=4026484: Sat Nov 2 11:51:00 2024 00:39:00.230 read: IOPS=506, BW=2025KiB/s (2074kB/s)(2076KiB/1025msec) 00:39:00.230 slat (nsec): min=5851, max=80788, avg=11354.21, stdev=7269.21 00:39:00.230 clat (usec): min=306, max=43119, avg=1328.00, stdev=6220.91 00:39:00.230 lat (usec): min=312, max=43134, avg=1339.35, stdev=6223.08 00:39:00.230 clat percentiles (usec): 00:39:00.230 | 1.00th=[ 314], 5.00th=[ 322], 10.00th=[ 326], 20.00th=[ 334], 00:39:00.230 | 30.00th=[ 343], 40.00th=[ 347], 50.00th=[ 355], 60.00th=[ 359], 00:39:00.230 | 70.00th=[ 367], 80.00th=[ 379], 90.00th=[ 424], 95.00th=[ 478], 00:39:00.230 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:39:00.230 | 99.99th=[43254] 00:39:00.230 write: IOPS=999, BW=3996KiB/s (4092kB/s)(4096KiB/1025msec); 0 zone resets 00:39:00.230 slat (nsec): min=7472, max=82195, avg=20654.18, stdev=10046.26 00:39:00.230 clat (usec): min=194, max=2184, avg=293.95, stdev=92.71 00:39:00.230 lat (usec): min=203, max=2194, avg=314.61, stdev=94.74 00:39:00.230 clat percentiles (usec): 00:39:00.230 | 1.00th=[ 200], 5.00th=[ 215], 10.00th=[ 225], 20.00th=[ 247], 00:39:00.230 | 30.00th=[ 260], 40.00th=[ 273], 50.00th=[ 285], 60.00th=[ 297], 00:39:00.230 | 70.00th=[ 310], 80.00th=[ 330], 90.00th=[ 367], 95.00th=[ 388], 00:39:00.230 | 99.00th=[ 465], 99.50th=[ 537], 99.90th=[ 1631], 99.95th=[ 2180], 00:39:00.230 | 99.99th=[ 2180] 00:39:00.230 bw ( KiB/s): min= 4096, max= 4096, per=34.17%, avg=4096.00, stdev= 0.00, samples=2 00:39:00.230 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:39:00.230 lat (usec) : 250=14.78%, 500=83.21%, 750=0.91%, 1000=0.06% 00:39:00.230 lat (msec) : 2=0.13%, 4=0.06%, 10=0.06%, 50=0.78% 00:39:00.230 cpu : usr=1.37%, sys=3.81%, ctx=1544, majf=0, minf=1 00:39:00.230 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:00.230 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:00.230 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:00.230 issued rwts: total=519,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:00.230 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:00.230 job3: (groupid=0, jobs=1): err= 0: pid=4026485: Sat Nov 2 11:51:00 2024 00:39:00.230 read: IOPS=478, BW=1915KiB/s (1961kB/s)(1940KiB/1013msec) 00:39:00.230 slat (nsec): min=5001, max=67188, avg=15708.02, stdev=7774.79 00:39:00.230 clat (usec): min=309, max=42091, avg=1740.90, stdev=7278.00 00:39:00.230 lat (usec): min=318, max=42108, avg=1756.60, stdev=7278.74 00:39:00.230 clat percentiles (usec): 00:39:00.230 | 1.00th=[ 314], 5.00th=[ 326], 10.00th=[ 334], 20.00th=[ 343], 00:39:00.230 | 30.00th=[ 351], 40.00th=[ 359], 50.00th=[ 363], 60.00th=[ 371], 00:39:00.230 | 70.00th=[ 379], 80.00th=[ 400], 90.00th=[ 478], 95.00th=[ 578], 00:39:00.230 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:39:00.230 | 99.99th=[42206] 00:39:00.230 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:39:00.230 slat (nsec): min=8859, max=60017, avg=22261.33, stdev=8661.85 00:39:00.230 clat (usec): min=207, max=480, avg=280.69, stdev=38.09 00:39:00.230 lat (usec): min=218, max=519, avg=302.95, stdev=40.72 00:39:00.230 clat percentiles (usec): 00:39:00.230 | 1.00th=[ 225], 5.00th=[ 233], 10.00th=[ 241], 20.00th=[ 251], 00:39:00.230 | 30.00th=[ 260], 40.00th=[ 265], 50.00th=[ 273], 60.00th=[ 281], 00:39:00.230 | 70.00th=[ 293], 80.00th=[ 306], 90.00th=[ 330], 95.00th=[ 367], 00:39:00.230 | 99.00th=[ 388], 99.50th=[ 
404], 99.90th=[ 482], 99.95th=[ 482], 00:39:00.230 | 99.99th=[ 482] 00:39:00.230 bw ( KiB/s): min= 4096, max= 4096, per=34.17%, avg=4096.00, stdev= 0.00, samples=1 00:39:00.230 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:00.230 lat (usec) : 250=9.83%, 500=85.76%, 750=2.41%, 1000=0.10% 00:39:00.230 lat (msec) : 2=0.10%, 4=0.10%, 10=0.10%, 50=1.60% 00:39:00.230 cpu : usr=1.48%, sys=2.27%, ctx=998, majf=0, minf=1 00:39:00.230 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:00.230 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:00.230 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:00.230 issued rwts: total=485,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:00.230 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:00.230 00:39:00.230 Run status group 0 (all jobs): 00:39:00.230 READ: bw=6041KiB/s (6186kB/s), 102KiB/s-2056KiB/s (105kB/s-2105kB/s), io=6192KiB (6341kB), run=1008-1025msec 00:39:00.230 WRITE: bw=11.7MiB/s (12.3MB/s), 2010KiB/s-4063KiB/s (2058kB/s-4161kB/s), io=12.0MiB (12.6MB), run=1008-1025msec 00:39:00.230 00:39:00.230 Disk stats (read/write): 00:39:00.230 nvme0n1: ios=35/512, merge=0/0, ticks=727/127, in_queue=854, util=85.97% 00:39:00.230 nvme0n2: ios=554/1024, merge=0/0, ticks=1687/240, in_queue=1927, util=97.97% 00:39:00.230 nvme0n3: ios=541/1024, merge=0/0, ticks=1461/290, in_queue=1751, util=98.44% 00:39:00.230 nvme0n4: ios=539/512, merge=0/0, ticks=1285/139, in_queue=1424, util=98.43% 00:39:00.230 11:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:39:00.230 [global] 00:39:00.230 thread=1 00:39:00.230 invalidate=1 00:39:00.230 rw=write 00:39:00.230 time_based=1 00:39:00.230 runtime=1 00:39:00.230 ioengine=libaio 00:39:00.230 direct=1 00:39:00.230 bs=4096 00:39:00.230 iodepth=128 00:39:00.230 norandommap=0 00:39:00.230 numjobs=1 00:39:00.230 00:39:00.230 verify_dump=1 00:39:00.230 verify_backlog=512 00:39:00.230 verify_state_save=0 00:39:00.230 do_verify=1 00:39:00.230 verify=crc32c-intel 00:39:00.230 [job0] 00:39:00.230 filename=/dev/nvme0n1 00:39:00.230 [job1] 00:39:00.230 filename=/dev/nvme0n2 00:39:00.230 [job2] 00:39:00.230 filename=/dev/nvme0n3 00:39:00.230 [job3] 00:39:00.230 filename=/dev/nvme0n4 00:39:00.230 Could not set queue depth (nvme0n1) 00:39:00.230 Could not set queue depth (nvme0n2) 00:39:00.230 Could not set queue depth (nvme0n3) 00:39:00.230 Could not set queue depth (nvme0n4) 00:39:00.488 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:00.488 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:00.488 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:00.488 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:00.488 fio-3.35 00:39:00.488 Starting 4 threads 00:39:01.863 00:39:01.863 job0: (groupid=0, jobs=1): err= 0: pid=4026757: Sat Nov 2 11:51:01 2024 00:39:01.863 read: IOPS=3809, BW=14.9MiB/s (15.6MB/s)(15.0MiB/1006msec) 00:39:01.863 slat (usec): min=3, max=15703, avg=116.88, stdev=586.29 00:39:01.863 clat (usec): min=1195, max=55146, avg=14492.50, stdev=5928.24 00:39:01.863 lat (usec): min=6589, max=55165, avg=14609.38, stdev=5952.93 
00:39:01.863 clat percentiles (usec): 00:39:01.863 | 1.00th=[ 7439], 5.00th=[ 8455], 10.00th=[ 9372], 20.00th=[10421], 00:39:01.863 | 30.00th=[11207], 40.00th=[11994], 50.00th=[12518], 60.00th=[13566], 00:39:01.863 | 70.00th=[16909], 80.00th=[17695], 90.00th=[21103], 95.00th=[27919], 00:39:01.863 | 99.00th=[32900], 99.50th=[35914], 99.90th=[55313], 99.95th=[55313], 00:39:01.863 | 99.99th=[55313] 00:39:01.863 write: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec); 0 zone resets 00:39:01.863 slat (usec): min=3, max=8425, avg=128.85, stdev=600.46 00:39:01.863 clat (usec): min=4947, max=49649, avg=17520.72, stdev=10280.39 00:39:01.863 lat (usec): min=4955, max=49662, avg=17649.57, stdev=10346.72 00:39:01.863 clat percentiles (usec): 00:39:01.863 | 1.00th=[ 7439], 5.00th=[ 8979], 10.00th=[ 9765], 20.00th=[11469], 00:39:01.863 | 30.00th=[12125], 40.00th=[12387], 50.00th=[12911], 60.00th=[13566], 00:39:01.863 | 70.00th=[14353], 80.00th=[24249], 90.00th=[36963], 95.00th=[43254], 00:39:01.863 | 99.00th=[46400], 99.50th=[46400], 99.90th=[49546], 99.95th=[49546], 00:39:01.863 | 99.99th=[49546] 00:39:01.863 bw ( KiB/s): min=11456, max=21312, per=27.27%, avg=16384.00, stdev=6969.24, samples=2 00:39:01.863 iops : min= 2864, max= 5328, avg=4096.00, stdev=1742.31, samples=2 00:39:01.863 lat (msec) : 2=0.01%, 10=13.48%, 20=67.47%, 50=18.82%, 100=0.21% 00:39:01.863 cpu : usr=3.78%, sys=5.47%, ctx=455, majf=0, minf=1 00:39:01.863 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:39:01.863 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:01.863 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:01.863 issued rwts: total=3832,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:01.863 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:01.863 job1: (groupid=0, jobs=1): err= 0: pid=4026758: Sat Nov 2 11:51:01 2024 00:39:01.863 read: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec) 00:39:01.863 slat (usec): min=2, max=13038, avg=116.66, stdev=686.03 00:39:01.863 clat (usec): min=2484, max=53645, avg=15279.41, stdev=9254.24 00:39:01.863 lat (usec): min=2489, max=53657, avg=15396.06, stdev=9311.88 00:39:01.863 clat percentiles (usec): 00:39:01.863 | 1.00th=[ 3752], 5.00th=[ 9634], 10.00th=[10552], 20.00th=[11338], 00:39:01.863 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12256], 60.00th=[12387], 00:39:01.863 | 70.00th=[12649], 80.00th=[13566], 90.00th=[32900], 95.00th=[41157], 00:39:01.863 | 99.00th=[47449], 99.50th=[50594], 99.90th=[51119], 99.95th=[53740], 00:39:01.863 | 99.99th=[53740] 00:39:01.863 write: IOPS=4368, BW=17.1MiB/s (17.9MB/s)(17.1MiB/1002msec); 0 zone resets 00:39:01.863 slat (usec): min=3, max=19339, avg=110.87, stdev=699.44 00:39:01.863 clat (usec): min=484, max=66273, avg=14432.01, stdev=9162.52 00:39:01.863 lat (usec): min=3157, max=66282, avg=14542.88, stdev=9219.62 00:39:01.863 clat percentiles (usec): 00:39:01.863 | 1.00th=[ 3687], 5.00th=[ 8979], 10.00th=[ 9765], 20.00th=[10945], 00:39:01.863 | 30.00th=[11207], 40.00th=[11338], 50.00th=[11600], 60.00th=[11731], 00:39:01.863 | 70.00th=[12125], 80.00th=[12911], 90.00th=[22938], 95.00th=[38011], 00:39:01.863 | 99.00th=[52691], 99.50th=[62129], 99.90th=[66323], 99.95th=[66323], 00:39:01.863 | 99.99th=[66323] 00:39:01.863 bw ( KiB/s): min=12408, max=21584, per=28.29%, avg=16996.00, stdev=6488.41, samples=2 00:39:01.863 iops : min= 3102, max= 5396, avg=4249.00, stdev=1622.10, samples=2 00:39:01.863 lat (usec) : 500=0.01% 00:39:01.863 lat (msec) : 
4=1.40%, 10=7.46%, 20=77.47%, 50=12.62%, 100=1.04% 00:39:01.863 cpu : usr=4.20%, sys=4.90%, ctx=477, majf=0, minf=1 00:39:01.863 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:39:01.863 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:01.863 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:01.863 issued rwts: total=4096,4377,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:01.863 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:01.863 job2: (groupid=0, jobs=1): err= 0: pid=4026759: Sat Nov 2 11:51:01 2024 00:39:01.863 read: IOPS=3990, BW=15.6MiB/s (16.3MB/s)(15.7MiB/1005msec) 00:39:01.863 slat (usec): min=3, max=16715, avg=121.97, stdev=784.25 00:39:01.863 clat (usec): min=2248, max=35646, avg=15138.02, stdev=4548.81 00:39:01.863 lat (usec): min=5295, max=35661, avg=15259.99, stdev=4596.77 00:39:01.863 clat percentiles (usec): 00:39:01.863 | 1.00th=[ 6783], 5.00th=[ 9634], 10.00th=[11338], 20.00th=[12256], 00:39:01.863 | 30.00th=[12518], 40.00th=[12780], 50.00th=[13435], 60.00th=[15139], 00:39:01.863 | 70.00th=[16188], 80.00th=[19006], 90.00th=[21103], 95.00th=[22938], 00:39:01.863 | 99.00th=[31851], 99.50th=[31851], 99.90th=[33424], 99.95th=[33424], 00:39:01.863 | 99.99th=[35390] 00:39:01.863 write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:39:01.863 slat (usec): min=4, max=12535, avg=115.94, stdev=592.94 00:39:01.863 clat (usec): min=1232, max=52571, avg=16241.88, stdev=6893.79 00:39:01.863 lat (usec): min=1258, max=53557, avg=16357.82, stdev=6946.75 00:39:01.863 clat percentiles (usec): 00:39:01.864 | 1.00th=[ 7635], 5.00th=[10028], 10.00th=[11994], 20.00th=[12780], 00:39:01.864 | 30.00th=[13173], 40.00th=[13435], 50.00th=[13566], 60.00th=[13829], 00:39:01.864 | 70.00th=[15270], 80.00th=[19792], 90.00th=[26084], 95.00th=[28705], 00:39:01.864 | 99.00th=[47449], 99.50th=[48497], 99.90th=[52691], 99.95th=[52691], 00:39:01.864 | 99.99th=[52691] 00:39:01.864 bw ( KiB/s): min=14576, max=18192, per=27.27%, avg=16384.00, stdev=2556.90, samples=2 00:39:01.864 iops : min= 3644, max= 4548, avg=4096.00, stdev=639.22, samples=2 00:39:01.864 lat (msec) : 2=0.01%, 4=0.01%, 10=5.69%, 20=76.99%, 50=17.15% 00:39:01.864 lat (msec) : 100=0.15% 00:39:01.864 cpu : usr=3.98%, sys=5.98%, ctx=563, majf=0, minf=2 00:39:01.864 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:39:01.864 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:01.864 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:01.864 issued rwts: total=4010,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:01.864 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:01.864 job3: (groupid=0, jobs=1): err= 0: pid=4026760: Sat Nov 2 11:51:01 2024 00:39:01.864 read: IOPS=2035, BW=8143KiB/s (8339kB/s)(8192KiB/1006msec) 00:39:01.864 slat (usec): min=3, max=12335, avg=179.52, stdev=1009.52 00:39:01.864 clat (usec): min=12190, max=42985, avg=22014.91, stdev=6490.28 00:39:01.864 lat (usec): min=12199, max=43032, avg=22194.43, stdev=6577.11 00:39:01.864 clat percentiles (usec): 00:39:01.864 | 1.00th=[15664], 5.00th=[16581], 10.00th=[16581], 20.00th=[16712], 00:39:01.864 | 30.00th=[17171], 40.00th=[17695], 50.00th=[18744], 60.00th=[21627], 00:39:01.864 | 70.00th=[23200], 80.00th=[28443], 90.00th=[31589], 95.00th=[35390], 00:39:01.864 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:39:01.864 | 99.99th=[42730] 00:39:01.864 
write: IOPS=2527, BW=9.87MiB/s (10.4MB/s)(9.93MiB/1006msec); 0 zone resets 00:39:01.864 slat (usec): min=4, max=26087, avg=243.22, stdev=1291.51 00:39:01.864 clat (usec): min=459, max=87250, avg=31436.21, stdev=13245.04 00:39:01.864 lat (usec): min=6102, max=87272, avg=31679.43, stdev=13348.76 00:39:01.864 clat percentiles (usec): 00:39:01.864 | 1.00th=[ 6390], 5.00th=[16909], 10.00th=[20055], 20.00th=[21103], 00:39:01.864 | 30.00th=[24249], 40.00th=[24773], 50.00th=[27919], 60.00th=[31327], 00:39:01.864 | 70.00th=[34341], 80.00th=[36963], 90.00th=[53216], 95.00th=[62653], 00:39:01.864 | 99.00th=[68682], 99.50th=[68682], 99.90th=[82314], 99.95th=[83362], 00:39:01.864 | 99.99th=[87557] 00:39:01.864 bw ( KiB/s): min= 8208, max=11128, per=16.09%, avg=9668.00, stdev=2064.75, samples=2 00:39:01.864 iops : min= 2052, max= 2782, avg=2417.00, stdev=516.19, samples=2 00:39:01.864 lat (usec) : 500=0.02% 00:39:01.864 lat (msec) : 10=0.91%, 20=30.12%, 50=62.86%, 100=6.08% 00:39:01.864 cpu : usr=2.49%, sys=3.28%, ctx=317, majf=0, minf=1 00:39:01.864 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:39:01.864 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:01.864 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:01.864 issued rwts: total=2048,2543,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:01.864 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:01.864 00:39:01.864 Run status group 0 (all jobs): 00:39:01.864 READ: bw=54.3MiB/s (56.9MB/s), 8143KiB/s-16.0MiB/s (8339kB/s-16.7MB/s), io=54.6MiB (57.3MB), run=1002-1006msec 00:39:01.864 WRITE: bw=58.7MiB/s (61.5MB/s), 9.87MiB/s-17.1MiB/s (10.4MB/s-17.9MB/s), io=59.0MiB (61.9MB), run=1002-1006msec 00:39:01.864 00:39:01.864 Disk stats (read/write): 00:39:01.864 nvme0n1: ios=3634/3791, merge=0/0, ticks=15163/16932, in_queue=32095, util=86.77% 00:39:01.864 nvme0n2: ios=3286/3584, merge=0/0, ticks=16538/17180, in_queue=33718, util=97.26% 00:39:01.864 nvme0n3: ios=3096/3567, merge=0/0, ticks=27882/29983, in_queue=57865, util=95.30% 00:39:01.864 nvme0n4: ios=1593/1991, merge=0/0, ticks=12064/22909, in_queue=34973, util=97.05% 00:39:01.864 11:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:39:01.864 [global] 00:39:01.864 thread=1 00:39:01.864 invalidate=1 00:39:01.864 rw=randwrite 00:39:01.864 time_based=1 00:39:01.864 runtime=1 00:39:01.864 ioengine=libaio 00:39:01.864 direct=1 00:39:01.864 bs=4096 00:39:01.864 iodepth=128 00:39:01.864 norandommap=0 00:39:01.864 numjobs=1 00:39:01.864 00:39:01.864 verify_dump=1 00:39:01.864 verify_backlog=512 00:39:01.864 verify_state_save=0 00:39:01.864 do_verify=1 00:39:01.864 verify=crc32c-intel 00:39:01.864 [job0] 00:39:01.864 filename=/dev/nvme0n1 00:39:01.864 [job1] 00:39:01.864 filename=/dev/nvme0n2 00:39:01.864 [job2] 00:39:01.864 filename=/dev/nvme0n3 00:39:01.864 [job3] 00:39:01.864 filename=/dev/nvme0n4 00:39:01.864 Could not set queue depth (nvme0n1) 00:39:01.864 Could not set queue depth (nvme0n2) 00:39:01.864 Could not set queue depth (nvme0n3) 00:39:01.864 Could not set queue depth (nvme0n4) 00:39:01.864 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:01.864 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:01.864 job2: (g=0): 
rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:01.864 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:01.864 fio-3.35 00:39:01.864 Starting 4 threads 00:39:03.240 00:39:03.240 job0: (groupid=0, jobs=1): err= 0: pid=4027172: Sat Nov 2 11:51:03 2024 00:39:03.240 read: IOPS=2000, BW=8000KiB/s (8192kB/s)(8360KiB/1045msec) 00:39:03.240 slat (usec): min=3, max=9733, avg=187.90, stdev=998.52 00:39:03.240 clat (usec): min=12991, max=55073, avg=23712.42, stdev=6779.90 00:39:03.241 lat (usec): min=13003, max=55083, avg=23900.33, stdev=6872.47 00:39:03.241 clat percentiles (usec): 00:39:03.241 | 1.00th=[14353], 5.00th=[15139], 10.00th=[15270], 20.00th=[16712], 00:39:03.241 | 30.00th=[21627], 40.00th=[22938], 50.00th=[23200], 60.00th=[24249], 00:39:03.241 | 70.00th=[26084], 80.00th=[27395], 90.00th=[31065], 95.00th=[36439], 00:39:03.241 | 99.00th=[50070], 99.50th=[52167], 99.90th=[53740], 99.95th=[53740], 00:39:03.241 | 99.99th=[55313] 00:39:03.241 write: IOPS=2449, BW=9799KiB/s (10.0MB/s)(10.0MiB/1045msec); 0 zone resets 00:39:03.241 slat (usec): min=3, max=11689, avg=227.08, stdev=955.43 00:39:03.241 clat (usec): min=6358, max=71702, avg=32418.28, stdev=12468.79 00:39:03.241 lat (usec): min=6371, max=71726, avg=32645.36, stdev=12530.38 00:39:03.241 clat percentiles (usec): 00:39:03.241 | 1.00th=[ 6390], 5.00th=[13042], 10.00th=[20055], 20.00th=[22938], 00:39:03.241 | 30.00th=[24249], 40.00th=[27132], 50.00th=[30540], 60.00th=[34341], 00:39:03.241 | 70.00th=[38011], 80.00th=[43254], 90.00th=[44827], 95.00th=[58459], 00:39:03.241 | 99.00th=[66847], 99.50th=[69731], 99.90th=[71828], 99.95th=[71828], 00:39:03.241 | 99.99th=[71828] 00:39:03.241 bw ( KiB/s): min= 9664, max=10128, per=17.03%, avg=9896.00, stdev=328.10, samples=2 00:39:03.241 iops : min= 2416, max= 2532, avg=2474.00, stdev=82.02, samples=2 00:39:03.241 lat (msec) : 10=0.88%, 20=16.39%, 50=77.63%, 100=5.10% 00:39:03.241 cpu : usr=3.45%, sys=5.36%, ctx=297, majf=0, minf=1 00:39:03.241 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:39:03.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:03.241 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:03.241 issued rwts: total=2090,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:03.241 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:03.241 job1: (groupid=0, jobs=1): err= 0: pid=4027173: Sat Nov 2 11:51:03 2024 00:39:03.241 read: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec) 00:39:03.241 slat (usec): min=2, max=13364, avg=95.17, stdev=517.53 00:39:03.241 clat (usec): min=5736, max=63458, avg=12660.36, stdev=3721.42 00:39:03.241 lat (usec): min=5740, max=63465, avg=12755.53, stdev=3734.19 00:39:03.241 clat percentiles (usec): 00:39:03.241 | 1.00th=[ 6718], 5.00th=[ 9110], 10.00th=[ 9634], 20.00th=[10552], 00:39:03.241 | 30.00th=[10945], 40.00th=[11338], 50.00th=[11731], 60.00th=[12256], 00:39:03.241 | 70.00th=[13042], 80.00th=[14484], 90.00th=[17171], 95.00th=[18220], 00:39:03.241 | 99.00th=[23987], 99.50th=[31065], 99.90th=[59507], 99.95th=[60031], 00:39:03.241 | 99.99th=[63701] 00:39:03.241 write: IOPS=5180, BW=20.2MiB/s (21.2MB/s)(20.3MiB/1004msec); 0 zone resets 00:39:03.241 slat (usec): min=3, max=17778, avg=89.09, stdev=562.88 00:39:03.241 clat (usec): min=1341, max=35579, avg=12033.95, stdev=4602.44 00:39:03.241 lat (usec): min=1348, max=35601, avg=12123.05, 
stdev=4614.97 00:39:03.241 clat percentiles (usec): 00:39:03.241 | 1.00th=[ 3294], 5.00th=[ 6718], 10.00th=[ 8586], 20.00th=[ 9896], 00:39:03.241 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11338], 60.00th=[11731], 00:39:03.241 | 70.00th=[12125], 80.00th=[12911], 90.00th=[15795], 95.00th=[19268], 00:39:03.241 | 99.00th=[33162], 99.50th=[34866], 99.90th=[35390], 99.95th=[35390], 00:39:03.241 | 99.99th=[35390] 00:39:03.241 bw ( KiB/s): min=20480, max=20480, per=35.25%, avg=20480.00, stdev= 0.00, samples=2 00:39:03.241 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:39:03.241 lat (msec) : 2=0.18%, 4=0.86%, 10=16.76%, 20=78.06%, 50=4.05% 00:39:03.241 lat (msec) : 100=0.08% 00:39:03.241 cpu : usr=6.58%, sys=8.47%, ctx=518, majf=0, minf=1 00:39:03.241 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:39:03.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:03.241 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:03.241 issued rwts: total=5120,5201,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:03.241 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:03.241 job2: (groupid=0, jobs=1): err= 0: pid=4027174: Sat Nov 2 11:51:03 2024 00:39:03.241 read: IOPS=4059, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1009msec) 00:39:03.241 slat (usec): min=2, max=15531, avg=125.74, stdev=762.74 00:39:03.241 clat (usec): min=3587, max=56036, avg=16071.74, stdev=6458.16 00:39:03.241 lat (usec): min=3599, max=59062, avg=16197.48, stdev=6488.91 00:39:03.241 clat percentiles (usec): 00:39:03.241 | 1.00th=[ 7767], 5.00th=[10028], 10.00th=[11338], 20.00th=[12256], 00:39:03.241 | 30.00th=[12780], 40.00th=[13304], 50.00th=[14091], 60.00th=[14353], 00:39:03.241 | 70.00th=[15664], 80.00th=[18482], 90.00th=[26346], 95.00th=[31327], 00:39:03.241 | 99.00th=[39060], 99.50th=[39060], 99.90th=[42206], 99.95th=[42206], 00:39:03.241 | 99.99th=[55837] 00:39:03.241 write: IOPS=4377, BW=17.1MiB/s (17.9MB/s)(17.3MiB/1009msec); 0 zone resets 00:39:03.241 slat (usec): min=3, max=13058, avg=100.98, stdev=611.27 00:39:03.241 clat (usec): min=1491, max=38891, avg=13993.37, stdev=5446.16 00:39:03.241 lat (usec): min=1512, max=38895, avg=14094.36, stdev=5469.75 00:39:03.241 clat percentiles (usec): 00:39:03.241 | 1.00th=[ 4490], 5.00th=[ 8586], 10.00th=[ 9896], 20.00th=[11207], 00:39:03.241 | 30.00th=[11863], 40.00th=[12649], 50.00th=[13173], 60.00th=[13435], 00:39:03.241 | 70.00th=[13960], 80.00th=[14877], 90.00th=[17957], 95.00th=[22676], 00:39:03.241 | 99.00th=[39060], 99.50th=[39060], 99.90th=[39060], 99.95th=[39060], 00:39:03.241 | 99.99th=[39060] 00:39:03.241 bw ( KiB/s): min=16384, max=17928, per=29.53%, avg=17156.00, stdev=1091.77, samples=2 00:39:03.241 iops : min= 4096, max= 4482, avg=4289.00, stdev=272.94, samples=2 00:39:03.241 lat (msec) : 2=0.06%, 4=0.22%, 10=7.47%, 20=80.72%, 50=11.50% 00:39:03.241 lat (msec) : 100=0.02% 00:39:03.241 cpu : usr=4.56%, sys=8.73%, ctx=438, majf=0, minf=1 00:39:03.241 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:39:03.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:03.241 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:03.241 issued rwts: total=4096,4417,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:03.241 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:03.241 job3: (groupid=0, jobs=1): err= 0: pid=4027175: Sat Nov 2 11:51:03 2024 00:39:03.241 read: IOPS=2544, BW=9.94MiB/s 
(10.4MB/s)(10.0MiB/1006msec) 00:39:03.241 slat (usec): min=3, max=13103, avg=168.55, stdev=914.32 00:39:03.241 clat (usec): min=14306, max=35859, avg=21576.28, stdev=4632.35 00:39:03.241 lat (usec): min=14315, max=35865, avg=21744.83, stdev=4715.02 00:39:03.241 clat percentiles (usec): 00:39:03.241 | 1.00th=[15401], 5.00th=[15926], 10.00th=[16188], 20.00th=[16450], 00:39:03.241 | 30.00th=[17171], 40.00th=[19006], 50.00th=[22152], 60.00th=[23462], 00:39:03.241 | 70.00th=[23987], 80.00th=[25822], 90.00th=[27395], 95.00th=[29492], 00:39:03.241 | 99.00th=[32900], 99.50th=[34341], 99.90th=[35914], 99.95th=[35914], 00:39:03.241 | 99.99th=[35914] 00:39:03.241 write: IOPS=2984, BW=11.7MiB/s (12.2MB/s)(11.7MiB/1006msec); 0 zone resets 00:39:03.241 slat (usec): min=4, max=15665, avg=179.38, stdev=883.55 00:39:03.241 clat (usec): min=4572, max=44424, avg=23856.76, stdev=6556.02 00:39:03.241 lat (usec): min=7927, max=44445, avg=24036.15, stdev=6624.60 00:39:03.241 clat percentiles (usec): 00:39:03.241 | 1.00th=[12911], 5.00th=[15664], 10.00th=[16581], 20.00th=[16909], 00:39:03.241 | 30.00th=[20841], 40.00th=[21890], 50.00th=[23462], 60.00th=[24249], 00:39:03.241 | 70.00th=[26084], 80.00th=[29230], 90.00th=[33424], 95.00th=[36439], 00:39:03.241 | 99.00th=[40633], 99.50th=[40633], 99.90th=[40633], 99.95th=[42730], 00:39:03.241 | 99.99th=[44303] 00:39:03.241 bw ( KiB/s): min=10704, max=12288, per=19.78%, avg=11496.00, stdev=1120.06, samples=2 00:39:03.241 iops : min= 2676, max= 3072, avg=2874.00, stdev=280.01, samples=2 00:39:03.241 lat (msec) : 10=0.31%, 20=34.95%, 50=64.74% 00:39:03.241 cpu : usr=4.28%, sys=5.77%, ctx=273, majf=0, minf=2 00:39:03.241 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:39:03.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:03.241 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:03.241 issued rwts: total=2560,3002,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:03.241 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:03.241 00:39:03.241 Run status group 0 (all jobs): 00:39:03.241 READ: bw=51.8MiB/s (54.3MB/s), 8000KiB/s-19.9MiB/s (8192kB/s-20.9MB/s), io=54.2MiB (56.8MB), run=1004-1045msec 00:39:03.241 WRITE: bw=56.7MiB/s (59.5MB/s), 9799KiB/s-20.2MiB/s (10.0MB/s-21.2MB/s), io=59.3MiB (62.2MB), run=1004-1045msec 00:39:03.241 00:39:03.241 Disk stats (read/write): 00:39:03.241 nvme0n1: ios=2007/2048, merge=0/0, ticks=14859/21372, in_queue=36231, util=97.90% 00:39:03.241 nvme0n2: ios=4389/4608, merge=0/0, ticks=17006/19515, in_queue=36521, util=87.40% 00:39:03.241 nvme0n3: ios=3339/3584, merge=0/0, ticks=16824/14875, in_queue=31699, util=88.92% 00:39:03.241 nvme0n4: ios=2259/2560, merge=0/0, ticks=15978/18098, in_queue=34076, util=90.62% 00:39:03.241 11:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:39:03.241 11:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=4027307 00:39:03.241 11:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:39:03.241 11:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:39:03.241 [global] 00:39:03.241 thread=1 00:39:03.241 invalidate=1 00:39:03.241 rw=read 00:39:03.241 time_based=1 00:39:03.241 runtime=10 00:39:03.241 ioengine=libaio 00:39:03.241 direct=1 00:39:03.241 
bs=4096 00:39:03.241 iodepth=1 00:39:03.241 norandommap=1 00:39:03.241 numjobs=1 00:39:03.241 00:39:03.241 [job0] 00:39:03.241 filename=/dev/nvme0n1 00:39:03.241 [job1] 00:39:03.242 filename=/dev/nvme0n2 00:39:03.242 [job2] 00:39:03.242 filename=/dev/nvme0n3 00:39:03.242 [job3] 00:39:03.242 filename=/dev/nvme0n4 00:39:03.242 Could not set queue depth (nvme0n1) 00:39:03.242 Could not set queue depth (nvme0n2) 00:39:03.242 Could not set queue depth (nvme0n3) 00:39:03.242 Could not set queue depth (nvme0n4) 00:39:03.242 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:03.242 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:03.242 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:03.242 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:03.242 fio-3.35 00:39:03.242 Starting 4 threads 00:39:06.520 11:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:39:06.520 11:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:39:06.520 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=438272, buflen=4096 00:39:06.520 fio: pid=4027407, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:39:06.778 11:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:06.778 11:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:39:06.778 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=37429248, buflen=4096 00:39:06.778 fio: pid=4027406, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:39:07.036 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=36765696, buflen=4096 00:39:07.036 fio: pid=4027402, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:39:07.036 11:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:07.036 11:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:39:07.295 11:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:07.295 11:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:39:07.295 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=434176, buflen=4096 00:39:07.295 fio: pid=4027403, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:39:07.295 00:39:07.295 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=4027402: Sat Nov 2 11:51:07 2024 
00:39:07.295 read: IOPS=2545, BW=9.94MiB/s (10.4MB/s)(35.1MiB/3527msec) 00:39:07.295 slat (usec): min=4, max=17465, avg=20.13, stdev=308.32 00:39:07.295 clat (usec): min=250, max=1032, avg=366.43, stdev=54.55 00:39:07.295 lat (usec): min=258, max=18031, avg=386.56, stdev=317.92 00:39:07.295 clat percentiles (usec): 00:39:07.295 | 1.00th=[ 277], 5.00th=[ 302], 10.00th=[ 310], 20.00th=[ 322], 00:39:07.295 | 30.00th=[ 330], 40.00th=[ 343], 50.00th=[ 363], 60.00th=[ 379], 00:39:07.295 | 70.00th=[ 383], 80.00th=[ 400], 90.00th=[ 433], 95.00th=[ 474], 00:39:07.295 | 99.00th=[ 529], 99.50th=[ 553], 99.90th=[ 750], 99.95th=[ 816], 00:39:07.295 | 99.99th=[ 1037] 00:39:07.295 bw ( KiB/s): min= 9320, max=10936, per=54.34%, avg=10334.67, stdev=568.81, samples=6 00:39:07.295 iops : min= 2330, max= 2734, avg=2583.67, stdev=142.20, samples=6 00:39:07.296 lat (usec) : 500=97.63%, 750=2.27%, 1000=0.08% 00:39:07.296 lat (msec) : 2=0.01% 00:39:07.296 cpu : usr=1.45%, sys=4.00%, ctx=8982, majf=0, minf=2 00:39:07.296 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:07.296 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:07.296 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:07.296 issued rwts: total=8977,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:07.296 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:07.296 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=4027403: Sat Nov 2 11:51:07 2024 00:39:07.296 read: IOPS=27, BW=110KiB/s (113kB/s)(424KiB/3855msec) 00:39:07.296 slat (usec): min=9, max=20768, avg=442.46, stdev=2747.47 00:39:07.296 clat (usec): min=513, max=41530, avg=35698.78, stdev=13754.94 00:39:07.296 lat (usec): min=528, max=61869, avg=36145.11, stdev=14195.40 00:39:07.296 clat percentiles (usec): 00:39:07.296 | 1.00th=[ 537], 5.00th=[ 570], 10.00th=[ 603], 20.00th=[41157], 00:39:07.296 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:39:07.296 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:39:07.296 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:39:07.296 | 99.99th=[41681] 00:39:07.296 bw ( KiB/s): min= 93, max= 136, per=0.58%, avg=111.57, stdev=15.19, samples=7 00:39:07.296 iops : min= 23, max= 34, avg=27.86, stdev= 3.85, samples=7 00:39:07.296 lat (usec) : 750=12.15%, 1000=0.93% 00:39:07.296 lat (msec) : 50=85.98% 00:39:07.296 cpu : usr=0.00%, sys=0.10%, ctx=111, majf=0, minf=2 00:39:07.296 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:07.296 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:07.296 complete : 0=0.9%, 4=99.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:07.296 issued rwts: total=107,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:07.296 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:07.296 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=4027406: Sat Nov 2 11:51:07 2024 00:39:07.296 read: IOPS=2840, BW=11.1MiB/s (11.6MB/s)(35.7MiB/3217msec) 00:39:07.296 slat (nsec): min=4365, max=65234, avg=9063.56, stdev=4592.84 00:39:07.296 clat (usec): min=278, max=1358, avg=338.03, stdev=48.39 00:39:07.296 lat (usec): min=284, max=1364, avg=347.09, stdev=50.12 00:39:07.296 clat percentiles (usec): 00:39:07.296 | 1.00th=[ 289], 5.00th=[ 302], 10.00th=[ 306], 20.00th=[ 310], 00:39:07.296 | 30.00th=[ 314], 40.00th=[ 318], 
50.00th=[ 326], 60.00th=[ 330], 00:39:07.296 | 70.00th=[ 338], 80.00th=[ 347], 90.00th=[ 379], 95.00th=[ 465], 00:39:07.296 | 99.00th=[ 519], 99.50th=[ 537], 99.90th=[ 594], 99.95th=[ 668], 00:39:07.296 | 99.99th=[ 1352] 00:39:07.296 bw ( KiB/s): min= 9192, max=12376, per=59.93%, avg=11397.33, stdev=1141.36, samples=6 00:39:07.296 iops : min= 2298, max= 3094, avg=2849.33, stdev=285.34, samples=6 00:39:07.296 lat (usec) : 500=98.44%, 750=1.51%, 1000=0.03% 00:39:07.296 lat (msec) : 2=0.01% 00:39:07.296 cpu : usr=1.55%, sys=3.67%, ctx=9139, majf=0, minf=1 00:39:07.296 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:07.296 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:07.296 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:07.296 issued rwts: total=9139,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:07.296 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:07.296 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=4027407: Sat Nov 2 11:51:07 2024 00:39:07.296 read: IOPS=36, BW=145KiB/s (148kB/s)(428KiB/2960msec) 00:39:07.296 slat (nsec): min=5390, max=51820, avg=15156.85, stdev=7822.57 00:39:07.296 clat (usec): min=336, max=41996, avg=27379.45, stdev=19264.90 00:39:07.296 lat (usec): min=342, max=42017, avg=27394.62, stdev=19267.33 00:39:07.296 clat percentiles (usec): 00:39:07.296 | 1.00th=[ 343], 5.00th=[ 359], 10.00th=[ 367], 20.00th=[ 469], 00:39:07.296 | 30.00th=[ 619], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:39:07.296 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:39:07.296 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:39:07.296 | 99.99th=[42206] 00:39:07.296 bw ( KiB/s): min= 96, max= 304, per=0.80%, avg=153.60, stdev=85.12, samples=5 00:39:07.296 iops : min= 24, max= 76, avg=38.40, stdev=21.28, samples=5 00:39:07.296 lat (usec) : 500=24.07%, 750=9.26% 00:39:07.296 lat (msec) : 50=65.74% 00:39:07.296 cpu : usr=0.00%, sys=0.10%, ctx=112, majf=0, minf=1 00:39:07.296 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:07.296 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:07.296 complete : 0=0.9%, 4=99.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:07.296 issued rwts: total=108,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:07.296 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:07.296 00:39:07.296 Run status group 0 (all jobs): 00:39:07.296 READ: bw=18.6MiB/s (19.5MB/s), 110KiB/s-11.1MiB/s (113kB/s-11.6MB/s), io=71.6MiB (75.1MB), run=2960-3855msec 00:39:07.296 00:39:07.296 Disk stats (read/write): 00:39:07.296 nvme0n1: ios=8696/0, merge=0/0, ticks=3923/0, in_queue=3923, util=98.80% 00:39:07.296 nvme0n2: ios=107/0, merge=0/0, ticks=3795/0, in_queue=3795, util=95.43% 00:39:07.296 nvme0n3: ios=8849/0, merge=0/0, ticks=2880/0, in_queue=2880, util=96.79% 00:39:07.296 nvme0n4: ios=153/0, merge=0/0, ticks=3504/0, in_queue=3504, util=99.83% 00:39:07.554 11:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:07.554 11:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:39:07.813 11:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:07.813 11:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:39:08.379 11:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:08.379 11:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:39:08.637 11:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:08.637 11:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:39:08.895 11:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:39:08.895 11:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 4027307 00:39:08.895 11:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:39:08.895 11:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:39:08.895 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:39:08.895 11:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:39:08.895 11:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:39:08.895 11:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:39:08.896 11:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:08.896 11:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:39:08.896 11:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:08.896 11:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:39:08.896 11:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:39:08.896 11:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:39:08.896 nvmf hotplug test: fio failed as expected 00:39:08.896 11:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:09.155 11:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:39:09.155 11:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:39:09.155 11:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f 
./local-job2-2-verify.state 00:39:09.155 11:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:39:09.155 11:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:39:09.155 11:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:09.155 11:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:39:09.155 11:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:09.155 11:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:39:09.155 11:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:09.155 11:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:09.155 rmmod nvme_tcp 00:39:09.155 rmmod nvme_fabrics 00:39:09.415 rmmod nvme_keyring 00:39:09.415 11:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:09.415 11:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:39:09.415 11:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:39:09.415 11:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 4025183 ']' 00:39:09.415 11:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 4025183 00:39:09.415 11:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 4025183 ']' 00:39:09.415 11:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 4025183 00:39:09.415 11:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:39:09.415 11:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:39:09.415 11:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4025183 00:39:09.415 11:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:39:09.415 11:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:39:09.415 11:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4025183' 00:39:09.415 killing process with pid 4025183 00:39:09.415 11:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 4025183 00:39:09.415 11:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 4025183 00:39:09.673 11:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:09.673 11:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:09.673 11:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:09.673 11:51:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:39:09.673 11:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:39:09.673 11:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:09.673 11:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:39:09.673 11:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:09.673 11:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:09.673 11:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:09.673 11:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:09.673 11:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:11.580 11:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:11.580 00:39:11.580 real 0m23.700s 00:39:11.580 user 1m7.241s 00:39:11.580 sys 0m10.226s 00:39:11.580 11:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:39:11.580 11:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:11.580 ************************************ 00:39:11.580 END TEST nvmf_fio_target 00:39:11.580 ************************************ 00:39:11.580 11:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:39:11.580 11:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:39:11.580 11:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:39:11.580 11:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:11.580 ************************************ 00:39:11.580 START TEST nvmf_bdevio 00:39:11.580 ************************************ 00:39:11.580 11:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:39:11.839 * Looking for test storage... 
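Before the bdevio suite gets going, it is worth condensing the nvmf_fio_target teardown that just completed above. Stripped of the xtrace prefixes it is a short, symmetric sequence; the sketch below reproduces it under the names used in this run (Malloc3-Malloc6, cnode1, serial SPDKISFASTANDAWESOME) and assumes the target's default /var/tmp/spdk.sock RPC socket.

    # target side: drop the malloc bdevs that backed the fio namespaces
    for bdev in Malloc3 Malloc4 Malloc5 Malloc6; do
        scripts/rpc.py bdev_malloc_delete "$bdev"
    done

    # initiator side: disconnect and wait until the serial disappears from lsblk
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do
        sleep 1
    done

    # target side: remove the subsystem, clean fio state, unload the host NVMe/TCP stack
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    rm -f ./local-job*-verify.state
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics

    # drop only the SPDK-tagged firewall rules, leave everything else in place
    iptables-save | grep -v SPDK_NVMF | iptables-restore

The 'nvmf hotplug test: fio failed as expected' message is the point of that test: the bdevs are deleted underneath a running fio job, so the non-zero fio status (fio_status=4) is treated as a pass and the suite still ends cleanly.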
00:39:11.839 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:11.839 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:39:11.839 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:39:11.839 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:39:11.839 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:39:11.839 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:11.839 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:11.839 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:11.839 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:39:11.839 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:39:11.839 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:39:11.839 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:39:11.839 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:39:11.839 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:39:11.839 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:39:11.839 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:11.839 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:39:11.839 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:39:11.839 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:11.840 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:11.840 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:39:11.840 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:39:11.840 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:11.840 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:39:11.840 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:39:11.840 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:39:11.840 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:39:11.840 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:11.840 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:39:11.840 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:39:11.840 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:11.840 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:11.840 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:39:11.840 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:11.840 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:39:11.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:11.840 --rc genhtml_branch_coverage=1 00:39:11.840 --rc genhtml_function_coverage=1 00:39:11.840 --rc genhtml_legend=1 00:39:11.840 --rc geninfo_all_blocks=1 00:39:11.840 --rc geninfo_unexecuted_blocks=1 00:39:11.840 00:39:11.840 ' 00:39:11.840 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:39:11.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:11.840 --rc genhtml_branch_coverage=1 00:39:11.840 --rc genhtml_function_coverage=1 00:39:11.840 --rc genhtml_legend=1 00:39:11.840 --rc geninfo_all_blocks=1 00:39:11.840 --rc geninfo_unexecuted_blocks=1 00:39:11.840 00:39:11.840 ' 00:39:11.840 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:39:11.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:11.840 --rc genhtml_branch_coverage=1 00:39:11.840 --rc genhtml_function_coverage=1 00:39:11.840 --rc genhtml_legend=1 00:39:11.840 --rc geninfo_all_blocks=1 00:39:11.840 --rc geninfo_unexecuted_blocks=1 00:39:11.840 00:39:11.840 ' 00:39:11.840 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:39:11.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:11.840 --rc genhtml_branch_coverage=1 00:39:11.840 --rc genhtml_function_coverage=1 00:39:11.840 --rc genhtml_legend=1 00:39:11.840 --rc geninfo_all_blocks=1 00:39:11.840 --rc geninfo_unexecuted_blocks=1 00:39:11.840 00:39:11.840 ' 00:39:11.840 11:51:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:11.840 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:39:11.840 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:11.840 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:11.840 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:11.840 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:11.840 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:11.840 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:11.840 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:11.840 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:11.840 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:11.840 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:11.840 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:11.840 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:11.840 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:11.840 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:11.840 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:11.840 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:11.840 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:11.840 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:39:11.840 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:11.840 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:11.840 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:11.840 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:11.840 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:11.840 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:11.840 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:39:11.840 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:11.840 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:39:11.840 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:11.840 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:11.840 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:11.840 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:11.840 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:11.840 11:51:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:11.840 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:11.840 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:11.840 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:11.840 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:11.840 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:11.840 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:11.840 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:39:11.840 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:11.840 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:11.840 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:11.840 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:11.840 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:11.840 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:11.840 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:11.840 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:11.840 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:11.840 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:11.840 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:39:11.840 11:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:13.741 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:13.741 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:39:13.741 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:13.741 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:13.741 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:13.741 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:13.741 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:13.741 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:39:13.741 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:39:13.741 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:39:13.741 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:39:13.741 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:39:13.741 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:39:13.741 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:39:13.741 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:39:13.741 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:13.741 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:13.741 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:13.741 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:13.741 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:13.741 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:13.741 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:13.741 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:13.741 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:13.741 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:13.741 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:13.741 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:13.741 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:13.741 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:13.741 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:13.741 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:13.741 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:13.741 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:13.741 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:13.741 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:13.741 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:13.741 11:51:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:13.741 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:13.741 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:13.741 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:13.741 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:13.741 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:13.741 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:13.741 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:13.741 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:13.741 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:13.741 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:13.741 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:13.741 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:13.742 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:13.742 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:13.742 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:13.742 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:13.742 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:13.742 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:13.742 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:13.742 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:13.742 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:13.742 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:13.742 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:13.742 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:13.742 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:13.742 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:13.742 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:13.742 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:39:13.742 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:13.742 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:13.742 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:13.742 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:13.742 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:13.742 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:13.742 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:13.742 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:13.742 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:39:13.742 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:13.742 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:13.742 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:13.742 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:13.742 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:13.742 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:13.742 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:13.742 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:13.742 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:13.742 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:13.742 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:13.742 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:13.742 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:13.742 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:13.742 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:13.742 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:13.742 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:13.742 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:14.000 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:14.000 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:14.000 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:14.000 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:14.000 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:14.000 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:14.000 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:14.001 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:14.001 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:14.001 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:39:14.001 00:39:14.001 --- 10.0.0.2 ping statistics --- 00:39:14.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:14.001 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:39:14.001 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:14.001 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:14.001 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:39:14.001 00:39:14.001 --- 10.0.0.1 ping statistics --- 00:39:14.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:14.001 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:39:14.001 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:14.001 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:39:14.001 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:14.001 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:14.001 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:14.001 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:14.001 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:14.001 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:14.001 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:14.001 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:39:14.001 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:14.001 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:14.001 11:51:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:14.001 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=4030536 00:39:14.001 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:39:14.001 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 4030536 00:39:14.001 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 4030536 ']' 00:39:14.001 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:14.001 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:39:14.001 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:14.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:14.001 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:39:14.001 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:14.001 [2024-11-02 11:51:14.295804] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:14.001 [2024-11-02 11:51:14.296870] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:39:14.001 [2024-11-02 11:51:14.296931] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:14.001 [2024-11-02 11:51:14.375982] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:14.260 [2024-11-02 11:51:14.425544] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:14.260 [2024-11-02 11:51:14.425609] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:14.260 [2024-11-02 11:51:14.425625] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:14.260 [2024-11-02 11:51:14.425638] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:14.260 [2024-11-02 11:51:14.425650] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:14.260 [2024-11-02 11:51:14.427334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:39:14.260 [2024-11-02 11:51:14.427392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:39:14.260 [2024-11-02 11:51:14.427449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:39:14.260 [2024-11-02 11:51:14.427452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:14.260 [2024-11-02 11:51:14.518790] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
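The stretch of trace above is the standard phy-mode bring-up these suites share: common.sh finds the two E810 ports (0x8086:0x159b), moves the target-facing one into its own network namespace, addresses both ends on 10.0.0.0/24, opens TCP port 4420, proves reachability with a ping in each direction, and only then starts nvmf_tgt inside that namespace in interrupt mode. Flattened into plain commands (interface names cvl_0_0/cvl_0_1 as in this run, paths relative to the spdk repo):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # allow NVMe/TCP traffic in, tagged so teardown can strip the rule again
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment SPDK_NVMF

    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

    # core mask 0x78 = cores 3-6, matching the four reactors reported just above
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x78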
00:39:14.260 [2024-11-02 11:51:14.519016] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:14.260 [2024-11-02 11:51:14.519320] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:39:14.260 [2024-11-02 11:51:14.519947] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:14.260 [2024-11-02 11:51:14.520211] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:39:14.260 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:39:14.260 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:39:14.260 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:14.260 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:14.260 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:14.260 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:14.260 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:14.260 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:14.260 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:14.260 [2024-11-02 11:51:14.576174] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:14.260 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:14.260 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:14.260 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:14.260 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:14.260 Malloc0 00:39:14.260 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:14.260 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:39:14.260 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:14.260 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:14.260 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:14.260 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:14.260 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:14.260 11:51:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:14.260 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:14.260 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:14.260 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:14.260 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:14.260 [2024-11-02 11:51:14.636386] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:14.260 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:14.260 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:39:14.260 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:39:14.260 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:39:14.260 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:39:14.260 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:14.260 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:14.260 { 00:39:14.260 "params": { 00:39:14.260 "name": "Nvme$subsystem", 00:39:14.260 "trtype": "$TEST_TRANSPORT", 00:39:14.260 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:14.260 "adrfam": "ipv4", 00:39:14.260 "trsvcid": "$NVMF_PORT", 00:39:14.260 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:14.260 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:14.260 "hdgst": ${hdgst:-false}, 00:39:14.260 "ddgst": ${ddgst:-false} 00:39:14.260 }, 00:39:14.260 "method": "bdev_nvme_attach_controller" 00:39:14.260 } 00:39:14.260 EOF 00:39:14.260 )") 00:39:14.260 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:39:14.260 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:39:14.260 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:39:14.260 11:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:14.260 "params": { 00:39:14.260 "name": "Nvme1", 00:39:14.260 "trtype": "tcp", 00:39:14.260 "traddr": "10.0.0.2", 00:39:14.260 "adrfam": "ipv4", 00:39:14.260 "trsvcid": "4420", 00:39:14.260 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:14.260 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:14.260 "hdgst": false, 00:39:14.260 "ddgst": false 00:39:14.260 }, 00:39:14.260 "method": "bdev_nvme_attach_controller" 00:39:14.260 }' 00:39:14.520 [2024-11-02 11:51:14.690060] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 
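With the target up, bdevio.sh provisions everything the initiator-side bdevio binary will exercise: a TCP transport, a 64 MiB / 512-byte-block malloc bdev, a subsystem carrying it as a namespace, and a listener on 10.0.0.2:4420. The same provisioning can be done by hand with rpc.py against the target's RPC socket (the default /var/tmp/spdk.sock here); a sketch of the equivalent calls:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The JSON block printed just above is what gen_nvmf_target_json expands to for Nvme1; bdevio reads it on /dev/fd/62 and uses it to issue bdev_nvme_attach_controller before the CUnit suite starts.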
00:39:14.520 [2024-11-02 11:51:14.690140] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4030633 ] 00:39:14.520 [2024-11-02 11:51:14.762604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:14.520 [2024-11-02 11:51:14.814913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:14.520 [2024-11-02 11:51:14.814965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:14.520 [2024-11-02 11:51:14.814967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:14.779 I/O targets: 00:39:14.779 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:39:14.779 00:39:14.779 00:39:14.779 CUnit - A unit testing framework for C - Version 2.1-3 00:39:14.779 http://cunit.sourceforge.net/ 00:39:14.779 00:39:14.779 00:39:14.779 Suite: bdevio tests on: Nvme1n1 00:39:15.037 Test: blockdev write read block ...passed 00:39:15.037 Test: blockdev write zeroes read block ...passed 00:39:15.037 Test: blockdev write zeroes read no split ...passed 00:39:15.037 Test: blockdev write zeroes read split ...passed 00:39:15.037 Test: blockdev write zeroes read split partial ...passed 00:39:15.037 Test: blockdev reset ...[2024-11-02 11:51:15.351175] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:39:15.037 [2024-11-02 11:51:15.351289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b7ac0 (9): Bad file descriptor 00:39:15.295 [2024-11-02 11:51:15.444487] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
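The blockdev reset test above disconnects the live controller ('resetting controller'), which is why the old TCP qpair logs a 'Bad file descriptor' flush error before the reconnect is reported successful; that error is expected, not a failure, and the test's 'passed' marker follows just below. While the suite runs, the target side can be inspected independently; none of the following calls appear in this trace, they are simply the stock rpc.py queries one would reach for (assuming the default RPC socket):

    scripts/rpc.py nvmf_get_subsystems
    scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py nvmf_subsystem_get_controllers nqn.2016-06.io.spdk:cnode1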
00:39:15.295 passed 00:39:15.295 Test: blockdev write read 8 blocks ...passed 00:39:15.295 Test: blockdev write read size > 128k ...passed 00:39:15.295 Test: blockdev write read invalid size ...passed 00:39:15.295 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:39:15.295 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:39:15.295 Test: blockdev write read max offset ...passed 00:39:15.295 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:39:15.295 Test: blockdev writev readv 8 blocks ...passed 00:39:15.295 Test: blockdev writev readv 30 x 1block ...passed 00:39:15.295 Test: blockdev writev readv block ...passed 00:39:15.295 Test: blockdev writev readv size > 128k ...passed 00:39:15.295 Test: blockdev writev readv size > 128k in two iovs ...passed 00:39:15.295 Test: blockdev comparev and writev ...[2024-11-02 11:51:15.619268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:15.295 [2024-11-02 11:51:15.619304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:15.295 [2024-11-02 11:51:15.619330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:15.295 [2024-11-02 11:51:15.619356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:15.295 [2024-11-02 11:51:15.619767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:15.295 [2024-11-02 11:51:15.619792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:39:15.295 [2024-11-02 11:51:15.619814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:15.295 [2024-11-02 11:51:15.619831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:39:15.295 [2024-11-02 11:51:15.620237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:15.295 [2024-11-02 11:51:15.620268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:39:15.295 [2024-11-02 11:51:15.620292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:15.295 [2024-11-02 11:51:15.620308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:39:15.295 [2024-11-02 11:51:15.620710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:15.295 [2024-11-02 11:51:15.620734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:39:15.295 [2024-11-02 11:51:15.620755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:15.295 [2024-11-02 11:51:15.620770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:39:15.295 passed 00:39:15.577 Test: blockdev nvme passthru rw ...passed 00:39:15.577 Test: blockdev nvme passthru vendor specific ...[2024-11-02 11:51:15.704560] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:39:15.577 [2024-11-02 11:51:15.704588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:39:15.577 [2024-11-02 11:51:15.704775] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:39:15.577 [2024-11-02 11:51:15.704798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:39:15.577 [2024-11-02 11:51:15.704980] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:39:15.577 [2024-11-02 11:51:15.705003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:39:15.577 [2024-11-02 11:51:15.705186] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:39:15.577 [2024-11-02 11:51:15.705210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:39:15.577 passed 00:39:15.577 Test: blockdev nvme admin passthru ...passed 00:39:15.577 Test: blockdev copy ...passed 00:39:15.577 00:39:15.577 Run Summary: Type Total Ran Passed Failed Inactive 00:39:15.577 suites 1 1 n/a 0 0 00:39:15.577 tests 23 23 23 0 0 00:39:15.577 asserts 152 152 152 0 n/a 00:39:15.577 00:39:15.577 Elapsed time = 1.216 seconds 00:39:15.577 11:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:15.577 11:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:15.577 11:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:15.577 11:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:15.577 11:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:39:15.577 11:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:39:15.577 11:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:15.577 11:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:39:15.577 11:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:15.577 11:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:39:15.577 11:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:15.577 11:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:15.577 rmmod nvme_tcp 00:39:15.577 rmmod nvme_fabrics 00:39:15.577 rmmod nvme_keyring 00:39:15.577 11:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
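The fused COMPARE+WRITE entries above are the interesting part of the comparev-and-writev case: each pair completes with COMPARE FAILURE (02/85) on the compare half and ABORTED - FAILED FUSED (00/09) on the write half, which is exactly what the test expects when the compare miscompares, so the suite still finishes 23/23 with all 152 asserts passing. What continues below is the usual teardown; its killprocess step only kills the nvmf_tgt it started after confirming the pid still belongs to an SPDK reactor. A rough equivalent of that guard, using the pid from this run:

    pid=4030536                                  # pid recorded when nvmf_tgt was launched
    if kill -0 "$pid" 2>/dev/null; then
        comm=$(ps --no-headers -o comm= "$pid")  # reactor_3 in this run
        if [ "$comm" != sudo ]; then
            echo "killing process with pid $pid"
            kill "$pid"
            wait "$pid"                          # valid here because the launching shell is its parent
        fi
    fi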
00:39:15.835 11:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:39:15.835 11:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:39:15.835 11:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 4030536 ']' 00:39:15.835 11:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 4030536 00:39:15.835 11:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' -z 4030536 ']' 00:39:15.835 11:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 4030536 00:39:15.835 11:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:39:15.835 11:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:39:15.835 11:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4030536 00:39:15.835 11:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:39:15.835 11:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:39:15.835 11:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4030536' 00:39:15.835 killing process with pid 4030536 00:39:15.835 11:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 4030536 00:39:15.835 11:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 4030536 00:39:15.835 11:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:15.835 11:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:15.835 11:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:15.835 11:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:39:16.094 11:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:39:16.094 11:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:16.094 11:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:39:16.094 11:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:16.094 11:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:16.094 11:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:16.094 11:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:16.094 11:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:18.003 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:18.003 00:39:18.003 real 0m6.338s 00:39:18.003 user 
0m9.033s 00:39:18.003 sys 0m2.517s 00:39:18.003 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:39:18.003 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:18.003 ************************************ 00:39:18.003 END TEST nvmf_bdevio 00:39:18.003 ************************************ 00:39:18.003 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:39:18.003 00:39:18.003 real 3m53.100s 00:39:18.003 user 8m45.499s 00:39:18.003 sys 1m27.663s 00:39:18.003 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1128 -- # xtrace_disable 00:39:18.003 11:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:18.003 ************************************ 00:39:18.003 END TEST nvmf_target_core_interrupt_mode 00:39:18.003 ************************************ 00:39:18.003 11:51:18 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:39:18.003 11:51:18 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:39:18.003 11:51:18 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:39:18.003 11:51:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:18.003 ************************************ 00:39:18.003 START TEST nvmf_interrupt 00:39:18.003 ************************************ 00:39:18.003 11:51:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:39:18.262 * Looking for test storage... 
00:39:18.262 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:18.262 11:51:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:39:18.262 11:51:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lcov --version 00:39:18.262 11:51:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:39:18.262 11:51:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:39:18.262 11:51:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:18.262 11:51:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:18.262 11:51:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:18.262 11:51:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:39:18.262 11:51:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:39:18.262 11:51:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:39:18.262 11:51:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:39:18.262 11:51:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:39:18.262 11:51:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:39:18.262 11:51:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:39:18.262 11:51:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:18.262 11:51:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:39:18.262 11:51:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:39:18.262 11:51:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:18.262 11:51:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:18.262 11:51:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:39:18.262 11:51:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:39:18.262 11:51:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:18.262 11:51:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:39:18.262 11:51:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:39:18.262 11:51:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:39:18.262 11:51:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:39:18.262 11:51:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:18.262 11:51:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:39:18.262 11:51:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:39:18.262 11:51:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:18.262 11:51:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:18.262 11:51:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:39:18.262 11:51:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:18.262 11:51:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:39:18.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:18.262 --rc genhtml_branch_coverage=1 00:39:18.262 --rc genhtml_function_coverage=1 00:39:18.262 --rc genhtml_legend=1 00:39:18.262 --rc geninfo_all_blocks=1 00:39:18.262 --rc geninfo_unexecuted_blocks=1 00:39:18.262 00:39:18.262 ' 00:39:18.262 11:51:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:39:18.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:18.262 --rc genhtml_branch_coverage=1 00:39:18.262 --rc genhtml_function_coverage=1 00:39:18.262 --rc genhtml_legend=1 00:39:18.262 --rc geninfo_all_blocks=1 00:39:18.262 --rc geninfo_unexecuted_blocks=1 00:39:18.262 00:39:18.262 ' 00:39:18.262 11:51:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:39:18.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:18.262 --rc genhtml_branch_coverage=1 00:39:18.262 --rc genhtml_function_coverage=1 00:39:18.262 --rc genhtml_legend=1 00:39:18.262 --rc geninfo_all_blocks=1 00:39:18.262 --rc geninfo_unexecuted_blocks=1 00:39:18.262 00:39:18.262 ' 00:39:18.262 11:51:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:39:18.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:18.262 --rc genhtml_branch_coverage=1 00:39:18.262 --rc genhtml_function_coverage=1 00:39:18.262 --rc genhtml_legend=1 00:39:18.262 --rc geninfo_all_blocks=1 00:39:18.262 --rc geninfo_unexecuted_blocks=1 00:39:18.262 00:39:18.262 ' 00:39:18.262 11:51:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:18.262 11:51:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:39:18.262 11:51:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:18.262 11:51:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:18.262 11:51:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:18.262 11:51:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:39:18.262 11:51:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:18.262 11:51:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:18.262 11:51:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:18.262 11:51:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:18.262 11:51:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:18.262 11:51:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:18.262 11:51:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:18.262 11:51:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:18.262 11:51:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:18.262 11:51:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:18.262 11:51:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:18.262 11:51:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:18.262 11:51:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:18.262 11:51:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:39:18.262 11:51:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:18.262 11:51:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:18.262 11:51:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:18.262 11:51:18 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:18.262 11:51:18 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:18.262 11:51:18 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:18.262 11:51:18 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:39:18.263 11:51:18 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:18.263 11:51:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:39:18.263 11:51:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:18.263 11:51:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:18.263 11:51:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:18.263 11:51:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:18.263 11:51:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:18.263 11:51:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:18.263 11:51:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:18.263 11:51:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:18.263 11:51:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:18.263 11:51:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:18.263 11:51:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:39:18.263 11:51:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:39:18.263 11:51:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:39:18.263 11:51:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:18.263 11:51:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:18.263 11:51:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:18.263 11:51:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:18.263 11:51:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:18.263 11:51:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:18.263 11:51:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:39:18.263 11:51:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:18.263 11:51:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:18.263 11:51:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:18.263 11:51:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:39:18.263 11:51:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:20.165 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:20.166 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:39:20.166 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:20.166 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:20.166 11:51:20 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:20.166 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:20.166 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:20.166 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:39:20.166 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:20.166 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:39:20.166 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:39:20.166 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:39:20.166 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:39:20.166 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:39:20.166 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:39:20.166 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:20.166 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:20.166 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:20.166 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:20.166 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:20.166 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:20.166 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:20.166 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:20.166 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:20.166 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:20.166 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:20.166 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:20.166 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:20.166 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:20.166 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:20.166 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:20.166 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:20.166 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:20.166 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:20.166 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:20.166 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:20.166 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:20.166 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:20.166 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:20.166 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:20.166 11:51:20 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:20.166 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:20.166 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:20.166 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:20.166 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:20.166 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:20.166 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:20.166 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:20.166 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:20.166 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:20.166 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:20.166 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:20.166 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:20.166 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:20.166 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:20.166 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:20.166 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:20.166 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:20.424 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:20.424 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:20.424 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:20.424 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:20.424 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:20.424 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:20.424 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:20.424 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:20.424 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:20.424 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:20.424 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:20.424 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:20.424 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:20.424 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:20.424 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:20.424 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:39:20.424 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:20.424 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:20.424 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:20.424 11:51:20 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:20.424 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:20.424 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:20.424 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:20.424 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:20.424 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:20.425 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:20.425 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:20.425 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:20.425 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:20.425 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:20.425 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:20.425 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:20.425 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:20.425 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:20.425 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:20.425 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:20.425 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:20.425 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:20.425 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:20.425 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:20.425 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:20.425 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:20.425 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:20.425 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms 00:39:20.425 00:39:20.425 --- 10.0.0.2 ping statistics --- 00:39:20.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:20.425 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:39:20.425 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:20.425 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
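The ip and iptables calls above are nvmftestinit moving one port of the NIC pair into a private network namespace so the target (10.0.0.2) and the initiator (10.0.0.1) talk over the physical link, after which reachability is proven with a single ping in each direction (the reverse ping completes just below). A condensed sketch of that plumbing, with the interface names taken from this run:

# namespace plumbing sketch, mirroring the nvmf_tcp_init calls in the trace
TGT_IF=cvl_0_0                # moves into the namespace (target side)
INI_IF=cvl_0_1                # stays on the host (initiator side)
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TGT_IF" && ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
# open TCP/4420 on the initiator-side interface, tagged SPDK_NVMF so cleanup can find it
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF: test rule'
ping -c 1 10.0.0.2                       # host -> namespace
ip netns exec "$NS" ping -c 1 10.0.0.1   # namespace -> host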
00:39:20.425 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:39:20.425 00:39:20.425 --- 10.0.0.1 ping statistics --- 00:39:20.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:20.425 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:39:20.425 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:20.425 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:39:20.425 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:20.425 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:20.425 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:20.425 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:20.425 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:20.425 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:20.425 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:20.425 11:51:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:39:20.425 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:20.425 11:51:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:20.425 11:51:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:20.425 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=4032762 00:39:20.425 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:39:20.425 11:51:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 4032762 00:39:20.425 11:51:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@833 -- # '[' -z 4032762 ']' 00:39:20.425 11:51:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:20.425 11:51:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # local max_retries=100 00:39:20.425 11:51:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:20.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:20.425 11:51:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # xtrace_disable 00:39:20.425 11:51:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:20.425 [2024-11-02 11:51:20.759664] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:20.425 [2024-11-02 11:51:20.760713] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:39:20.425 [2024-11-02 11:51:20.760766] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:20.683 [2024-11-02 11:51:20.839125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:39:20.683 [2024-11-02 11:51:20.889381] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
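The nvmfappstart call above launches nvmf_tgt inside the target namespace with --interrupt-mode and a two-core mask (-m 0x3), which is why the startup notices around this point report SPDK running in interrupt mode, two reactors, and each spdk_thread switched to intr mode. A hedged sketch of the launch-and-wait step, with the mask and socket path from this run; the paths are shortened and waitforlisten is paraphrased rather than copied:

# start the target in interrupt mode and wait for its RPC socket (sketch)
NS=cvl_0_0_ns_spdk
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
nvmfpid=$!

# waitforlisten, paraphrased: poll the RPC socket until the app answers
for _ in $(seq 1 100); do
    ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
done
echo "nvmf_tgt ($nvmfpid) is listening"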
00:39:20.683 [2024-11-02 11:51:20.889437] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:20.683 [2024-11-02 11:51:20.889467] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:20.683 [2024-11-02 11:51:20.889478] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:20.683 [2024-11-02 11:51:20.889488] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:20.683 [2024-11-02 11:51:20.890970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:20.683 [2024-11-02 11:51:20.890982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:20.683 [2024-11-02 11:51:20.985044] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:20.683 [2024-11-02 11:51:20.985062] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:20.683 [2024-11-02 11:51:20.985361] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:20.683 11:51:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:39:20.683 11:51:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@866 -- # return 0 00:39:20.683 11:51:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:20.683 11:51:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:20.683 11:51:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:20.683 11:51:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:20.683 11:51:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:39:20.683 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:39:20.683 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:39:20.683 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:39:20.683 5000+0 records in 00:39:20.683 5000+0 records out 00:39:20.683 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0135685 s, 755 MB/s 00:39:20.683 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:39:20.683 11:51:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:20.683 11:51:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:20.942 AIO0 00:39:20.942 11:51:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:20.942 11:51:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:39:20.942 11:51:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:20.942 11:51:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:20.942 [2024-11-02 11:51:21.114942] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:20.942 11:51:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:20.942 11:51:21 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:39:20.942 11:51:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:20.942 11:51:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:20.942 11:51:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:20.942 11:51:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:39:20.942 11:51:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:20.942 11:51:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:20.942 11:51:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:20.942 11:51:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:20.942 11:51:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:20.942 11:51:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:20.942 [2024-11-02 11:51:21.139165] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:20.942 11:51:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:20.942 11:51:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:39:20.942 11:51:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 4032762 0 00:39:20.942 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 4032762 0 idle 00:39:20.942 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=4032762 00:39:20.942 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:39:20.942 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:20.942 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:39:20.942 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:20.942 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:39:20.942 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:39:20.942 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:20.942 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:20.942 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:20.942 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 4032762 -w 256 00:39:20.942 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:39:20.942 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='4032762 root 20 0 128.2g 47232 34176 S 0.0 0.1 0:00.27 reactor_0' 00:39:20.942 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 4032762 root 20 0 128.2g 47232 34176 S 0.0 0.1 0:00.27 reactor_0 00:39:20.942 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:20.942 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:20.942 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:39:20.942 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:39:20.942 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:39:20.942 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:39:20.942 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:39:20.942 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:20.942 11:51:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:39:20.942 11:51:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 4032762 1 00:39:20.942 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 4032762 1 idle 00:39:20.942 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=4032762 00:39:20.942 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:39:20.942 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:20.942 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:39:20.942 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:20.942 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:39:20.942 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:39:20.942 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:20.942 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:20.942 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:20.942 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 4032762 -w 256 00:39:20.942 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:39:21.201 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='4032766 root 20 0 128.2g 47232 34176 S 0.0 0.1 0:00.00 reactor_1' 00:39:21.201 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 4032766 root 20 0 128.2g 47232 34176 S 0.0 0.1 0:00.00 reactor_1 00:39:21.201 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:21.201 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:21.201 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:39:21.201 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:39:21.201 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:39:21.201 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:39:21.201 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:39:21.201 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:21.201 11:51:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:39:21.201 11:51:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=4032838 00:39:21.201 11:51:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:39:21.201 11:51:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 
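The spdk_nvme_perf invocation just above is the load generator for this test: a 10-second, 4 KiB random read/write mix at queue depth 256 against the TCP subsystem, pinned to cores 2-3 (-c 0xC), disjoint from the target's -m 0x3, so the reactor CPU measurements that follow reflect only target work. The same command, reformatted with the flags spelled out (workspace path shortened):

# same workload as launched above, one flag per concern
#   -q 256            queue depth (outstanding I/Os per queue pair)
#   -o 4096           4 KiB I/O size
#   -w randrw -M 30   random mixed workload, 30% reads / 70% writes
#   -t 10             run time in seconds
#   -c 0xC            initiator cores 2-3, away from the target's 0x3 mask
./build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'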
00:39:21.201 11:51:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:39:21.201 11:51:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 4032762 0 00:39:21.201 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 4032762 0 busy 00:39:21.201 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=4032762 00:39:21.201 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:39:21.201 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:39:21.201 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:39:21.201 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:21.201 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:39:21.201 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:21.201 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:21.201 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:21.201 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 4032762 -w 256 00:39:21.201 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:39:21.461 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='4032762 root 20 0 128.2g 48384 34560 R 73.3 0.1 0:00.38 reactor_0' 00:39:21.461 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 4032762 root 20 0 128.2g 48384 34560 R 73.3 0.1 0:00.38 reactor_0 00:39:21.461 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:21.461 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:21.461 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=73.3 00:39:21.461 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=73 00:39:21.461 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:39:21.461 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:39:21.461 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:39:21.461 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:21.461 11:51:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:39:21.461 11:51:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:39:21.461 11:51:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 4032762 1 00:39:21.461 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 4032762 1 busy 00:39:21.461 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=4032762 00:39:21.461 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:39:21.461 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:39:21.461 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:39:21.461 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:21.461 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:39:21.461 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:21.461 11:51:21 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@25 -- # (( j = 10 )) 00:39:21.461 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:21.461 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 4032762 -w 256 00:39:21.461 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:39:21.461 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='4032766 root 20 0 128.2g 48384 34560 R 99.9 0.1 0:00.22 reactor_1' 00:39:21.461 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 4032766 root 20 0 128.2g 48384 34560 R 99.9 0.1 0:00.22 reactor_1 00:39:21.461 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:21.461 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:21.461 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:39:21.461 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:39:21.461 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:39:21.461 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:39:21.461 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:39:21.461 11:51:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:21.461 11:51:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 4032838 00:39:31.433 Initializing NVMe Controllers 00:39:31.433 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:39:31.433 Controller IO queue size 256, less than required. 00:39:31.433 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:39:31.433 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:39:31.433 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:39:31.433 Initialization complete. Launching workers. 
00:39:31.433 ======================================================== 00:39:31.433 Latency(us) 00:39:31.433 Device Information : IOPS MiB/s Average min max 00:39:31.433 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 13246.78 51.75 19340.20 4030.23 24068.23 00:39:31.433 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 13599.18 53.12 18837.08 4542.25 21588.78 00:39:31.433 ======================================================== 00:39:31.433 Total : 26845.97 104.87 19085.34 4030.23 24068.23 00:39:31.433 00:39:31.433 11:51:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:39:31.433 11:51:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 4032762 0 00:39:31.433 11:51:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 4032762 0 idle 00:39:31.433 11:51:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=4032762 00:39:31.433 11:51:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:39:31.433 11:51:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:31.433 11:51:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:39:31.433 11:51:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:31.433 11:51:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:39:31.433 11:51:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:39:31.433 11:51:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:31.433 11:51:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:31.433 11:51:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:31.433 11:51:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 4032762 -w 256 00:39:31.433 11:51:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:39:31.433 11:51:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='4032762 root 20 0 128.2g 48384 34560 S 0.0 0.1 0:19.85 reactor_0' 00:39:31.433 11:51:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 4032762 root 20 0 128.2g 48384 34560 S 0.0 0.1 0:19.85 reactor_0 00:39:31.433 11:51:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:31.433 11:51:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:31.433 11:51:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:39:31.433 11:51:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:39:31.433 11:51:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:39:31.433 11:51:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:39:31.433 11:51:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:39:31.433 11:51:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:31.433 11:51:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:39:31.433 11:51:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 4032762 1 00:39:31.433 11:51:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 4032762 1 idle 00:39:31.433 11:51:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=4032762 00:39:31.433 11:51:31 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@11 -- # local idx=1 00:39:31.433 11:51:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:31.433 11:51:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:39:31.433 11:51:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:31.433 11:51:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:39:31.433 11:51:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:39:31.433 11:51:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:31.433 11:51:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:31.433 11:51:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:31.433 11:51:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 4032762 -w 256 00:39:31.433 11:51:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:39:31.692 11:51:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='4032766 root 20 0 128.2g 48384 34560 S 0.0 0.1 0:09.61 reactor_1' 00:39:31.692 11:51:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 4032766 root 20 0 128.2g 48384 34560 S 0.0 0.1 0:09.61 reactor_1 00:39:31.692 11:51:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:31.692 11:51:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:31.692 11:51:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:39:31.692 11:51:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:39:31.692 11:51:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:39:31.692 11:51:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:39:31.692 11:51:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:39:31.692 11:51:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:31.692 11:51:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:39:31.949 11:51:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:39:31.949 11:51:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # local i=0 00:39:31.950 11:51:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:39:31.950 11:51:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:39:31.950 11:51:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # sleep 2 00:39:33.868 11:51:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:39:33.868 11:51:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:39:33.868 11:51:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:39:33.868 11:51:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:39:33.868 11:51:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:39:33.868 11:51:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # return 0 00:39:33.868 11:51:34 nvmf_tcp.nvmf_interrupt -- 
target/interrupt.sh@52 -- # for i in {0..1} 00:39:33.868 11:51:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 4032762 0 00:39:33.868 11:51:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 4032762 0 idle 00:39:33.868 11:51:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=4032762 00:39:33.868 11:51:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:39:33.868 11:51:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:33.868 11:51:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:39:33.868 11:51:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:33.868 11:51:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:39:33.868 11:51:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:39:33.868 11:51:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:33.868 11:51:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:33.869 11:51:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:33.869 11:51:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 4032762 -w 256 00:39:33.869 11:51:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:39:34.173 11:51:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='4032762 root 20 0 128.2g 60672 34560 S 0.0 0.1 0:19.95 reactor_0' 00:39:34.173 11:51:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 4032762 root 20 0 128.2g 60672 34560 S 0.0 0.1 0:19.95 reactor_0 00:39:34.173 11:51:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:34.173 11:51:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:34.173 11:51:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:39:34.173 11:51:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:39:34.173 11:51:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:39:34.173 11:51:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:39:34.173 11:51:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:39:34.173 11:51:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:34.173 11:51:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:39:34.173 11:51:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 4032762 1 00:39:34.173 11:51:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 4032762 1 idle 00:39:34.173 11:51:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=4032762 00:39:34.173 11:51:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:39:34.173 11:51:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:34.173 11:51:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:39:34.173 11:51:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:34.173 11:51:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:39:34.173 11:51:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:39:34.173 11:51:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
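Every reactor_is_busy / reactor_is_idle call in this trace, including the ones running around this point, reduces to a single batch sample of top in threads mode: the reactor_N line is filtered out of the output and its %CPU field compared against a threshold (at most 30% counts as idle; at least 30% counts as busy while perf runs, since BUSY_THRESHOLD is lowered from the default 65). A simplified re-implementation of that check, assuming the pid and thread names from this run:

# minimal version of the top-based reactor CPU check seen in the trace
reactor_cpu_rate() {
    local pid=$1 idx=$2
    # one batch iteration (-b -n 1), per-thread view (-H), wide lines (-w 256);
    # column 9 of top's default layout is %CPU
    top -bHn 1 -p "$pid" -w 256 | grep "reactor_${idx}" | awk '{print $9}'
}

rate=$(reactor_cpu_rate 4032762 0); rate=${rate%.*}; rate=${rate:-0}
if [ "$rate" -le 30 ]; then
    echo "reactor_0 idle at ${rate}%"    # e.g. 0% once perf has finished
else
    echo "reactor_0 busy at ${rate}%"    # e.g. 73% / 99% while perf is running
fi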
00:39:34.173 11:51:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:34.173 11:51:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:34.173 11:51:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 4032762 -w 256 00:39:34.173 11:51:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:39:34.173 11:51:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='4032766 root 20 0 128.2g 60672 34560 S 0.0 0.1 0:09.64 reactor_1' 00:39:34.173 11:51:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 4032766 root 20 0 128.2g 60672 34560 S 0.0 0.1 0:09.64 reactor_1 00:39:34.173 11:51:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:34.173 11:51:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:34.173 11:51:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:39:34.173 11:51:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:39:34.173 11:51:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:39:34.173 11:51:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:39:34.173 11:51:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:39:34.173 11:51:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:34.173 11:51:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:39:34.457 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:39:34.457 11:51:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:39:34.457 11:51:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1221 -- # local i=0 00:39:34.457 11:51:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:39:34.457 11:51:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:34.457 11:51:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:39:34.457 11:51:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:34.457 11:51:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1233 -- # return 0 00:39:34.457 11:51:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:39:34.457 11:51:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:39:34.457 11:51:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:34.457 11:51:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:39:34.457 11:51:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:34.457 11:51:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:39:34.458 11:51:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:34.458 11:51:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:34.458 rmmod nvme_tcp 00:39:34.458 rmmod nvme_fabrics 00:39:34.458 rmmod nvme_keyring 00:39:34.458 11:51:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:34.458 11:51:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:39:34.458 11:51:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:39:34.458 11:51:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 
4032762 ']' 00:39:34.458 11:51:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 4032762 00:39:34.458 11:51:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@952 -- # '[' -z 4032762 ']' 00:39:34.458 11:51:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # kill -0 4032762 00:39:34.458 11:51:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@957 -- # uname 00:39:34.458 11:51:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:39:34.458 11:51:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4032762 00:39:34.458 11:51:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:39:34.458 11:51:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:39:34.458 11:51:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4032762' 00:39:34.458 killing process with pid 4032762 00:39:34.458 11:51:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@971 -- # kill 4032762 00:39:34.458 11:51:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@976 -- # wait 4032762 00:39:34.717 11:51:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:34.717 11:51:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:34.717 11:51:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:34.717 11:51:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:39:34.717 11:51:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:39:34.717 11:51:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:34.717 11:51:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:39:34.717 11:51:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:34.717 11:51:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:34.717 11:51:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:34.717 11:51:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:39:34.717 11:51:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:37.253 11:51:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:37.253 00:39:37.253 real 0m18.769s 00:39:37.253 user 0m36.203s 00:39:37.253 sys 0m6.913s 00:39:37.253 11:51:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1128 -- # xtrace_disable 00:39:37.253 11:51:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:37.253 ************************************ 00:39:37.253 END TEST nvmf_interrupt 00:39:37.253 ************************************ 00:39:37.253 00:39:37.253 real 32m56.512s 00:39:37.253 user 87m2.785s 00:39:37.253 sys 8m8.596s 00:39:37.253 11:51:37 nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:39:37.253 11:51:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:37.253 ************************************ 00:39:37.253 END TEST nvmf_tcp 00:39:37.253 ************************************ 00:39:37.253 11:51:37 -- spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]] 00:39:37.253 11:51:37 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:39:37.253 11:51:37 -- 
common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:39:37.253 11:51:37 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:39:37.253 11:51:37 -- common/autotest_common.sh@10 -- # set +x 00:39:37.253 ************************************ 00:39:37.253 START TEST spdkcli_nvmf_tcp 00:39:37.253 ************************************ 00:39:37.253 11:51:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:39:37.253 * Looking for test storage... 00:39:37.253 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:39:37.253 11:51:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:39:37.253 11:51:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:39:37.253 11:51:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:39:37.253 11:51:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:39:37.253 11:51:37 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:37.253 11:51:37 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:37.253 11:51:37 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:37.253 11:51:37 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:39:37.253 11:51:37 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:39:37.253 11:51:37 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:39:37.253 11:51:37 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:39:37.253 11:51:37 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:39:37.253 11:51:37 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:39:37.253 11:51:37 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:39:37.253 11:51:37 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:37.253 11:51:37 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:39:37.253 11:51:37 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:39:37.253 11:51:37 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:37.253 11:51:37 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:37.253 11:51:37 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:39:37.253 11:51:37 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:39:37.253 11:51:37 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:37.253 11:51:37 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:39:37.253 11:51:37 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:39:37.253 11:51:37 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:39:37.253 11:51:37 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:39:37.253 11:51:37 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:37.253 11:51:37 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:39:37.253 11:51:37 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:39:37.253 11:51:37 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:37.253 11:51:37 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:37.253 11:51:37 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:39:37.253 11:51:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:37.253 11:51:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:39:37.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:37.253 --rc genhtml_branch_coverage=1 00:39:37.254 --rc genhtml_function_coverage=1 00:39:37.254 --rc genhtml_legend=1 00:39:37.254 --rc geninfo_all_blocks=1 00:39:37.254 --rc geninfo_unexecuted_blocks=1 00:39:37.254 00:39:37.254 ' 00:39:37.254 11:51:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:39:37.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:37.254 --rc genhtml_branch_coverage=1 00:39:37.254 --rc genhtml_function_coverage=1 00:39:37.254 --rc genhtml_legend=1 00:39:37.254 --rc geninfo_all_blocks=1 00:39:37.254 --rc geninfo_unexecuted_blocks=1 00:39:37.254 00:39:37.254 ' 00:39:37.254 11:51:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:39:37.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:37.254 --rc genhtml_branch_coverage=1 00:39:37.254 --rc genhtml_function_coverage=1 00:39:37.254 --rc genhtml_legend=1 00:39:37.254 --rc geninfo_all_blocks=1 00:39:37.254 --rc geninfo_unexecuted_blocks=1 00:39:37.254 00:39:37.254 ' 00:39:37.254 11:51:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:39:37.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:37.254 --rc genhtml_branch_coverage=1 00:39:37.254 --rc genhtml_function_coverage=1 00:39:37.254 --rc genhtml_legend=1 00:39:37.254 --rc geninfo_all_blocks=1 00:39:37.254 --rc geninfo_unexecuted_blocks=1 00:39:37.254 00:39:37.254 ' 00:39:37.254 11:51:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:39:37.254 11:51:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:39:37.254 11:51:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:39:37.254 11:51:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:37.254 11:51:37 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:39:37.254 
11:51:37 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:37.254 11:51:37 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:37.254 11:51:37 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:37.254 11:51:37 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:37.254 11:51:37 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:37.254 11:51:37 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:37.254 11:51:37 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:37.254 11:51:37 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:37.254 11:51:37 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:37.254 11:51:37 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:37.254 11:51:37 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:37.254 11:51:37 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:37.254 11:51:37 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:37.254 11:51:37 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:37.254 11:51:37 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:37.254 11:51:37 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:37.254 11:51:37 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:37.254 11:51:37 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:39:37.254 11:51:37 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:37.254 11:51:37 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:37.254 11:51:37 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:37.254 11:51:37 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:37.254 11:51:37 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:37.254 11:51:37 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:37.254 11:51:37 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:39:37.254 11:51:37 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:37.254 11:51:37 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:39:37.254 11:51:37 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:37.254 11:51:37 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:37.254 11:51:37 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:37.254 11:51:37 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:37.254 11:51:37 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:37.254 11:51:37 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:37.254 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:37.254 11:51:37 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:37.254 11:51:37 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:37.254 11:51:37 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:37.254 11:51:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:39:37.254 11:51:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:39:37.254 11:51:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:39:37.254 11:51:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:39:37.254 11:51:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:37.254 11:51:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:37.254 11:51:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:39:37.254 11:51:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=4034807 00:39:37.254 11:51:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:39:37.254 11:51:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 4034807 00:39:37.254 11:51:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # '[' -z 4034807 ']' 00:39:37.254 11:51:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:37.254 11:51:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:39:37.254 11:51:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:37.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:37.254 11:51:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:39:37.254 11:51:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:37.254 [2024-11-02 11:51:37.392241] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 
00:39:37.254 [2024-11-02 11:51:37.392357] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4034807 ] 00:39:37.254 [2024-11-02 11:51:37.457998] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:39:37.254 [2024-11-02 11:51:37.510549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:37.254 [2024-11-02 11:51:37.510558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:37.254 11:51:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:39:37.254 11:51:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@866 -- # return 0 00:39:37.254 11:51:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:39:37.254 11:51:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:37.254 11:51:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:37.513 11:51:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:39:37.513 11:51:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:39:37.513 11:51:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:39:37.513 11:51:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:37.513 11:51:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:37.513 11:51:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:39:37.513 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:39:37.513 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:39:37.513 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:39:37.513 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:39:37.513 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:39:37.513 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:39:37.513 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:39:37.513 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:39:37.513 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:39:37.513 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:39:37.513 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:39:37.513 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:39:37.513 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:39:37.513 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:39:37.513 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:39:37.513 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:39:37.513 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:39:37.513 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:39:37.513 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:39:37.513 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:39:37.513 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:39:37.513 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:39:37.513 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:39:37.513 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:39:37.513 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:39:37.513 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:39:37.513 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:39:37.513 ' 00:39:40.043 [2024-11-02 11:51:40.294817] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:41.418 [2024-11-02 11:51:41.583354] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:39:43.950 [2024-11-02 11:51:43.962624] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:39:45.853 [2024-11-02 11:51:46.013235] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:39:47.229 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:39:47.229 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:39:47.229 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:39:47.229 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:39:47.229 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:39:47.229 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:39:47.229 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:39:47.229 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:39:47.229 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:39:47.229 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:39:47.229 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:39:47.229 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:39:47.229 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:39:47.229 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:39:47.229 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:39:47.229 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:39:47.229 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:39:47.229 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:39:47.230 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:39:47.230 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:39:47.230 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:39:47.230 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:39:47.230 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:39:47.230 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:39:47.230 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:39:47.230 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:39:47.230 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:39:47.230 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:39:47.488 11:51:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:39:47.488 11:51:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:47.488 11:51:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:47.488 11:51:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:39:47.488 11:51:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:47.488 11:51:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:47.488 11:51:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:39:47.488 11:51:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:39:48.054 11:51:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:39:48.054 11:51:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:39:48.054 11:51:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:39:48.054 11:51:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:48.054 11:51:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:48.054 
11:51:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:39:48.054 11:51:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:48.054 11:51:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:48.054 11:51:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:39:48.054 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:39:48.054 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:39:48.054 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:39:48.055 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:39:48.055 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:39:48.055 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:39:48.055 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:39:48.055 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:39:48.055 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:39:48.055 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:39:48.055 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:39:48.055 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:39:48.055 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:39:48.055 ' 00:39:53.327 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:39:53.327 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:39:53.327 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:39:53.327 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:39:53.327 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:39:53.327 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:39:53.327 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:39:53.327 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:39:53.327 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:39:53.327 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:39:53.327 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:39:53.327 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:39:53.327 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:39:53.327 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:39:53.327 11:51:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:39:53.327 11:51:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:53.327 11:51:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:53.327 
11:51:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 4034807 00:39:53.327 11:51:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' -z 4034807 ']' 00:39:53.327 11:51:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # kill -0 4034807 00:39:53.327 11:51:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # uname 00:39:53.327 11:51:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:39:53.327 11:51:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4034807 00:39:53.586 11:51:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:39:53.586 11:51:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:39:53.586 11:51:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4034807' 00:39:53.586 killing process with pid 4034807 00:39:53.586 11:51:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@971 -- # kill 4034807 00:39:53.586 11:51:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@976 -- # wait 4034807 00:39:53.586 11:51:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:39:53.586 11:51:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:39:53.586 11:51:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 4034807 ']' 00:39:53.586 11:51:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 4034807 00:39:53.586 11:51:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' -z 4034807 ']' 00:39:53.586 11:51:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # kill -0 4034807 00:39:53.586 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (4034807) - No such process 00:39:53.586 11:51:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@979 -- # echo 'Process with pid 4034807 is not found' 00:39:53.586 Process with pid 4034807 is not found 00:39:53.586 11:51:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:39:53.586 11:51:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:39:53.586 11:51:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:39:53.586 00:39:53.586 real 0m16.725s 00:39:53.586 user 0m35.863s 00:39:53.586 sys 0m0.797s 00:39:53.586 11:51:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:39:53.586 11:51:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:53.586 ************************************ 00:39:53.586 END TEST spdkcli_nvmf_tcp 00:39:53.586 ************************************ 00:39:53.586 11:51:53 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:39:53.586 11:51:53 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:39:53.586 11:51:53 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:39:53.586 11:51:53 -- common/autotest_common.sh@10 -- # set +x 00:39:53.586 ************************************ 00:39:53.586 START TEST nvmf_identify_passthru 00:39:53.586 ************************************ 00:39:53.586 11:51:53 nvmf_identify_passthru -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:39:53.845 * Looking for test 
storage... 00:39:53.845 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:53.845 11:51:54 nvmf_identify_passthru -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:39:53.845 11:51:54 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lcov --version 00:39:53.845 11:51:54 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:39:53.845 11:51:54 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:39:53.845 11:51:54 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:53.845 11:51:54 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:53.845 11:51:54 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:53.845 11:51:54 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:39:53.845 11:51:54 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:39:53.845 11:51:54 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:39:53.845 11:51:54 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:39:53.845 11:51:54 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:39:53.845 11:51:54 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:39:53.845 11:51:54 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:39:53.845 11:51:54 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:53.845 11:51:54 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:39:53.845 11:51:54 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:39:53.845 11:51:54 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:53.845 11:51:54 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:53.845 11:51:54 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:39:53.845 11:51:54 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:39:53.845 11:51:54 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:53.845 11:51:54 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:39:53.845 11:51:54 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:39:53.845 11:51:54 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:39:53.845 11:51:54 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:39:53.845 11:51:54 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:53.845 11:51:54 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:39:53.845 11:51:54 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:39:53.845 11:51:54 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:53.845 11:51:54 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:53.845 11:51:54 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:39:53.845 11:51:54 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:53.845 11:51:54 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:39:53.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:53.845 --rc genhtml_branch_coverage=1 00:39:53.845 --rc genhtml_function_coverage=1 00:39:53.845 --rc genhtml_legend=1 00:39:53.845 --rc geninfo_all_blocks=1 00:39:53.845 --rc geninfo_unexecuted_blocks=1 00:39:53.845 00:39:53.845 ' 00:39:53.846 11:51:54 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:39:53.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:53.846 --rc genhtml_branch_coverage=1 00:39:53.846 --rc genhtml_function_coverage=1 00:39:53.846 --rc genhtml_legend=1 00:39:53.846 --rc geninfo_all_blocks=1 00:39:53.846 --rc geninfo_unexecuted_blocks=1 00:39:53.846 00:39:53.846 ' 00:39:53.846 11:51:54 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:39:53.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:53.846 --rc genhtml_branch_coverage=1 00:39:53.846 --rc genhtml_function_coverage=1 00:39:53.846 --rc genhtml_legend=1 00:39:53.846 --rc geninfo_all_blocks=1 00:39:53.846 --rc geninfo_unexecuted_blocks=1 00:39:53.846 00:39:53.846 ' 00:39:53.846 11:51:54 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:39:53.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:53.846 --rc genhtml_branch_coverage=1 00:39:53.846 --rc genhtml_function_coverage=1 00:39:53.846 --rc genhtml_legend=1 00:39:53.846 --rc geninfo_all_blocks=1 00:39:53.846 --rc geninfo_unexecuted_blocks=1 00:39:53.846 00:39:53.846 ' 00:39:53.846 11:51:54 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:53.846 11:51:54 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:39:53.846 11:51:54 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:53.846 11:51:54 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:53.846 11:51:54 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:53.846 11:51:54 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:39:53.846 11:51:54 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:53.846 11:51:54 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:53.846 11:51:54 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:53.846 11:51:54 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:53.846 11:51:54 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:53.846 11:51:54 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:53.846 11:51:54 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:53.846 11:51:54 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:53.846 11:51:54 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:53.846 11:51:54 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:53.846 11:51:54 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:53.846 11:51:54 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:53.846 11:51:54 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:53.846 11:51:54 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:39:53.846 11:51:54 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:53.846 11:51:54 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:53.846 11:51:54 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:53.846 11:51:54 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:53.846 11:51:54 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:53.846 11:51:54 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:53.846 11:51:54 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:39:53.846 11:51:54 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:53.846 11:51:54 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:39:53.846 11:51:54 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:53.846 11:51:54 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:53.846 11:51:54 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:53.846 11:51:54 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:53.846 11:51:54 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:53.846 11:51:54 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:53.846 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:53.846 11:51:54 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:53.846 11:51:54 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:53.846 11:51:54 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:53.846 11:51:54 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:53.846 11:51:54 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:39:53.846 11:51:54 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:53.846 11:51:54 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:53.846 11:51:54 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:53.846 11:51:54 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:53.846 11:51:54 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:53.846 11:51:54 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:53.846 11:51:54 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:39:53.846 11:51:54 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:53.846 11:51:54 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:39:53.846 11:51:54 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:53.846 11:51:54 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:53.846 11:51:54 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:53.846 11:51:54 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:53.846 11:51:54 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:53.846 11:51:54 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:53.846 11:51:54 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:53.846 11:51:54 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:53.846 11:51:54 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:53.846 11:51:54 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:53.846 11:51:54 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:39:53.846 11:51:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:55.746 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:55.746 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:39:55.746 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:55.746 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:55.746 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:55.746 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:55.746 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:55.746 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:39:55.746 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:55.746 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:39:55.746 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:39:55.746 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:39:55.746 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:39:55.746 11:51:56 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:39:55.746 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:39:55.746 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:55.746 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:55.746 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:55.746 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:55.746 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:55.746 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:55.746 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:55.746 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:55.746 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:55.746 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:55.746 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:55.746 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:55.746 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:55.746 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:55.746 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:55.746 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:55.746 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:55.746 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:55.746 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:55.746 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:55.746 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:55.746 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:55.746 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:55.746 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:55.746 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:55.746 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:55.746 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:55.746 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:55.746 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:55.746 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:55.746 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:55.746 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:55.746 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:55.746 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:39:55.746 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:55.746 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:55.746 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:55.746 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:55.746 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:55.746 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:55.746 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:55.746 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:55.746 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:55.746 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:55.746 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:55.746 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:55.746 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:55.746 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:55.746 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:55.746 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:55.746 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:55.746 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:55.746 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:55.746 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:55.746 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:55.746 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:55.746 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:55.746 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:55.746 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:39:55.746 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:55.746 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:55.747 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:55.747 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:55.747 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:55.747 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:55.747 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:55.747 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:55.747 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:55.747 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:55.747 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:55.747 11:51:56 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:55.747 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:55.747 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:55.747 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:55.747 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:55.747 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:55.747 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:56.007 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:56.007 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:56.007 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:56.007 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:56.007 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:56.007 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:56.007 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:56.007 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:56.007 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:56.007 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:39:56.007 00:39:56.007 --- 10.0.0.2 ping statistics --- 00:39:56.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:56.007 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:39:56.007 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:56.007 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:56.007 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.061 ms 00:39:56.007 00:39:56.007 --- 10.0.0.1 ping statistics --- 00:39:56.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:56.007 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:39:56.007 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:56.007 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:39:56.007 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:56.007 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:56.007 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:56.007 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:56.007 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:56.007 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:56.007 11:51:56 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:56.007 11:51:56 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:39:56.007 11:51:56 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:56.007 11:51:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:56.007 11:51:56 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:39:56.007 11:51:56 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=() 00:39:56.007 11:51:56 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs 00:39:56.007 11:51:56 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:39:56.007 11:51:56 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:39:56.007 11:51:56 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # bdfs=() 00:39:56.007 11:51:56 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # local bdfs 00:39:56.007 11:51:56 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:39:56.007 11:51:56 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:39:56.007 11:51:56 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:39:56.007 11:51:56 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:39:56.007 11:51:56 nvmf_identify_passthru -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:88:00.0 00:39:56.007 11:51:56 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # echo 0000:88:00.0 00:39:56.007 11:51:56 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:39:56.007 11:51:56 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:39:56.007 11:51:56 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:39:56.007 11:51:56 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:39:56.007 11:51:56 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:40:00.208 11:52:00 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=PHLJ916004901P0FGN 00:40:00.208 11:52:00 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:40:00.208 11:52:00 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:40:00.208 11:52:00 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:40:04.396 11:52:04 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:40:04.396 11:52:04 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:40:04.396 11:52:04 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:04.396 11:52:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:04.396 11:52:04 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:40:04.396 11:52:04 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:04.396 11:52:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:04.396 11:52:04 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=4039434 00:40:04.396 11:52:04 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:40:04.396 11:52:04 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:04.396 11:52:04 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 4039434 00:40:04.396 11:52:04 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # '[' -z 4039434 ']' 00:40:04.396 11:52:04 nvmf_identify_passthru -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:04.396 11:52:04 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # local max_retries=100 00:40:04.396 11:52:04 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:04.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:04.396 11:52:04 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # xtrace_disable 00:40:04.396 11:52:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:04.654 [2024-11-02 11:52:04.835842] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:40:04.654 [2024-11-02 11:52:04.835927] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:04.654 [2024-11-02 11:52:04.916931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:04.654 [2024-11-02 11:52:04.969743] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:04.654 [2024-11-02 11:52:04.969809] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:40:04.654 [2024-11-02 11:52:04.969826] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:04.654 [2024-11-02 11:52:04.969839] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:04.654 [2024-11-02 11:52:04.969851] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:04.654 [2024-11-02 11:52:04.971639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:04.654 [2024-11-02 11:52:04.971694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:40:04.654 [2024-11-02 11:52:04.971768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:04.654 [2024-11-02 11:52:04.971764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:40:04.915 11:52:05 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:40:04.915 11:52:05 nvmf_identify_passthru -- common/autotest_common.sh@866 -- # return 0 00:40:04.915 11:52:05 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:40:04.915 11:52:05 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:04.915 11:52:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:04.915 INFO: Log level set to 20 00:40:04.915 INFO: Requests: 00:40:04.915 { 00:40:04.915 "jsonrpc": "2.0", 00:40:04.915 "method": "nvmf_set_config", 00:40:04.915 "id": 1, 00:40:04.915 "params": { 00:40:04.915 "admin_cmd_passthru": { 00:40:04.915 "identify_ctrlr": true 00:40:04.915 } 00:40:04.915 } 00:40:04.915 } 00:40:04.915 00:40:04.915 INFO: response: 00:40:04.915 { 00:40:04.915 "jsonrpc": "2.0", 00:40:04.915 "id": 1, 00:40:04.915 "result": true 00:40:04.915 } 00:40:04.915 00:40:04.915 11:52:05 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:04.915 11:52:05 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:40:04.915 11:52:05 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:04.915 11:52:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:04.915 INFO: Setting log level to 20 00:40:04.915 INFO: Setting log level to 20 00:40:04.915 INFO: Log level set to 20 00:40:04.915 INFO: Log level set to 20 00:40:04.915 INFO: Requests: 00:40:04.915 { 00:40:04.915 "jsonrpc": "2.0", 00:40:04.915 "method": "framework_start_init", 00:40:04.915 "id": 1 00:40:04.915 } 00:40:04.915 00:40:04.915 INFO: Requests: 00:40:04.915 { 00:40:04.915 "jsonrpc": "2.0", 00:40:04.915 "method": "framework_start_init", 00:40:04.915 "id": 1 00:40:04.915 } 00:40:04.915 00:40:04.915 [2024-11-02 11:52:05.180469] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:40:04.915 INFO: response: 00:40:04.915 { 00:40:04.915 "jsonrpc": "2.0", 00:40:04.915 "id": 1, 00:40:04.915 "result": true 00:40:04.915 } 00:40:04.915 00:40:04.915 INFO: response: 00:40:04.915 { 00:40:04.915 "jsonrpc": "2.0", 00:40:04.915 "id": 1, 00:40:04.915 "result": true 00:40:04.915 } 00:40:04.915 00:40:04.915 11:52:05 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:04.915 11:52:05 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:04.915 11:52:05 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:04.915 11:52:05 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:40:04.915 INFO: Setting log level to 40 00:40:04.915 INFO: Setting log level to 40 00:40:04.915 INFO: Setting log level to 40 00:40:04.915 [2024-11-02 11:52:05.190672] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:04.915 11:52:05 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:04.915 11:52:05 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:40:04.915 11:52:05 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:04.915 11:52:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:04.915 11:52:05 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:40:04.915 11:52:05 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:04.915 11:52:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:08.203 Nvme0n1 00:40:08.203 11:52:08 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:08.203 11:52:08 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:40:08.203 11:52:08 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:08.203 11:52:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:08.203 11:52:08 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:08.203 11:52:08 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:40:08.203 11:52:08 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:08.203 11:52:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:08.203 11:52:08 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:08.203 11:52:08 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:08.203 11:52:08 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:08.203 11:52:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:08.203 [2024-11-02 11:52:08.107519] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:08.203 11:52:08 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:08.203 11:52:08 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:40:08.203 11:52:08 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:08.203 11:52:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:08.203 [ 00:40:08.203 { 00:40:08.203 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:40:08.203 "subtype": "Discovery", 00:40:08.203 "listen_addresses": [], 00:40:08.203 "allow_any_host": true, 00:40:08.203 "hosts": [] 00:40:08.203 }, 00:40:08.203 { 00:40:08.203 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:40:08.203 "subtype": "NVMe", 00:40:08.203 "listen_addresses": [ 00:40:08.203 { 00:40:08.203 "trtype": "TCP", 00:40:08.203 "adrfam": "IPv4", 00:40:08.203 "traddr": "10.0.0.2", 00:40:08.203 "trsvcid": "4420" 00:40:08.203 } 00:40:08.203 ], 00:40:08.203 "allow_any_host": true, 00:40:08.203 "hosts": [], 00:40:08.203 "serial_number": 
"SPDK00000000000001", 00:40:08.203 "model_number": "SPDK bdev Controller", 00:40:08.203 "max_namespaces": 1, 00:40:08.203 "min_cntlid": 1, 00:40:08.203 "max_cntlid": 65519, 00:40:08.203 "namespaces": [ 00:40:08.203 { 00:40:08.203 "nsid": 1, 00:40:08.203 "bdev_name": "Nvme0n1", 00:40:08.203 "name": "Nvme0n1", 00:40:08.203 "nguid": "0A85E848A9054CA2A8112E4F78B3F80F", 00:40:08.203 "uuid": "0a85e848-a905-4ca2-a811-2e4f78b3f80f" 00:40:08.203 } 00:40:08.203 ] 00:40:08.203 } 00:40:08.203 ] 00:40:08.203 11:52:08 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:08.203 11:52:08 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:40:08.203 11:52:08 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:40:08.203 11:52:08 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:40:08.203 11:52:08 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:40:08.203 11:52:08 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:40:08.203 11:52:08 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:40:08.203 11:52:08 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:40:08.203 11:52:08 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:40:08.203 11:52:08 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:40:08.203 11:52:08 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:40:08.203 11:52:08 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:08.203 11:52:08 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:08.203 11:52:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:08.203 11:52:08 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:08.203 11:52:08 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:40:08.203 11:52:08 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:40:08.203 11:52:08 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:08.203 11:52:08 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:40:08.203 11:52:08 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:08.203 11:52:08 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:40:08.203 11:52:08 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:08.203 11:52:08 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:08.203 rmmod nvme_tcp 00:40:08.203 rmmod nvme_fabrics 00:40:08.203 rmmod nvme_keyring 00:40:08.203 11:52:08 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:08.203 11:52:08 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:40:08.203 11:52:08 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:40:08.203 11:52:08 nvmf_identify_passthru -- nvmf/common.sh@517 -- # 
'[' -n 4039434 ']' 00:40:08.203 11:52:08 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 4039434 00:40:08.203 11:52:08 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # '[' -z 4039434 ']' 00:40:08.203 11:52:08 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # kill -0 4039434 00:40:08.203 11:52:08 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # uname 00:40:08.203 11:52:08 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:40:08.203 11:52:08 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4039434 00:40:08.203 11:52:08 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:40:08.203 11:52:08 nvmf_identify_passthru -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:40:08.203 11:52:08 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4039434' 00:40:08.203 killing process with pid 4039434 00:40:08.203 11:52:08 nvmf_identify_passthru -- common/autotest_common.sh@971 -- # kill 4039434 00:40:08.203 11:52:08 nvmf_identify_passthru -- common/autotest_common.sh@976 -- # wait 4039434 00:40:10.107 11:52:10 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:10.107 11:52:10 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:10.107 11:52:10 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:10.107 11:52:10 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:40:10.107 11:52:10 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:40:10.107 11:52:10 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:10.107 11:52:10 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:40:10.107 11:52:10 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:10.107 11:52:10 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:10.107 11:52:10 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:10.107 11:52:10 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:10.107 11:52:10 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:12.062 11:52:12 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:12.062 00:40:12.062 real 0m18.095s 00:40:12.062 user 0m26.883s 00:40:12.062 sys 0m2.303s 00:40:12.062 11:52:12 nvmf_identify_passthru -- common/autotest_common.sh@1128 -- # xtrace_disable 00:40:12.062 11:52:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:12.062 ************************************ 00:40:12.062 END TEST nvmf_identify_passthru 00:40:12.062 ************************************ 00:40:12.062 11:52:12 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:40:12.062 11:52:12 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:40:12.062 11:52:12 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:40:12.062 11:52:12 -- common/autotest_common.sh@10 -- # set +x 00:40:12.062 ************************************ 00:40:12.062 START TEST nvmf_dif 00:40:12.062 ************************************ 00:40:12.062 11:52:12 nvmf_dif -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:40:12.062 * Looking for test 
storage... 00:40:12.062 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:12.062 11:52:12 nvmf_dif -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:40:12.062 11:52:12 nvmf_dif -- common/autotest_common.sh@1691 -- # lcov --version 00:40:12.062 11:52:12 nvmf_dif -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:40:12.062 11:52:12 nvmf_dif -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:40:12.062 11:52:12 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:12.062 11:52:12 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:12.062 11:52:12 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:12.062 11:52:12 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:40:12.062 11:52:12 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:40:12.062 11:52:12 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:40:12.062 11:52:12 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:40:12.062 11:52:12 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:40:12.062 11:52:12 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:40:12.062 11:52:12 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:40:12.062 11:52:12 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:12.062 11:52:12 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:40:12.062 11:52:12 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:40:12.062 11:52:12 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:12.062 11:52:12 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:12.063 11:52:12 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:40:12.063 11:52:12 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:40:12.063 11:52:12 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:12.063 11:52:12 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:40:12.063 11:52:12 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:40:12.063 11:52:12 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:40:12.063 11:52:12 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:40:12.063 11:52:12 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:12.063 11:52:12 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:40:12.063 11:52:12 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:40:12.063 11:52:12 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:12.063 11:52:12 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:12.063 11:52:12 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:40:12.063 11:52:12 nvmf_dif -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:12.063 11:52:12 nvmf_dif -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:40:12.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:12.063 --rc genhtml_branch_coverage=1 00:40:12.063 --rc genhtml_function_coverage=1 00:40:12.063 --rc genhtml_legend=1 00:40:12.063 --rc geninfo_all_blocks=1 00:40:12.063 --rc geninfo_unexecuted_blocks=1 00:40:12.063 00:40:12.063 ' 00:40:12.063 11:52:12 nvmf_dif -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:40:12.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:12.063 --rc genhtml_branch_coverage=1 00:40:12.063 --rc genhtml_function_coverage=1 00:40:12.063 --rc genhtml_legend=1 00:40:12.063 --rc geninfo_all_blocks=1 00:40:12.063 --rc geninfo_unexecuted_blocks=1 00:40:12.063 00:40:12.063 ' 00:40:12.063 11:52:12 nvmf_dif -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:40:12.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:12.063 --rc genhtml_branch_coverage=1 00:40:12.063 --rc genhtml_function_coverage=1 00:40:12.063 --rc genhtml_legend=1 00:40:12.063 --rc geninfo_all_blocks=1 00:40:12.063 --rc geninfo_unexecuted_blocks=1 00:40:12.063 00:40:12.063 ' 00:40:12.063 11:52:12 nvmf_dif -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:40:12.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:12.063 --rc genhtml_branch_coverage=1 00:40:12.063 --rc genhtml_function_coverage=1 00:40:12.063 --rc genhtml_legend=1 00:40:12.063 --rc geninfo_all_blocks=1 00:40:12.063 --rc geninfo_unexecuted_blocks=1 00:40:12.063 00:40:12.063 ' 00:40:12.063 11:52:12 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:12.063 11:52:12 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:40:12.063 11:52:12 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:12.063 11:52:12 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:12.063 11:52:12 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:12.063 11:52:12 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:12.063 11:52:12 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:12.063 11:52:12 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:12.063 11:52:12 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:12.063 11:52:12 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:12.063 11:52:12 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:12.063 11:52:12 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:12.063 11:52:12 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:12.063 11:52:12 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:12.063 11:52:12 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:12.063 11:52:12 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:12.063 11:52:12 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:12.063 11:52:12 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:12.063 11:52:12 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:12.063 11:52:12 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:40:12.063 11:52:12 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:12.063 11:52:12 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:12.063 11:52:12 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:12.063 11:52:12 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:12.063 11:52:12 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:12.063 11:52:12 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:12.063 11:52:12 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:40:12.063 11:52:12 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:12.063 11:52:12 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:40:12.063 11:52:12 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:12.063 11:52:12 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:12.063 11:52:12 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:12.063 11:52:12 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:12.063 11:52:12 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:12.063 11:52:12 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:12.063 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:12.063 11:52:12 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:12.063 11:52:12 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:12.063 11:52:12 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:12.063 11:52:12 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:40:12.063 11:52:12 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:40:12.063 11:52:12 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:40:12.063 11:52:12 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:40:12.063 11:52:12 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:40:12.063 11:52:12 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:12.063 11:52:12 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:12.063 11:52:12 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:12.063 11:52:12 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:12.063 11:52:12 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:12.063 11:52:12 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:12.063 11:52:12 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:12.063 11:52:12 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:12.063 11:52:12 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:12.063 11:52:12 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:12.063 11:52:12 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:40:12.063 11:52:12 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:40:13.969 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:13.969 
11:52:14 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:40:13.969 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:40:13.969 Found net devices under 0000:0a:00.0: cvl_0_0 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:40:13.969 Found net devices under 0000:0a:00.1: cvl_0_1 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:13.969 11:52:14 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:13.970 11:52:14 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:14.229 11:52:14 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:14.229 11:52:14 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:14.229 11:52:14 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:14.229 11:52:14 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:14.229 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:14.229 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:40:14.229 00:40:14.229 --- 10.0.0.2 ping statistics --- 00:40:14.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:14.229 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:40:14.229 11:52:14 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:14.229 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:14.229 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:40:14.229 00:40:14.229 --- 10.0.0.1 ping statistics --- 00:40:14.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:14.229 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:40:14.229 11:52:14 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:14.229 11:52:14 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:40:14.229 11:52:14 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:40:14.229 11:52:14 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:40:15.165 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:40:15.165 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:40:15.165 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:40:15.166 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:40:15.166 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:40:15.166 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:40:15.166 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:40:15.166 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:40:15.166 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:40:15.166 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:40:15.166 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:40:15.166 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:40:15.166 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:40:15.166 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:40:15.166 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:40:15.166 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:40:15.166 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:40:15.426 11:52:15 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:15.426 11:52:15 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:15.426 11:52:15 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:15.426 11:52:15 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:15.426 11:52:15 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:15.426 11:52:15 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:15.426 11:52:15 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:40:15.426 11:52:15 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:40:15.426 11:52:15 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:15.426 11:52:15 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:15.426 11:52:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:15.426 11:52:15 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=4042699 00:40:15.426 11:52:15 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:40:15.426 11:52:15 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 4042699 00:40:15.426 11:52:15 nvmf_dif -- common/autotest_common.sh@833 -- # '[' -z 4042699 ']' 00:40:15.426 11:52:15 nvmf_dif -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:15.426 11:52:15 nvmf_dif -- common/autotest_common.sh@838 -- # local max_retries=100 00:40:15.426 11:52:15 nvmf_dif -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:40:15.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:15.426 11:52:15 nvmf_dif -- common/autotest_common.sh@842 -- # xtrace_disable 00:40:15.426 11:52:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:15.426 [2024-11-02 11:52:15.801018] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:40:15.426 [2024-11-02 11:52:15.801118] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:15.685 [2024-11-02 11:52:15.877193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:15.685 [2024-11-02 11:52:15.923707] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:15.685 [2024-11-02 11:52:15.923753] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:15.685 [2024-11-02 11:52:15.923782] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:15.685 [2024-11-02 11:52:15.923793] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:15.685 [2024-11-02 11:52:15.923802] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:15.685 [2024-11-02 11:52:15.924394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:15.685 11:52:16 nvmf_dif -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:40:15.685 11:52:16 nvmf_dif -- common/autotest_common.sh@866 -- # return 0 00:40:15.685 11:52:16 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:15.685 11:52:16 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:15.685 11:52:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:15.685 11:52:16 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:15.685 11:52:16 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:40:15.685 11:52:16 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:40:15.685 11:52:16 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:15.685 11:52:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:15.685 [2024-11-02 11:52:16.064254] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:15.685 11:52:16 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:15.685 11:52:16 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:40:15.685 11:52:16 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:40:15.685 11:52:16 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:40:15.685 11:52:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:15.944 ************************************ 00:40:15.944 START TEST fio_dif_1_default 00:40:15.944 ************************************ 00:40:15.944 11:52:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1127 -- # fio_dif_1 00:40:15.944 11:52:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:40:15.944 11:52:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:40:15.944 11:52:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:40:15.944 11:52:16 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:40:15.944 11:52:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:40:15.944 11:52:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:40:15.944 11:52:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:15.944 11:52:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:15.944 bdev_null0 00:40:15.944 11:52:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:15.944 11:52:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:40:15.944 11:52:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:15.944 11:52:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:15.944 11:52:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:15.944 11:52:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:40:15.944 11:52:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:15.944 11:52:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:15.944 11:52:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:15.944 11:52:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:15.944 11:52:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:15.944 11:52:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:15.944 [2024-11-02 11:52:16.120583] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:15.944 11:52:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:15.944 11:52:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:40:15.944 11:52:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:40:15.944 11:52:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:40:15.944 11:52:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:40:15.944 11:52:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:40:15.944 11:52:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:15.944 11:52:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:15.944 11:52:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:15.944 { 00:40:15.944 "params": { 00:40:15.944 "name": "Nvme$subsystem", 00:40:15.944 "trtype": "$TEST_TRANSPORT", 00:40:15.944 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:15.944 "adrfam": "ipv4", 00:40:15.944 "trsvcid": "$NVMF_PORT", 00:40:15.944 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:15.944 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:15.944 "hdgst": ${hdgst:-false}, 00:40:15.944 "ddgst": ${ddgst:-false} 00:40:15.944 }, 00:40:15.944 "method": "bdev_nvme_attach_controller" 00:40:15.944 } 00:40:15.944 EOF 00:40:15.944 )") 00:40:15.944 11:52:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 
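Condensed for reference, the target-side setup that fio_dif_1_default has just performed in the trace above reduces to a short RPC sequence. This is only a restatement of commands already visible in the log, not an extra step; rpc_cmd is the autotest wrapper that forwards its arguments to the running target's JSON-RPC socket (scripts/rpc.py takes the same arguments outside the harness), and the null-bdev arguments come from NULL_SIZE=64, NULL_BLOCK_SIZE=512, NULL_META=16 and NULL_DIF=1 set at the top of dif.sh:

  # transport created earlier by dif.sh with DIF insert/strip enabled on NVMe/TCP
  rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip
  # null bdev with 512-byte blocks, 16-byte metadata and protection information type 1
  rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  # export it as namespace 1 of cnode0, listening on the namespaced target address
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420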
00:40:15.944 11:52:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:15.944 11:52:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:40:15.944 11:52:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:40:15.944 11:52:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:40:15.944 11:52:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:15.944 11:52:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local sanitizers 00:40:15.944 11:52:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:15.944 11:52:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # shift 00:40:15.944 11:52:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # local asan_lib= 00:40:15.944 11:52:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:40:15.944 11:52:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:40:15.944 11:52:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:40:15.944 11:52:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:15.944 11:52:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:40:15.944 11:52:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libasan 00:40:15.944 11:52:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:40:15.944 11:52:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
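The initiator side here is fio driven through SPDK's bdev plugin rather than the kernel NVMe/TCP driver: build/fio/spdk_bdev is LD_PRELOADed and fio receives the bdev JSON config on /dev/fd/62 (its contents are printed just below) plus the job file that gen_fio_conf writes to /dev/fd/61. The job file itself is never echoed in the trace, so the sketch below reconstructs it from the header fio prints (one job named filename0, randread, 4096-byte blocks, iodepth 4); the filename, runtime and thread settings are assumptions, not values taken from the log:

  # hypothetical stand-alone rerun; the test itself passes both files via /dev/fd redirections
  cat > /tmp/filename0.fio <<'EOF'
  [filename0]
  ioengine=spdk_bdev
  thread=1              # the SPDK bdev plugin uses fio's thread mode
  rw=randread
  bs=4096
  iodepth=4
  filename=Nvme0n1      # assumed: the bdev exposed for cnode0's first namespace
  runtime=10            # assumed from the ~10007 msec run below
  time_based=1
  EOF
  # /tmp/bdev.json = the bdev_nvme_attach_controller config printed just below
  LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --spdk_json_conf /tmp/bdev.json /tmp/filename0.fio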
00:40:15.944 11:52:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:40:15.944 11:52:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:15.944 "params": { 00:40:15.944 "name": "Nvme0", 00:40:15.944 "trtype": "tcp", 00:40:15.944 "traddr": "10.0.0.2", 00:40:15.944 "adrfam": "ipv4", 00:40:15.944 "trsvcid": "4420", 00:40:15.945 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:15.945 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:15.945 "hdgst": false, 00:40:15.945 "ddgst": false 00:40:15.945 }, 00:40:15.945 "method": "bdev_nvme_attach_controller" 00:40:15.945 }' 00:40:15.945 11:52:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:40:15.945 11:52:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:40:15.945 11:52:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:40:15.945 11:52:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:15.945 11:52:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:40:15.945 11:52:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:40:15.945 11:52:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:40:15.945 11:52:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:40:15.945 11:52:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:15.945 11:52:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:16.203 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:40:16.203 fio-3.35 00:40:16.203 Starting 1 thread 00:40:28.400 00:40:28.400 filename0: (groupid=0, jobs=1): err= 0: pid=4042927: Sat Nov 2 11:52:26 2024 00:40:28.400 read: IOPS=189, BW=758KiB/s (776kB/s)(7584KiB/10007msec) 00:40:28.400 slat (nsec): min=3610, max=63984, avg=8923.36, stdev=3989.74 00:40:28.400 clat (usec): min=727, max=46957, avg=21083.46, stdev=20179.53 00:40:28.400 lat (usec): min=734, max=46981, avg=21092.39, stdev=20179.63 00:40:28.400 clat percentiles (usec): 00:40:28.400 | 1.00th=[ 734], 5.00th=[ 750], 10.00th=[ 766], 20.00th=[ 783], 00:40:28.400 | 30.00th=[ 807], 40.00th=[ 922], 50.00th=[41157], 60.00th=[41157], 00:40:28.400 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:40:28.400 | 99.00th=[41681], 99.50th=[42206], 99.90th=[46924], 99.95th=[46924], 00:40:28.400 | 99.99th=[46924] 00:40:28.400 bw ( KiB/s): min= 672, max= 768, per=99.75%, avg=756.80, stdev=28.00, samples=20 00:40:28.400 iops : min= 168, max= 192, avg=189.20, stdev= 7.00, samples=20 00:40:28.400 lat (usec) : 750=4.38%, 1000=44.57% 00:40:28.400 lat (msec) : 2=0.84%, 50=50.21% 00:40:28.400 cpu : usr=91.19%, sys=8.48%, ctx=25, majf=0, minf=329 00:40:28.400 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:28.400 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:28.400 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:28.400 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:28.400 latency : target=0, window=0, percentile=100.00%, depth=4 
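As a quick sanity check on the fio summary above, the reported throughput follows directly from the issued I/O count (issued rwts: total=1896), the 4096-byte block size and the 10007 msec run time (fio reports both KiB = 1024 bytes and kB = 1000 bytes):

  echo $(( 1896 * 4096 / 1024 ))                   # 7584    -> total KiB read, matching io=7584KiB
  echo $(( 1896 * 4096 ))                          # 7766016 -> ~7766 kB, matching (7766kB)
  awk 'BEGIN { print 7584/10.007, 1896/10.007 }'   # ~757.9 KiB/s and ~189.5 IOPS -> BW=758KiB/s, IOPS=189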
00:40:28.400 00:40:28.400 Run status group 0 (all jobs): 00:40:28.400 READ: bw=758KiB/s (776kB/s), 758KiB/s-758KiB/s (776kB/s-776kB/s), io=7584KiB (7766kB), run=10007-10007msec 00:40:28.400 11:52:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:40:28.400 11:52:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:40:28.400 11:52:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:40:28.400 11:52:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:40:28.400 11:52:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:40:28.400 11:52:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:28.400 11:52:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:28.400 11:52:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:28.400 11:52:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:28.400 11:52:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:40:28.400 11:52:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:28.400 11:52:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:28.400 11:52:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:28.400 00:40:28.400 real 0m11.173s 00:40:28.400 user 0m10.276s 00:40:28.400 sys 0m1.110s 00:40:28.400 11:52:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1128 -- # xtrace_disable 00:40:28.400 11:52:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:28.400 ************************************ 00:40:28.400 END TEST fio_dif_1_default 00:40:28.400 ************************************ 00:40:28.400 11:52:27 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:40:28.400 11:52:27 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:40:28.400 11:52:27 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:40:28.400 11:52:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:28.400 ************************************ 00:40:28.400 START TEST fio_dif_1_multi_subsystems 00:40:28.400 ************************************ 00:40:28.400 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1127 -- # fio_dif_1_multi_subsystems 00:40:28.400 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:40:28.400 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:40:28.400 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:40:28.400 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:40:28.400 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:40:28.400 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:40:28.400 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:40:28.400 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:28.400 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:28.400 bdev_null0 00:40:28.400 11:52:27 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:28.400 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:40:28.400 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:28.400 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:28.400 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:28.401 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:40:28.401 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:28.401 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:28.401 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:28.401 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:28.401 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:28.401 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:28.401 [2024-11-02 11:52:27.342573] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:28.401 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:28.401 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:40:28.401 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:40:28.401 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:40:28.401 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:40:28.401 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:28.401 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:28.401 bdev_null1 00:40:28.401 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:28.401 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:40:28.401 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:28.401 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:28.401 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:28.401 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:40:28.401 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:28.401 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:28.401 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:28.401 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:28.401 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:28.401 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:28.401 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:28.401 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:40:28.401 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:40:28.401 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:40:28.401 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:40:28.401 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:40:28.401 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:28.401 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:28.401 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:28.401 { 00:40:28.401 "params": { 00:40:28.401 "name": "Nvme$subsystem", 00:40:28.401 "trtype": "$TEST_TRANSPORT", 00:40:28.401 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:28.401 "adrfam": "ipv4", 00:40:28.401 "trsvcid": "$NVMF_PORT", 00:40:28.401 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:28.401 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:28.401 "hdgst": ${hdgst:-false}, 00:40:28.401 "ddgst": ${ddgst:-false} 00:40:28.401 }, 00:40:28.401 "method": "bdev_nvme_attach_controller" 00:40:28.401 } 00:40:28.401 EOF 00:40:28.401 )") 00:40:28.401 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:40:28.401 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:40:28.401 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:28.401 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:40:28.401 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:40:28.401 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:28.401 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local sanitizers 00:40:28.401 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:28.401 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # shift 00:40:28.401 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:40:28.401 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # local asan_lib= 00:40:28.401 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:40:28.401 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:40:28.401 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:28.401 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:40:28.401 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libasan 00:40:28.401 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:40:28.401 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:40:28.401 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:28.401 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:28.401 { 00:40:28.401 "params": { 00:40:28.401 "name": "Nvme$subsystem", 00:40:28.401 "trtype": "$TEST_TRANSPORT", 00:40:28.401 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:28.401 "adrfam": "ipv4", 00:40:28.401 "trsvcid": "$NVMF_PORT", 00:40:28.401 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:28.401 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:28.401 "hdgst": ${hdgst:-false}, 00:40:28.401 "ddgst": ${ddgst:-false} 00:40:28.401 }, 00:40:28.401 "method": "bdev_nvme_attach_controller" 00:40:28.401 } 00:40:28.401 EOF 00:40:28.401 )") 00:40:28.401 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:40:28.401 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:40:28.401 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:40:28.401 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:40:28.401 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:40:28.401 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:28.401 "params": { 00:40:28.401 "name": "Nvme0", 00:40:28.401 "trtype": "tcp", 00:40:28.401 "traddr": "10.0.0.2", 00:40:28.401 "adrfam": "ipv4", 00:40:28.401 "trsvcid": "4420", 00:40:28.401 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:28.401 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:28.401 "hdgst": false, 00:40:28.401 "ddgst": false 00:40:28.401 }, 00:40:28.401 "method": "bdev_nvme_attach_controller" 00:40:28.401 },{ 00:40:28.401 "params": { 00:40:28.401 "name": "Nvme1", 00:40:28.401 "trtype": "tcp", 00:40:28.401 "traddr": "10.0.0.2", 00:40:28.401 "adrfam": "ipv4", 00:40:28.401 "trsvcid": "4420", 00:40:28.401 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:28.401 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:28.401 "hdgst": false, 00:40:28.401 "ddgst": false 00:40:28.401 }, 00:40:28.401 "method": "bdev_nvme_attach_controller" 00:40:28.401 }' 00:40:28.401 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # asan_lib= 00:40:28.401 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:40:28.401 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:40:28.401 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:28.401 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:40:28.401 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:40:28.401 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1347 -- # asan_lib= 00:40:28.401 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:40:28.401 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:28.401 11:52:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:28.401 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:40:28.401 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:40:28.401 fio-3.35 00:40:28.401 Starting 2 threads 00:40:38.368 00:40:38.368 filename0: (groupid=0, jobs=1): err= 0: pid=4044332: Sat Nov 2 11:52:38 2024 00:40:38.368 read: IOPS=189, BW=760KiB/s (778kB/s)(7600KiB/10005msec) 00:40:38.368 slat (nsec): min=5968, max=54581, avg=9021.81, stdev=3056.04 00:40:38.368 clat (usec): min=751, max=46474, avg=21033.95, stdev=20156.55 00:40:38.368 lat (usec): min=759, max=46500, avg=21042.97, stdev=20156.36 00:40:38.368 clat percentiles (usec): 00:40:38.368 | 1.00th=[ 775], 5.00th=[ 791], 10.00th=[ 799], 20.00th=[ 840], 00:40:38.368 | 30.00th=[ 848], 40.00th=[ 865], 50.00th=[41157], 60.00th=[41157], 00:40:38.368 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:40:38.368 | 99.00th=[41681], 99.50th=[41681], 99.90th=[46400], 99.95th=[46400], 00:40:38.368 | 99.99th=[46400] 00:40:38.368 bw ( KiB/s): min= 704, max= 768, per=66.21%, avg=758.40, stdev=23.45, samples=20 00:40:38.368 iops : min= 176, max= 192, avg=189.60, stdev= 5.86, samples=20 00:40:38.368 lat (usec) : 1000=49.47% 00:40:38.368 lat (msec) : 2=0.42%, 50=50.11% 00:40:38.368 cpu : usr=94.82%, sys=4.87%, ctx=13, majf=0, minf=198 00:40:38.368 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:38.368 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:38.368 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:38.368 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:38.368 latency : target=0, window=0, percentile=100.00%, depth=4 00:40:38.368 filename1: (groupid=0, jobs=1): err= 0: pid=4044333: Sat Nov 2 11:52:38 2024 00:40:38.368 read: IOPS=96, BW=385KiB/s (395kB/s)(3856KiB/10006msec) 00:40:38.368 slat (nsec): min=6905, max=29119, avg=9362.63, stdev=3404.53 00:40:38.368 clat (usec): min=40861, max=46481, avg=41487.17, stdev=602.06 00:40:38.368 lat (usec): min=40868, max=46507, avg=41496.53, stdev=602.21 00:40:38.368 clat percentiles (usec): 00:40:38.368 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:40:38.368 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[42206], 00:40:38.368 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:40:38.368 | 99.00th=[42206], 99.50th=[42730], 99.90th=[46400], 99.95th=[46400], 00:40:38.368 | 99.99th=[46400] 00:40:38.368 bw ( KiB/s): min= 352, max= 416, per=33.54%, avg=384.00, stdev=10.38, samples=20 00:40:38.368 iops : min= 88, max= 104, avg=96.00, stdev= 2.60, samples=20 00:40:38.368 lat (msec) : 50=100.00% 00:40:38.368 cpu : usr=94.69%, sys=5.00%, ctx=16, majf=0, minf=82 00:40:38.368 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:38.368 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:40:38.368 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:38.368 issued rwts: total=964,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:38.368 latency : target=0, window=0, percentile=100.00%, depth=4 00:40:38.368 00:40:38.368 Run status group 0 (all jobs): 00:40:38.368 READ: bw=1145KiB/s (1172kB/s), 385KiB/s-760KiB/s (395kB/s-778kB/s), io=11.2MiB (11.7MB), run=10005-10006msec 00:40:38.368 11:52:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:40:38.368 11:52:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:40:38.368 11:52:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:40:38.368 11:52:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:40:38.368 11:52:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:40:38.368 11:52:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:38.368 11:52:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:38.368 11:52:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:38.368 11:52:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:38.368 11:52:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:40:38.368 11:52:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:38.368 11:52:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:38.368 11:52:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:38.368 11:52:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:40:38.368 11:52:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:40:38.368 11:52:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:40:38.368 11:52:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:38.368 11:52:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:38.368 11:52:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:38.368 11:52:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:38.368 11:52:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:40:38.368 11:52:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:38.368 11:52:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:38.368 11:52:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:38.368 00:40:38.368 real 0m11.332s 00:40:38.368 user 0m20.238s 00:40:38.368 sys 0m1.272s 00:40:38.368 11:52:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1128 -- # xtrace_disable 00:40:38.368 11:52:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:38.368 ************************************ 00:40:38.368 END TEST fio_dif_1_multi_subsystems 00:40:38.368 ************************************ 00:40:38.368 11:52:38 nvmf_dif -- target/dif.sh@143 -- # run_test 
fio_dif_rand_params fio_dif_rand_params 00:40:38.368 11:52:38 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:40:38.368 11:52:38 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:40:38.368 11:52:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:38.368 ************************************ 00:40:38.368 START TEST fio_dif_rand_params 00:40:38.369 ************************************ 00:40:38.369 11:52:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1127 -- # fio_dif_rand_params 00:40:38.369 11:52:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:40:38.369 11:52:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:40:38.369 11:52:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:40:38.369 11:52:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:40:38.369 11:52:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:40:38.369 11:52:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:40:38.369 11:52:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:40:38.369 11:52:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:40:38.369 11:52:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:40:38.369 11:52:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:40:38.369 11:52:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:40:38.369 11:52:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:40:38.369 11:52:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:40:38.369 11:52:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:38.369 11:52:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:38.369 bdev_null0 00:40:38.369 11:52:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:38.369 11:52:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:40:38.369 11:52:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:38.369 11:52:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:38.369 11:52:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:38.369 11:52:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:40:38.369 11:52:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:38.369 11:52:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:38.369 11:52:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:38.369 11:52:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:38.369 11:52:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:38.369 11:52:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:38.369 [2024-11-02 11:52:38.729373] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:38.369 
11:52:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:38.369 11:52:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:40:38.369 11:52:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:40:38.369 11:52:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:40:38.369 11:52:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:40:38.369 11:52:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:40:38.369 11:52:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:38.369 11:52:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:38.369 11:52:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:38.369 { 00:40:38.369 "params": { 00:40:38.369 "name": "Nvme$subsystem", 00:40:38.369 "trtype": "$TEST_TRANSPORT", 00:40:38.369 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:38.369 "adrfam": "ipv4", 00:40:38.369 "trsvcid": "$NVMF_PORT", 00:40:38.369 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:38.369 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:38.369 "hdgst": ${hdgst:-false}, 00:40:38.369 "ddgst": ${ddgst:-false} 00:40:38.369 }, 00:40:38.369 "method": "bdev_nvme_attach_controller" 00:40:38.369 } 00:40:38.369 EOF 00:40:38.369 )") 00:40:38.369 11:52:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:38.369 11:52:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:40:38.369 11:52:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:38.369 11:52:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:40:38.369 11:52:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:40:38.369 11:52:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:38.369 11:52:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:40:38.369 11:52:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:40:38.369 11:52:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:40:38.369 11:52:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:40:38.369 11:52:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:40:38.369 11:52:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:40:38.369 11:52:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:38.369 11:52:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:40:38.369 11:52:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:40:38.369 11:52:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:40:38.369 11:52:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:40:38.369 11:52:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:40:38.369 11:52:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:40:38.369 11:52:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:38.369 "params": { 00:40:38.369 "name": "Nvme0", 00:40:38.369 "trtype": "tcp", 00:40:38.369 "traddr": "10.0.0.2", 00:40:38.369 "adrfam": "ipv4", 00:40:38.369 "trsvcid": "4420", 00:40:38.369 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:38.369 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:38.369 "hdgst": false, 00:40:38.369 "ddgst": false 00:40:38.369 }, 00:40:38.369 "method": "bdev_nvme_attach_controller" 00:40:38.369 }' 00:40:38.369 11:52:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:40:38.369 11:52:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:40:38.369 11:52:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:40:38.369 11:52:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:38.369 11:52:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:40:38.369 11:52:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:40:38.627 11:52:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:40:38.627 11:52:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:40:38.627 11:52:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:38.627 11:52:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:38.627 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:40:38.627 ... 
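The job file itself is handed to fio on /dev/fd/61 and never echoed into the log; the banner above only reflects its effect (randread, 128KiB blocks, iodepth 3, three jobs, 5-second runtime, matching the NULL_DIF=3 parameters set earlier in this test). A hand-written job file that would produce the same banner might look like the sketch below; the Nvme0n1 filename, thread=1, and time_based are assumptions based on SPDK's usual namespace naming and the fio plugin's documented usage, not values this log confirms.
cat > /tmp/dif_rand.fio <<'EOF'
[global]
thread=1          ; assumed: the spdk_bdev engine runs jobs as threads
rw=randread
bs=128k
iodepth=3
runtime=5
time_based=1      ; assumed

[filename0]
filename=Nvme0n1  ; assumed bdev name for namespace 1 of controller Nvme0
numjobs=3
EOF
Such a file would be passed to the same LD_PRELOAD/fio command shown earlier, in place of /dev/fd/61.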
00:40:38.627 fio-3.35 00:40:38.627 Starting 3 threads 00:40:45.188 00:40:45.188 filename0: (groupid=0, jobs=1): err= 0: pid=4045727: Sat Nov 2 11:52:44 2024 00:40:45.188 read: IOPS=184, BW=23.1MiB/s (24.2MB/s)(116MiB/5006msec) 00:40:45.188 slat (nsec): min=4923, max=47068, avg=18882.79, stdev=5825.41 00:40:45.188 clat (usec): min=5599, max=88871, avg=16207.66, stdev=13535.86 00:40:45.188 lat (usec): min=5612, max=88900, avg=16226.54, stdev=13535.65 00:40:45.188 clat percentiles (usec): 00:40:45.188 | 1.00th=[ 6456], 5.00th=[ 7111], 10.00th=[ 8455], 20.00th=[ 9110], 00:40:45.188 | 30.00th=[ 9896], 40.00th=[11207], 50.00th=[12125], 60.00th=[13042], 00:40:45.188 | 70.00th=[13960], 80.00th=[15270], 90.00th=[47973], 95.00th=[51643], 00:40:45.188 | 99.00th=[56361], 99.50th=[58459], 99.90th=[88605], 99.95th=[88605], 00:40:45.188 | 99.99th=[88605] 00:40:45.188 bw ( KiB/s): min=12288, max=32768, per=32.38%, avg=23603.20, stdev=6778.28, samples=10 00:40:45.188 iops : min= 96, max= 256, avg=184.40, stdev=52.96, samples=10 00:40:45.188 lat (msec) : 10=30.49%, 20=58.27%, 50=3.89%, 100=7.35% 00:40:45.188 cpu : usr=93.97%, sys=5.53%, ctx=12, majf=0, minf=117 00:40:45.188 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:45.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:45.188 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:45.188 issued rwts: total=925,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:45.188 latency : target=0, window=0, percentile=100.00%, depth=3 00:40:45.188 filename0: (groupid=0, jobs=1): err= 0: pid=4045728: Sat Nov 2 11:52:44 2024 00:40:45.188 read: IOPS=213, BW=26.6MiB/s (27.9MB/s)(134MiB/5045msec) 00:40:45.188 slat (nsec): min=4517, max=42356, avg=14588.87, stdev=4982.14 00:40:45.188 clat (usec): min=4796, max=88447, avg=14018.17, stdev=11873.31 00:40:45.188 lat (usec): min=4809, max=88459, avg=14032.76, stdev=11873.26 00:40:45.188 clat percentiles (usec): 00:40:45.188 | 1.00th=[ 5932], 5.00th=[ 6194], 10.00th=[ 6456], 20.00th=[ 8225], 00:40:45.188 | 30.00th=[ 9110], 40.00th=[ 9896], 50.00th=[10945], 60.00th=[11863], 00:40:45.188 | 70.00th=[13042], 80.00th=[14091], 90.00th=[17171], 95.00th=[49546], 00:40:45.188 | 99.00th=[53216], 99.50th=[54789], 99.90th=[88605], 99.95th=[88605], 00:40:45.188 | 99.99th=[88605] 00:40:45.188 bw ( KiB/s): min=20224, max=34304, per=37.68%, avg=27468.80, stdev=4138.53, samples=10 00:40:45.188 iops : min= 158, max= 268, avg=214.60, stdev=32.33, samples=10 00:40:45.188 lat (msec) : 10=42.14%, 20=49.86%, 50=3.72%, 100=4.28% 00:40:45.188 cpu : usr=93.26%, sys=6.28%, ctx=10, majf=0, minf=140 00:40:45.188 IO depths : 1=1.0%, 2=99.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:45.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:45.188 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:45.188 issued rwts: total=1075,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:45.188 latency : target=0, window=0, percentile=100.00%, depth=3 00:40:45.188 filename0: (groupid=0, jobs=1): err= 0: pid=4045729: Sat Nov 2 11:52:44 2024 00:40:45.188 read: IOPS=174, BW=21.8MiB/s (22.8MB/s)(109MiB/5008msec) 00:40:45.188 slat (nsec): min=4996, max=75297, avg=14843.21, stdev=5657.93 00:40:45.188 clat (usec): min=6154, max=93978, avg=17187.59, stdev=14050.36 00:40:45.188 lat (usec): min=6167, max=93991, avg=17202.43, stdev=14049.90 00:40:45.188 clat percentiles (usec): 00:40:45.188 | 1.00th=[ 7046], 5.00th=[ 8094], 10.00th=[ 8848], 
20.00th=[ 9634], 00:40:45.188 | 30.00th=[10683], 40.00th=[11600], 50.00th=[12256], 60.00th=[12911], 00:40:45.188 | 70.00th=[14091], 80.00th=[15926], 90.00th=[49546], 95.00th=[52167], 00:40:45.188 | 99.00th=[54789], 99.50th=[55837], 99.90th=[93848], 99.95th=[93848], 00:40:45.188 | 99.99th=[93848] 00:40:45.188 bw ( KiB/s): min=18432, max=28672, per=30.55%, avg=22272.00, stdev=3758.54, samples=10 00:40:45.188 iops : min= 144, max= 224, avg=174.00, stdev=29.36, samples=10 00:40:45.188 lat (msec) : 10=24.51%, 20=62.43%, 50=4.35%, 100=8.71% 00:40:45.188 cpu : usr=93.95%, sys=5.59%, ctx=9, majf=0, minf=99 00:40:45.188 IO depths : 1=3.4%, 2=96.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:45.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:45.188 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:45.188 issued rwts: total=873,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:45.188 latency : target=0, window=0, percentile=100.00%, depth=3 00:40:45.188 00:40:45.188 Run status group 0 (all jobs): 00:40:45.188 READ: bw=71.2MiB/s (74.6MB/s), 21.8MiB/s-26.6MiB/s (22.8MB/s-27.9MB/s), io=359MiB (377MB), run=5006-5045msec 00:40:45.188 11:52:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:40:45.188 11:52:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:40:45.188 11:52:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:40:45.188 11:52:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:40:45.188 11:52:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:40:45.188 11:52:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:45.188 11:52:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:45.188 11:52:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:45.188 11:52:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:45.188 11:52:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:40:45.188 11:52:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:45.188 11:52:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:45.188 11:52:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:45.188 11:52:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:40:45.188 11:52:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:40:45.188 11:52:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:40:45.188 11:52:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:40:45.188 11:52:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:40:45.188 11:52:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:40:45.188 11:52:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:40:45.188 11:52:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:40:45.188 11:52:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:40:45.188 11:52:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:40:45.188 11:52:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:40:45.188 11:52:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # 
rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:40:45.188 11:52:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:45.188 11:52:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:45.188 bdev_null0 00:40:45.188 11:52:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:45.188 11:52:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:40:45.188 11:52:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:45.188 11:52:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:45.188 11:52:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:45.188 11:52:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:40:45.188 11:52:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:45.188 11:52:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:45.188 11:52:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:45.188 11:52:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:45.188 11:52:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:45.188 11:52:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:45.188 [2024-11-02 11:52:44.810470] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:45.188 11:52:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:45.188 11:52:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:40:45.188 11:52:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:40:45.188 11:52:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:40:45.188 11:52:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:40:45.188 11:52:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:45.188 11:52:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:45.188 bdev_null1 00:40:45.188 11:52:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:45.188 11:52:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:40:45.188 11:52:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:45.188 11:52:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:45.188 11:52:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:45.188 11:52:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:40:45.189 11:52:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:45.189 11:52:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:45.189 11:52:44 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:45.189 11:52:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:45.189 11:52:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:45.189 11:52:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:45.189 11:52:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:45.189 11:52:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:40:45.189 11:52:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:40:45.189 11:52:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:40:45.189 11:52:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:40:45.189 11:52:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:45.189 11:52:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:45.189 bdev_null2 00:40:45.189 11:52:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:45.189 11:52:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:40:45.189 11:52:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:45.189 11:52:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:45.189 11:52:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:45.189 11:52:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:40:45.189 11:52:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:45.189 11:52:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:45.189 11:52:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:45.189 11:52:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:40:45.189 11:52:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:45.189 11:52:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:45.189 11:52:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:45.189 11:52:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:40:45.189 11:52:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:40:45.189 11:52:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:40:45.189 11:52:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:40:45.189 11:52:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:40:45.189 11:52:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:45.189 11:52:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:45.189 11:52:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:45.189 { 00:40:45.189 "params": { 00:40:45.189 "name": 
"Nvme$subsystem", 00:40:45.189 "trtype": "$TEST_TRANSPORT", 00:40:45.189 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:45.189 "adrfam": "ipv4", 00:40:45.189 "trsvcid": "$NVMF_PORT", 00:40:45.189 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:45.189 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:45.189 "hdgst": ${hdgst:-false}, 00:40:45.189 "ddgst": ${ddgst:-false} 00:40:45.189 }, 00:40:45.189 "method": "bdev_nvme_attach_controller" 00:40:45.189 } 00:40:45.189 EOF 00:40:45.189 )") 00:40:45.189 11:52:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:40:45.189 11:52:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:45.189 11:52:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:40:45.189 11:52:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:40:45.189 11:52:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:40:45.189 11:52:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:45.189 11:52:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:40:45.189 11:52:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:45.189 11:52:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:40:45.189 11:52:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:40:45.189 11:52:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:40:45.189 11:52:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:40:45.189 11:52:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:40:45.189 11:52:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:45.189 11:52:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:40:45.189 11:52:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:40:45.189 11:52:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:40:45.189 11:52:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:40:45.189 11:52:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:45.189 11:52:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:45.189 { 00:40:45.189 "params": { 00:40:45.189 "name": "Nvme$subsystem", 00:40:45.189 "trtype": "$TEST_TRANSPORT", 00:40:45.189 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:45.189 "adrfam": "ipv4", 00:40:45.189 "trsvcid": "$NVMF_PORT", 00:40:45.189 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:45.189 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:45.189 "hdgst": ${hdgst:-false}, 00:40:45.189 "ddgst": ${ddgst:-false} 00:40:45.189 }, 00:40:45.189 "method": "bdev_nvme_attach_controller" 00:40:45.189 } 00:40:45.189 EOF 00:40:45.189 )") 00:40:45.189 11:52:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:40:45.189 11:52:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:40:45.189 11:52:44 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file <= files )) 00:40:45.189 11:52:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:40:45.189 11:52:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:40:45.189 11:52:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:40:45.189 11:52:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:45.189 11:52:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:45.189 { 00:40:45.189 "params": { 00:40:45.189 "name": "Nvme$subsystem", 00:40:45.189 "trtype": "$TEST_TRANSPORT", 00:40:45.189 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:45.189 "adrfam": "ipv4", 00:40:45.189 "trsvcid": "$NVMF_PORT", 00:40:45.189 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:45.189 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:45.189 "hdgst": ${hdgst:-false}, 00:40:45.189 "ddgst": ${ddgst:-false} 00:40:45.189 }, 00:40:45.189 "method": "bdev_nvme_attach_controller" 00:40:45.189 } 00:40:45.189 EOF 00:40:45.189 )") 00:40:45.189 11:52:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:40:45.189 11:52:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:40:45.189 11:52:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:40:45.189 11:52:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:45.189 "params": { 00:40:45.189 "name": "Nvme0", 00:40:45.189 "trtype": "tcp", 00:40:45.189 "traddr": "10.0.0.2", 00:40:45.189 "adrfam": "ipv4", 00:40:45.189 "trsvcid": "4420", 00:40:45.189 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:45.189 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:45.189 "hdgst": false, 00:40:45.189 "ddgst": false 00:40:45.189 }, 00:40:45.189 "method": "bdev_nvme_attach_controller" 00:40:45.189 },{ 00:40:45.189 "params": { 00:40:45.189 "name": "Nvme1", 00:40:45.189 "trtype": "tcp", 00:40:45.189 "traddr": "10.0.0.2", 00:40:45.189 "adrfam": "ipv4", 00:40:45.189 "trsvcid": "4420", 00:40:45.189 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:45.189 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:45.189 "hdgst": false, 00:40:45.189 "ddgst": false 00:40:45.189 }, 00:40:45.189 "method": "bdev_nvme_attach_controller" 00:40:45.189 },{ 00:40:45.189 "params": { 00:40:45.189 "name": "Nvme2", 00:40:45.189 "trtype": "tcp", 00:40:45.189 "traddr": "10.0.0.2", 00:40:45.189 "adrfam": "ipv4", 00:40:45.189 "trsvcid": "4420", 00:40:45.189 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:40:45.189 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:40:45.189 "hdgst": false, 00:40:45.189 "ddgst": false 00:40:45.189 }, 00:40:45.189 "method": "bdev_nvme_attach_controller" 00:40:45.189 }' 00:40:45.189 11:52:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:40:45.189 11:52:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:40:45.189 11:52:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:40:45.189 11:52:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:45.189 11:52:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:40:45.189 11:52:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:40:45.189 11:52:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:40:45.189 11:52:44 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:40:45.189 11:52:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:45.189 11:52:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:45.189 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:40:45.189 ... 00:40:45.189 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:40:45.190 ... 00:40:45.190 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:40:45.190 ... 00:40:45.190 fio-3.35 00:40:45.190 Starting 24 threads 00:40:57.558 00:40:57.558 filename0: (groupid=0, jobs=1): err= 0: pid=4046583: Sat Nov 2 11:52:56 2024 00:40:57.558 read: IOPS=427, BW=1711KiB/s (1752kB/s)(16.8MiB/10023msec) 00:40:57.558 slat (usec): min=10, max=185, avg=43.96, stdev=19.16 00:40:57.558 clat (msec): min=22, max=238, avg=37.01, stdev=22.90 00:40:57.558 lat (msec): min=23, max=239, avg=37.05, stdev=22.90 00:40:57.558 clat percentiles (msec): 00:40:57.558 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:40:57.558 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:40:57.558 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 36], 00:40:57.558 | 99.00th=[ 176], 99.50th=[ 180], 99.90th=[ 232], 99.95th=[ 234], 00:40:57.558 | 99.99th=[ 239] 00:40:57.558 bw ( KiB/s): min= 384, max= 1920, per=4.18%, avg=1708.80, stdev=517.49, samples=20 00:40:57.558 iops : min= 96, max= 480, avg=427.20, stdev=129.37, samples=20 00:40:57.558 lat (msec) : 50=96.64%, 100=0.75%, 250=2.61% 00:40:57.558 cpu : usr=93.57%, sys=3.56%, ctx=471, majf=0, minf=45 00:40:57.558 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:40:57.558 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:57.558 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:57.558 issued rwts: total=4288,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:57.558 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:57.558 filename0: (groupid=0, jobs=1): err= 0: pid=4046584: Sat Nov 2 11:52:56 2024 00:40:57.558 read: IOPS=425, BW=1701KiB/s (1742kB/s)(16.6MiB/10008msec) 00:40:57.558 slat (usec): min=11, max=174, avg=36.44, stdev=11.44 00:40:57.558 clat (msec): min=22, max=239, avg=37.30, stdev=24.52 00:40:57.558 lat (msec): min=22, max=239, avg=37.33, stdev=24.52 00:40:57.558 clat percentiles (msec): 00:40:57.558 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:40:57.558 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:40:57.558 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:40:57.558 | 99.00th=[ 180], 99.50th=[ 182], 99.90th=[ 197], 99.95th=[ 197], 00:40:57.558 | 99.99th=[ 241] 00:40:57.558 bw ( KiB/s): min= 256, max= 2048, per=4.15%, avg=1696.00, stdev=552.31, samples=20 00:40:57.558 iops : min= 64, max= 512, avg=424.00, stdev=138.08, samples=20 00:40:57.558 lat (msec) : 50=96.99%, 100=0.05%, 250=2.96% 00:40:57.558 cpu : usr=96.87%, sys=1.96%, ctx=71, majf=0, minf=38 00:40:57.558 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:40:57.558 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:40:57.558 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:57.558 issued rwts: total=4256,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:57.558 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:57.558 filename0: (groupid=0, jobs=1): err= 0: pid=4046585: Sat Nov 2 11:52:56 2024 00:40:57.558 read: IOPS=426, BW=1707KiB/s (1748kB/s)(16.7MiB/10012msec) 00:40:57.558 slat (nsec): min=8076, max=81541, avg=33966.67, stdev=9228.02 00:40:57.558 clat (msec): min=9, max=245, avg=37.18, stdev=25.00 00:40:57.558 lat (msec): min=9, max=245, avg=37.21, stdev=25.00 00:40:57.558 clat percentiles (msec): 00:40:57.558 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:40:57.558 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:40:57.558 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:40:57.558 | 99.00th=[ 176], 99.50th=[ 190], 99.90th=[ 245], 99.95th=[ 245], 00:40:57.558 | 99.99th=[ 245] 00:40:57.558 bw ( KiB/s): min= 256, max= 1920, per=4.16%, avg=1702.40, stdev=538.43, samples=20 00:40:57.558 iops : min= 64, max= 480, avg=425.60, stdev=134.61, samples=20 00:40:57.558 lat (msec) : 10=0.16%, 20=0.21%, 50=96.63%, 100=0.37%, 250=2.62% 00:40:57.558 cpu : usr=96.60%, sys=2.06%, ctx=61, majf=0, minf=33 00:40:57.558 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:40:57.558 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:57.558 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:57.558 issued rwts: total=4272,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:57.558 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:57.558 filename0: (groupid=0, jobs=1): err= 0: pid=4046586: Sat Nov 2 11:52:56 2024 00:40:57.558 read: IOPS=426, BW=1705KiB/s (1746kB/s)(16.7MiB/10021msec) 00:40:57.558 slat (usec): min=7, max=247, avg=41.54, stdev=24.87 00:40:57.558 clat (msec): min=21, max=190, avg=37.22, stdev=23.10 00:40:57.558 lat (msec): min=21, max=190, avg=37.26, stdev=23.10 00:40:57.558 clat percentiles (msec): 00:40:57.558 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:40:57.558 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:40:57.558 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 36], 00:40:57.558 | 99.00th=[ 174], 99.50th=[ 176], 99.90th=[ 190], 99.95th=[ 190], 00:40:57.558 | 99.99th=[ 192] 00:40:57.558 bw ( KiB/s): min= 384, max= 2032, per=4.16%, avg=1702.40, stdev=536.08, samples=20 00:40:57.558 iops : min= 96, max= 508, avg=425.60, stdev=134.02, samples=20 00:40:57.558 lat (msec) : 50=96.63%, 100=0.37%, 250=3.00% 00:40:57.558 cpu : usr=92.26%, sys=3.90%, ctx=181, majf=0, minf=50 00:40:57.558 IO depths : 1=1.1%, 2=7.4%, 4=25.0%, 8=55.1%, 16=11.4%, 32=0.0%, >=64=0.0% 00:40:57.558 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:57.558 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:57.558 issued rwts: total=4272,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:57.558 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:57.558 filename0: (groupid=0, jobs=1): err= 0: pid=4046587: Sat Nov 2 11:52:56 2024 00:40:57.558 read: IOPS=426, BW=1707KiB/s (1748kB/s)(16.7MiB/10009msec) 00:40:57.558 slat (nsec): min=8057, max=76367, avg=32217.97, stdev=12243.62 00:40:57.558 clat (msec): min=10, max=294, avg=37.18, stdev=26.86 00:40:57.558 lat (msec): min=10, max=294, avg=37.21, stdev=26.87 00:40:57.558 clat percentiles (msec): 00:40:57.558 | 1.00th=[ 33], 
5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:40:57.558 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:40:57.558 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:40:57.558 | 99.00th=[ 182], 99.50th=[ 201], 99.90th=[ 296], 99.95th=[ 296], 00:40:57.558 | 99.99th=[ 296] 00:40:57.558 bw ( KiB/s): min= 256, max= 2048, per=4.16%, avg=1702.55, stdev=555.83, samples=20 00:40:57.558 iops : min= 64, max= 512, avg=425.60, stdev=138.94, samples=20 00:40:57.558 lat (msec) : 20=0.37%, 50=97.00%, 250=2.25%, 500=0.37% 00:40:57.558 cpu : usr=97.67%, sys=1.72%, ctx=103, majf=0, minf=31 00:40:57.558 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:40:57.558 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:57.558 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:57.558 issued rwts: total=4272,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:57.558 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:57.558 filename0: (groupid=0, jobs=1): err= 0: pid=4046588: Sat Nov 2 11:52:56 2024 00:40:57.558 read: IOPS=425, BW=1702KiB/s (1743kB/s)(16.6MiB/10004msec) 00:40:57.558 slat (nsec): min=8291, max=69668, avg=30656.46, stdev=10993.42 00:40:57.558 clat (msec): min=21, max=263, avg=37.34, stdev=25.40 00:40:57.558 lat (msec): min=21, max=263, avg=37.37, stdev=25.40 00:40:57.558 clat percentiles (msec): 00:40:57.558 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:40:57.558 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:40:57.558 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:40:57.558 | 99.00th=[ 176], 99.50th=[ 190], 99.90th=[ 249], 99.95th=[ 249], 00:40:57.558 | 99.99th=[ 264] 00:40:57.558 bw ( KiB/s): min= 256, max= 1920, per=4.12%, avg=1684.21, stdev=548.50, samples=19 00:40:57.558 iops : min= 64, max= 480, avg=421.05, stdev=137.13, samples=19 00:40:57.558 lat (msec) : 50=97.04%, 100=0.33%, 250=2.58%, 500=0.05% 00:40:57.558 cpu : usr=95.06%, sys=2.87%, ctx=88, majf=0, minf=33 00:40:57.558 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:40:57.558 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:57.558 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:57.558 issued rwts: total=4256,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:57.558 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:57.558 filename0: (groupid=0, jobs=1): err= 0: pid=4046589: Sat Nov 2 11:52:56 2024 00:40:57.558 read: IOPS=426, BW=1707KiB/s (1748kB/s)(16.7MiB/10008msec) 00:40:57.559 slat (usec): min=8, max=138, avg=41.80, stdev=17.38 00:40:57.559 clat (msec): min=10, max=293, avg=37.12, stdev=26.93 00:40:57.559 lat (msec): min=10, max=293, avg=37.16, stdev=26.93 00:40:57.559 clat percentiles (msec): 00:40:57.559 | 1.00th=[ 24], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:40:57.559 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:40:57.559 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:40:57.559 | 99.00th=[ 182], 99.50th=[ 201], 99.90th=[ 292], 99.95th=[ 292], 00:40:57.559 | 99.99th=[ 292] 00:40:57.559 bw ( KiB/s): min= 256, max= 2048, per=4.16%, avg=1702.55, stdev=555.83, samples=20 00:40:57.559 iops : min= 64, max= 512, avg=425.60, stdev=138.94, samples=20 00:40:57.559 lat (msec) : 20=0.37%, 50=97.00%, 250=2.25%, 500=0.37% 00:40:57.559 cpu : usr=96.58%, sys=2.15%, ctx=60, majf=0, minf=25 00:40:57.559 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 
8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:40:57.559 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:57.559 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:57.559 issued rwts: total=4272,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:57.559 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:57.559 filename0: (groupid=0, jobs=1): err= 0: pid=4046590: Sat Nov 2 11:52:56 2024 00:40:57.559 read: IOPS=429, BW=1718KiB/s (1759kB/s)(16.8MiB/10021msec) 00:40:57.559 slat (usec): min=4, max=104, avg=26.20, stdev=17.36 00:40:57.559 clat (msec): min=22, max=247, avg=37.04, stdev=21.76 00:40:57.559 lat (msec): min=22, max=247, avg=37.06, stdev=21.76 00:40:57.559 clat percentiles (msec): 00:40:57.559 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:40:57.559 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 34], 00:40:57.559 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 36], 00:40:57.559 | 99.00th=[ 174], 99.50th=[ 176], 99.90th=[ 180], 99.95th=[ 182], 00:40:57.559 | 99.99th=[ 247] 00:40:57.559 bw ( KiB/s): min= 384, max= 2048, per=4.20%, avg=1715.20, stdev=507.60, samples=20 00:40:57.559 iops : min= 96, max= 512, avg=428.80, stdev=126.90, samples=20 00:40:57.559 lat (msec) : 50=96.49%, 100=0.58%, 250=2.93% 00:40:57.559 cpu : usr=98.35%, sys=1.25%, ctx=22, majf=0, minf=30 00:40:57.559 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:40:57.559 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:57.559 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:57.559 issued rwts: total=4304,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:57.559 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:57.559 filename1: (groupid=0, jobs=1): err= 0: pid=4046591: Sat Nov 2 11:52:56 2024 00:40:57.559 read: IOPS=426, BW=1706KiB/s (1747kB/s)(16.7MiB/10008msec) 00:40:57.559 slat (usec): min=7, max=119, avg=28.51, stdev=21.69 00:40:57.559 clat (msec): min=10, max=293, avg=37.39, stdev=26.91 00:40:57.559 lat (msec): min=10, max=293, avg=37.42, stdev=26.91 00:40:57.559 clat percentiles (msec): 00:40:57.559 | 1.00th=[ 23], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:40:57.559 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:40:57.559 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 39], 00:40:57.559 | 99.00th=[ 182], 99.50th=[ 201], 99.90th=[ 292], 99.95th=[ 292], 00:40:57.559 | 99.99th=[ 292] 00:40:57.559 bw ( KiB/s): min= 256, max= 2048, per=4.17%, avg=1703.35, stdev=555.18, samples=20 00:40:57.559 iops : min= 64, max= 512, avg=425.80, stdev=138.78, samples=20 00:40:57.559 lat (msec) : 20=0.56%, 50=96.65%, 100=0.16%, 250=2.25%, 500=0.37% 00:40:57.559 cpu : usr=95.11%, sys=2.70%, ctx=255, majf=0, minf=35 00:40:57.559 IO depths : 1=0.2%, 2=0.7%, 4=2.5%, 8=78.9%, 16=17.6%, 32=0.0%, >=64=0.0% 00:40:57.559 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:57.559 complete : 0=0.0%, 4=89.8%, 8=9.5%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:57.559 issued rwts: total=4268,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:57.559 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:57.559 filename1: (groupid=0, jobs=1): err= 0: pid=4046592: Sat Nov 2 11:52:56 2024 00:40:57.559 read: IOPS=426, BW=1708KiB/s (1749kB/s)(16.7MiB/10021msec) 00:40:57.559 slat (nsec): min=7964, max=83921, avg=20719.58, stdev=14374.80 00:40:57.559 clat (msec): min=21, max=247, avg=37.32, stdev=24.65 00:40:57.559 lat 
(msec): min=21, max=247, avg=37.34, stdev=24.65 00:40:57.559 clat percentiles (msec): 00:40:57.559 | 1.00th=[ 23], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:40:57.559 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:40:57.559 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:40:57.559 | 99.00th=[ 178], 99.50th=[ 197], 99.90th=[ 232], 99.95th=[ 234], 00:40:57.559 | 99.99th=[ 249] 00:40:57.559 bw ( KiB/s): min= 256, max= 2096, per=4.17%, avg=1704.80, stdev=555.89, samples=20 00:40:57.559 iops : min= 64, max= 524, avg=426.20, stdev=138.97, samples=20 00:40:57.559 lat (msec) : 50=96.96%, 100=0.09%, 250=2.95% 00:40:57.559 cpu : usr=98.26%, sys=1.32%, ctx=20, majf=0, minf=42 00:40:57.559 IO depths : 1=6.0%, 2=12.2%, 4=24.7%, 8=50.7%, 16=6.5%, 32=0.0%, >=64=0.0% 00:40:57.559 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:57.559 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:57.559 issued rwts: total=4278,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:57.559 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:57.559 filename1: (groupid=0, jobs=1): err= 0: pid=4046593: Sat Nov 2 11:52:56 2024 00:40:57.559 read: IOPS=426, BW=1705KiB/s (1746kB/s)(16.7MiB/10020msec) 00:40:57.559 slat (usec): min=8, max=173, avg=37.51, stdev=16.44 00:40:57.559 clat (msec): min=21, max=263, avg=37.19, stdev=23.50 00:40:57.559 lat (msec): min=21, max=263, avg=37.23, stdev=23.50 00:40:57.559 clat percentiles (msec): 00:40:57.559 | 1.00th=[ 27], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:40:57.559 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:40:57.559 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 36], 00:40:57.559 | 99.00th=[ 174], 99.50th=[ 176], 99.90th=[ 209], 99.95th=[ 232], 00:40:57.559 | 99.99th=[ 264] 00:40:57.559 bw ( KiB/s): min= 368, max= 2048, per=4.16%, avg=1702.40, stdev=536.88, samples=20 00:40:57.559 iops : min= 92, max= 512, avg=425.60, stdev=134.22, samples=20 00:40:57.559 lat (msec) : 50=96.25%, 100=0.75%, 250=2.95%, 500=0.05% 00:40:57.559 cpu : usr=89.61%, sys=4.75%, ctx=261, majf=0, minf=34 00:40:57.559 IO depths : 1=5.5%, 2=11.8%, 4=25.0%, 8=50.7%, 16=7.0%, 32=0.0%, >=64=0.0% 00:40:57.559 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:57.559 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:57.559 issued rwts: total=4272,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:57.559 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:57.559 filename1: (groupid=0, jobs=1): err= 0: pid=4046594: Sat Nov 2 11:52:56 2024 00:40:57.559 read: IOPS=430, BW=1722KiB/s (1764kB/s)(16.9MiB/10023msec) 00:40:57.559 slat (nsec): min=5646, max=82048, avg=34271.40, stdev=11959.30 00:40:57.559 clat (msec): min=21, max=201, avg=36.86, stdev=19.67 00:40:57.559 lat (msec): min=21, max=201, avg=36.90, stdev=19.67 00:40:57.559 clat percentiles (msec): 00:40:57.559 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:40:57.559 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:40:57.559 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 38], 00:40:57.559 | 99.00th=[ 153], 99.50th=[ 169], 99.90th=[ 182], 99.95th=[ 203], 00:40:57.559 | 99.99th=[ 203] 00:40:57.559 bw ( KiB/s): min= 448, max= 1920, per=4.21%, avg=1720.00, stdev=490.12, samples=20 00:40:57.559 iops : min= 112, max= 480, avg=430.00, stdev=122.53, samples=20 00:40:57.559 lat (msec) : 50=95.97%, 100=1.02%, 250=3.01% 00:40:57.559 cpu : 
usr=97.53%, sys=1.73%, ctx=74, majf=0, minf=51 00:40:57.559 IO depths : 1=5.8%, 2=12.0%, 4=24.7%, 8=50.9%, 16=6.7%, 32=0.0%, >=64=0.0% 00:40:57.559 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:57.559 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:57.559 issued rwts: total=4316,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:57.559 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:57.559 filename1: (groupid=0, jobs=1): err= 0: pid=4046595: Sat Nov 2 11:52:56 2024 00:40:57.559 read: IOPS=426, BW=1707KiB/s (1748kB/s)(16.7MiB/10008msec) 00:40:57.559 slat (usec): min=8, max=136, avg=36.36, stdev=15.11 00:40:57.559 clat (msec): min=10, max=293, avg=37.17, stdev=26.97 00:40:57.559 lat (msec): min=10, max=293, avg=37.20, stdev=26.97 00:40:57.559 clat percentiles (msec): 00:40:57.559 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:40:57.559 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:40:57.559 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:40:57.559 | 99.00th=[ 182], 99.50th=[ 228], 99.90th=[ 292], 99.95th=[ 292], 00:40:57.559 | 99.99th=[ 292] 00:40:57.559 bw ( KiB/s): min= 240, max= 2048, per=4.16%, avg=1702.55, stdev=556.05, samples=20 00:40:57.559 iops : min= 60, max= 512, avg=425.60, stdev=139.00, samples=20 00:40:57.559 lat (msec) : 20=0.37%, 50=97.00%, 250=2.25%, 500=0.37% 00:40:57.559 cpu : usr=97.62%, sys=1.56%, ctx=50, majf=0, minf=32 00:40:57.559 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:40:57.559 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:57.559 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:57.559 issued rwts: total=4272,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:57.559 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:57.559 filename1: (groupid=0, jobs=1): err= 0: pid=4046597: Sat Nov 2 11:52:56 2024 00:40:57.559 read: IOPS=425, BW=1702KiB/s (1743kB/s)(16.6MiB/10003msec) 00:40:57.559 slat (usec): min=7, max=102, avg=34.53, stdev=12.45 00:40:57.559 clat (msec): min=22, max=299, avg=37.30, stdev=25.57 00:40:57.559 lat (msec): min=22, max=299, avg=37.33, stdev=25.57 00:40:57.559 clat percentiles (msec): 00:40:57.559 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:40:57.559 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:40:57.559 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:40:57.559 | 99.00th=[ 176], 99.50th=[ 230], 99.90th=[ 249], 99.95th=[ 264], 00:40:57.559 | 99.99th=[ 300] 00:40:57.559 bw ( KiB/s): min= 256, max= 1920, per=4.12%, avg=1684.21, stdev=548.50, samples=19 00:40:57.559 iops : min= 64, max= 480, avg=421.05, stdev=137.13, samples=19 00:40:57.559 lat (msec) : 50=97.04%, 100=0.33%, 250=2.54%, 500=0.09% 00:40:57.560 cpu : usr=97.95%, sys=1.49%, ctx=62, majf=0, minf=29 00:40:57.560 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:40:57.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:57.560 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:57.560 issued rwts: total=4256,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:57.560 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:57.560 filename1: (groupid=0, jobs=1): err= 0: pid=4046598: Sat Nov 2 11:52:56 2024 00:40:57.560 read: IOPS=425, BW=1701KiB/s (1742kB/s)(16.6MiB/10008msec) 00:40:57.560 slat (usec): min=7, max=107, avg=39.63, stdev=16.63 
00:40:57.560 clat (msec): min=31, max=238, avg=37.27, stdev=24.60 00:40:57.560 lat (msec): min=31, max=238, avg=37.31, stdev=24.60 00:40:57.560 clat percentiles (msec): 00:40:57.560 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:40:57.560 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:40:57.560 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:40:57.560 | 99.00th=[ 178], 99.50th=[ 197], 99.90th=[ 232], 99.95th=[ 234], 00:40:57.560 | 99.99th=[ 239] 00:40:57.560 bw ( KiB/s): min= 256, max= 2048, per=4.15%, avg=1696.00, stdev=552.31, samples=20 00:40:57.560 iops : min= 64, max= 512, avg=424.00, stdev=138.08, samples=20 00:40:57.560 lat (msec) : 50=96.99%, 250=3.01% 00:40:57.560 cpu : usr=96.84%, sys=1.99%, ctx=56, majf=0, minf=35 00:40:57.560 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:40:57.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:57.560 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:57.560 issued rwts: total=4256,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:57.560 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:57.560 filename1: (groupid=0, jobs=1): err= 0: pid=4046599: Sat Nov 2 11:52:56 2024 00:40:57.560 read: IOPS=425, BW=1702KiB/s (1743kB/s)(16.6MiB/10004msec) 00:40:57.560 slat (nsec): min=7010, max=78153, avg=33629.13, stdev=10564.21 00:40:57.560 clat (msec): min=22, max=300, avg=37.32, stdev=27.18 00:40:57.560 lat (msec): min=22, max=300, avg=37.35, stdev=27.18 00:40:57.560 clat percentiles (msec): 00:40:57.560 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:40:57.560 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:40:57.560 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:40:57.560 | 99.00th=[ 182], 99.50th=[ 201], 99.90th=[ 300], 99.95th=[ 300], 00:40:57.560 | 99.99th=[ 300] 00:40:57.560 bw ( KiB/s): min= 240, max= 2048, per=4.12%, avg=1684.21, stdev=565.48, samples=19 00:40:57.560 iops : min= 60, max= 512, avg=421.05, stdev=141.37, samples=19 00:40:57.560 lat (msec) : 50=97.37%, 250=2.26%, 500=0.38% 00:40:57.560 cpu : usr=97.29%, sys=2.03%, ctx=86, majf=0, minf=28 00:40:57.560 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:40:57.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:57.560 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:57.560 issued rwts: total=4256,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:57.560 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:57.560 filename2: (groupid=0, jobs=1): err= 0: pid=4046600: Sat Nov 2 11:52:56 2024 00:40:57.560 read: IOPS=426, BW=1707KiB/s (1748kB/s)(16.7MiB/10009msec) 00:40:57.560 slat (usec): min=8, max=112, avg=41.77, stdev=18.58 00:40:57.560 clat (msec): min=10, max=293, avg=37.13, stdev=26.89 00:40:57.560 lat (msec): min=10, max=293, avg=37.17, stdev=26.89 00:40:57.560 clat percentiles (msec): 00:40:57.560 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:40:57.560 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:40:57.560 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:40:57.560 | 99.00th=[ 182], 99.50th=[ 201], 99.90th=[ 292], 99.95th=[ 292], 00:40:57.560 | 99.99th=[ 292] 00:40:57.560 bw ( KiB/s): min= 256, max= 2048, per=4.16%, avg=1702.55, stdev=556.25, samples=20 00:40:57.560 iops : min= 64, max= 512, avg=425.60, stdev=139.05, samples=20 00:40:57.560 lat (msec) : 
20=0.37%, 50=97.00%, 250=2.25%, 500=0.37% 00:40:57.560 cpu : usr=92.18%, sys=3.92%, ctx=357, majf=0, minf=36 00:40:57.560 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:40:57.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:57.560 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:57.560 issued rwts: total=4272,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:57.560 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:57.560 filename2: (groupid=0, jobs=1): err= 0: pid=4046601: Sat Nov 2 11:52:56 2024 00:40:57.560 read: IOPS=426, BW=1707KiB/s (1748kB/s)(16.7MiB/10008msec) 00:40:57.560 slat (usec): min=8, max=121, avg=44.03, stdev=22.41 00:40:57.560 clat (msec): min=10, max=350, avg=37.10, stdev=27.19 00:40:57.560 lat (msec): min=10, max=350, avg=37.14, stdev=27.19 00:40:57.560 clat percentiles (msec): 00:40:57.560 | 1.00th=[ 23], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:40:57.560 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:40:57.560 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 43], 00:40:57.560 | 99.00th=[ 182], 99.50th=[ 199], 99.90th=[ 292], 99.95th=[ 292], 00:40:57.560 | 99.99th=[ 351] 00:40:57.560 bw ( KiB/s): min= 256, max= 2048, per=4.16%, avg=1702.55, stdev=555.54, samples=20 00:40:57.560 iops : min= 64, max= 512, avg=425.60, stdev=138.87, samples=20 00:40:57.560 lat (msec) : 20=0.37%, 50=97.00%, 250=2.25%, 500=0.37% 00:40:57.560 cpu : usr=94.90%, sys=2.79%, ctx=333, majf=0, minf=33 00:40:57.560 IO depths : 1=4.9%, 2=10.6%, 4=23.2%, 8=53.7%, 16=7.6%, 32=0.0%, >=64=0.0% 00:40:57.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:57.560 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:57.560 issued rwts: total=4272,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:57.560 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:57.560 filename2: (groupid=0, jobs=1): err= 0: pid=4046602: Sat Nov 2 11:52:56 2024 00:40:57.560 read: IOPS=427, BW=1710KiB/s (1751kB/s)(16.8MiB/10031msec) 00:40:57.560 slat (usec): min=6, max=112, avg=33.54, stdev=19.81 00:40:57.560 clat (msec): min=30, max=180, avg=37.12, stdev=22.30 00:40:57.560 lat (msec): min=30, max=180, avg=37.15, stdev=22.30 00:40:57.560 clat percentiles (msec): 00:40:57.560 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:40:57.560 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:40:57.560 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 36], 00:40:57.560 | 99.00th=[ 176], 99.50th=[ 176], 99.90th=[ 180], 99.95th=[ 182], 00:40:57.560 | 99.99th=[ 182] 00:40:57.560 bw ( KiB/s): min= 384, max= 2048, per=4.18%, avg=1708.85, stdev=522.34, samples=20 00:40:57.560 iops : min= 96, max= 512, avg=427.20, stdev=130.62, samples=20 00:40:57.560 lat (msec) : 50=96.27%, 100=0.75%, 250=2.99% 00:40:57.560 cpu : usr=97.09%, sys=2.01%, ctx=160, majf=0, minf=51 00:40:57.560 IO depths : 1=4.8%, 2=11.1%, 4=25.0%, 8=51.4%, 16=7.7%, 32=0.0%, >=64=0.0% 00:40:57.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:57.560 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:57.560 issued rwts: total=4288,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:57.560 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:57.560 filename2: (groupid=0, jobs=1): err= 0: pid=4046603: Sat Nov 2 11:52:56 2024 00:40:57.560 read: IOPS=425, BW=1701KiB/s (1742kB/s)(16.6MiB/10008msec) 
00:40:57.560 slat (usec): min=8, max=108, avg=36.16, stdev=13.89 00:40:57.560 clat (msec): min=22, max=247, avg=37.30, stdev=24.68 00:40:57.560 lat (msec): min=22, max=247, avg=37.34, stdev=24.67 00:40:57.560 clat percentiles (msec): 00:40:57.560 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:40:57.560 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:40:57.560 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:40:57.560 | 99.00th=[ 178], 99.50th=[ 197], 99.90th=[ 232], 99.95th=[ 234], 00:40:57.560 | 99.99th=[ 249] 00:40:57.560 bw ( KiB/s): min= 256, max= 2048, per=4.15%, avg=1696.00, stdev=552.31, samples=20 00:40:57.560 iops : min= 64, max= 512, avg=424.00, stdev=138.08, samples=20 00:40:57.560 lat (msec) : 50=96.99%, 100=0.05%, 250=2.96% 00:40:57.560 cpu : usr=97.95%, sys=1.46%, ctx=26, majf=0, minf=26 00:40:57.560 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:40:57.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:57.560 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:57.560 issued rwts: total=4256,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:57.560 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:57.560 filename2: (groupid=0, jobs=1): err= 0: pid=4046604: Sat Nov 2 11:52:56 2024 00:40:57.560 read: IOPS=424, BW=1700KiB/s (1740kB/s)(16.6MiB/10017msec) 00:40:57.560 slat (usec): min=6, max=117, avg=44.42, stdev=20.38 00:40:57.560 clat (msec): min=23, max=238, avg=37.23, stdev=24.52 00:40:57.560 lat (msec): min=23, max=238, avg=37.27, stdev=24.52 00:40:57.560 clat percentiles (msec): 00:40:57.560 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:40:57.560 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:40:57.560 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 36], 00:40:57.560 | 99.00th=[ 178], 99.50th=[ 194], 99.90th=[ 232], 99.95th=[ 234], 00:40:57.560 | 99.99th=[ 239] 00:40:57.560 bw ( KiB/s): min= 256, max= 2048, per=4.16%, avg=1701.60, stdev=542.54, samples=20 00:40:57.560 iops : min= 64, max= 512, avg=425.40, stdev=135.64, samples=20 00:40:57.560 lat (msec) : 50=96.99%, 250=3.01% 00:40:57.560 cpu : usr=96.12%, sys=2.37%, ctx=137, majf=0, minf=31 00:40:57.560 IO depths : 1=4.7%, 2=10.9%, 4=25.0%, 8=51.6%, 16=7.8%, 32=0.0%, >=64=0.0% 00:40:57.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:57.560 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:57.560 issued rwts: total=4256,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:57.560 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:57.560 filename2: (groupid=0, jobs=1): err= 0: pid=4046605: Sat Nov 2 11:52:56 2024 00:40:57.560 read: IOPS=426, BW=1705KiB/s (1746kB/s)(16.7MiB/10020msec) 00:40:57.560 slat (usec): min=8, max=109, avg=39.53, stdev=23.16 00:40:57.560 clat (msec): min=21, max=208, avg=37.19, stdev=23.16 00:40:57.560 lat (msec): min=21, max=208, avg=37.23, stdev=23.16 00:40:57.560 clat percentiles (msec): 00:40:57.561 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:40:57.561 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:40:57.561 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 36], 00:40:57.561 | 99.00th=[ 174], 99.50th=[ 176], 99.90th=[ 192], 99.95th=[ 192], 00:40:57.561 | 99.99th=[ 209] 00:40:57.561 bw ( KiB/s): min= 384, max= 2048, per=4.16%, avg=1702.40, stdev=538.43, samples=20 00:40:57.561 iops : min= 96, max= 512, 
avg=425.60, stdev=134.61, samples=20 00:40:57.561 lat (msec) : 50=96.58%, 100=0.42%, 250=3.00% 00:40:57.561 cpu : usr=96.76%, sys=2.00%, ctx=74, majf=0, minf=26 00:40:57.561 IO depths : 1=5.9%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.6%, 32=0.0%, >=64=0.0% 00:40:57.561 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:57.561 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:57.561 issued rwts: total=4272,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:57.561 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:57.561 filename2: (groupid=0, jobs=1): err= 0: pid=4046606: Sat Nov 2 11:52:56 2024 00:40:57.561 read: IOPS=427, BW=1710KiB/s (1751kB/s)(16.8MiB/10032msec) 00:40:57.561 slat (usec): min=4, max=204, avg=25.45, stdev=24.50 00:40:57.561 clat (msec): min=21, max=235, avg=37.21, stdev=22.90 00:40:57.561 lat (msec): min=21, max=235, avg=37.24, stdev=22.90 00:40:57.561 clat percentiles (msec): 00:40:57.561 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:40:57.561 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 34], 00:40:57.561 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 36], 00:40:57.561 | 99.00th=[ 174], 99.50th=[ 176], 99.90th=[ 220], 99.95th=[ 232], 00:40:57.561 | 99.99th=[ 236] 00:40:57.561 bw ( KiB/s): min= 384, max= 1920, per=4.18%, avg=1708.85, stdev=517.36, samples=20 00:40:57.561 iops : min= 96, max= 480, avg=427.20, stdev=129.37, samples=20 00:40:57.561 lat (msec) : 50=96.27%, 100=1.07%, 250=2.66% 00:40:57.561 cpu : usr=94.84%, sys=3.05%, ctx=185, majf=0, minf=30 00:40:57.561 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:40:57.561 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:57.561 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:57.561 issued rwts: total=4288,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:57.561 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:57.561 filename2: (groupid=0, jobs=1): err= 0: pid=4046607: Sat Nov 2 11:52:56 2024 00:40:57.561 read: IOPS=425, BW=1702KiB/s (1743kB/s)(16.6MiB/10001msec) 00:40:57.561 slat (usec): min=10, max=118, avg=33.18, stdev=11.35 00:40:57.561 clat (msec): min=22, max=296, avg=37.30, stdev=26.98 00:40:57.561 lat (msec): min=22, max=296, avg=37.33, stdev=26.98 00:40:57.561 clat percentiles (msec): 00:40:57.561 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:40:57.561 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:40:57.561 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:40:57.561 | 99.00th=[ 182], 99.50th=[ 201], 99.90th=[ 296], 99.95th=[ 296], 00:40:57.561 | 99.99th=[ 296] 00:40:57.561 bw ( KiB/s): min= 256, max= 2048, per=4.12%, avg=1684.21, stdev=564.85, samples=19 00:40:57.561 iops : min= 64, max= 512, avg=421.05, stdev=141.21, samples=19 00:40:57.561 lat (msec) : 50=97.37%, 250=2.26%, 500=0.38% 00:40:57.561 cpu : usr=97.68%, sys=1.63%, ctx=55, majf=0, minf=30 00:40:57.561 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:40:57.561 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:57.561 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:57.561 issued rwts: total=4256,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:57.561 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:57.561 00:40:57.561 Run status group 0 (all jobs): 00:40:57.561 READ: bw=39.9MiB/s (41.9MB/s), 1700KiB/s-1722KiB/s 
(1740kB/s-1764kB/s), io=400MiB (420MB), run=10001-10032msec 00:40:57.561 11:52:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:40:57.561 11:52:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:40:57.561 11:52:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:40:57.561 11:52:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:40:57.561 11:52:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:40:57.561 11:52:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:57.561 11:52:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:57.561 11:52:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:57.561 11:52:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:57.561 11:52:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:40:57.561 11:52:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:57.561 11:52:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:57.561 11:52:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:57.561 11:52:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:40:57.561 11:52:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:40:57.561 11:52:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:40:57.561 11:52:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:57.561 11:52:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:57.561 11:52:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:57.561 11:52:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:57.561 11:52:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:40:57.561 11:52:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:57.561 11:52:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:57.561 11:52:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:57.561 11:52:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:40:57.561 11:52:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:40:57.561 11:52:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:40:57.561 11:52:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:40:57.561 11:52:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:57.561 11:52:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:57.561 11:52:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:57.561 11:52:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:40:57.561 11:52:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:57.561 11:52:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:57.561 11:52:56 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:57.561 11:52:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:40:57.561 11:52:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:40:57.561 11:52:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:40:57.561 11:52:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:40:57.561 11:52:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:40:57.561 11:52:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:40:57.561 11:52:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:40:57.561 11:52:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:40:57.561 11:52:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:40:57.561 11:52:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:40:57.561 11:52:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:40:57.561 11:52:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:40:57.561 11:52:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:57.561 11:52:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:57.561 bdev_null0 00:40:57.561 11:52:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:57.561 11:52:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:40:57.561 11:52:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:57.561 11:52:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:57.561 11:52:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:57.561 11:52:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:40:57.561 11:52:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:57.561 11:52:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:57.561 11:52:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:57.561 11:52:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:57.561 11:52:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:57.561 11:52:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:57.561 [2024-11-02 11:52:56.571321] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:57.561 11:52:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:57.561 11:52:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:40:57.561 11:52:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:40:57.561 11:52:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:40:57.561 11:52:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:40:57.561 11:52:56 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:40:57.562 11:52:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:57.562 bdev_null1 00:40:57.562 11:52:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:57.562 11:52:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:40:57.562 11:52:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:57.562 11:52:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:57.562 11:52:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:57.562 11:52:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:40:57.562 11:52:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:57.562 11:52:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:57.562 11:52:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:57.562 11:52:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:57.562 11:52:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:57.562 11:52:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:57.562 11:52:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:57.562 11:52:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:40:57.562 11:52:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:40:57.562 11:52:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:40:57.562 11:52:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:40:57.562 11:52:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:40:57.562 11:52:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:57.562 11:52:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:57.562 { 00:40:57.562 "params": { 00:40:57.562 "name": "Nvme$subsystem", 00:40:57.562 "trtype": "$TEST_TRANSPORT", 00:40:57.562 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:57.562 "adrfam": "ipv4", 00:40:57.562 "trsvcid": "$NVMF_PORT", 00:40:57.562 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:57.562 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:57.562 "hdgst": ${hdgst:-false}, 00:40:57.562 "ddgst": ${ddgst:-false} 00:40:57.562 }, 00:40:57.562 "method": "bdev_nvme_attach_controller" 00:40:57.562 } 00:40:57.562 EOF 00:40:57.562 )") 00:40:57.562 11:52:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:57.562 11:52:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:57.562 11:52:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:40:57.562 11:52:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:40:57.562 11:52:56 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:57.562 11:52:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:40:57.562 11:52:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:40:57.562 11:52:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:40:57.562 11:52:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:57.562 11:52:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:40:57.562 11:52:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:40:57.562 11:52:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:40:57.562 11:52:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:40:57.562 11:52:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:57.562 11:52:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:40:57.562 11:52:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:40:57.562 11:52:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:40:57.562 11:52:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:40:57.562 11:52:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:40:57.562 11:52:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:57.562 11:52:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:57.562 { 00:40:57.562 "params": { 00:40:57.562 "name": "Nvme$subsystem", 00:40:57.562 "trtype": "$TEST_TRANSPORT", 00:40:57.562 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:57.562 "adrfam": "ipv4", 00:40:57.562 "trsvcid": "$NVMF_PORT", 00:40:57.562 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:57.562 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:57.562 "hdgst": ${hdgst:-false}, 00:40:57.562 "ddgst": ${ddgst:-false} 00:40:57.562 }, 00:40:57.562 "method": "bdev_nvme_attach_controller" 00:40:57.562 } 00:40:57.562 EOF 00:40:57.562 )") 00:40:57.562 11:52:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:40:57.562 11:52:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:40:57.562 11:52:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:40:57.562 11:52:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
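For reference, the rpc_cmd calls traced above correspond to SPDK's scripts/rpc.py subcommands. A minimal standalone sketch of the same target setup, assuming an nvmf_tgt is already running with the TCP transport created; the rpc.py path is an assumption, while the bdev and subsystem arguments are copied from the trace:

# null bdev: 64 MiB, 512-byte blocks, 16-byte metadata, DIF type 1 (as in the trace above)
./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
# NVMe-oF subsystem with that bdev as a namespace, listening on NVMe/TCP 10.0.0.2:4420
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# the second subsystem in the trace (cnode1 / bdev_null1) is created the same way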
00:40:57.562 11:52:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:40:57.562 11:52:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:57.562 "params": { 00:40:57.562 "name": "Nvme0", 00:40:57.562 "trtype": "tcp", 00:40:57.562 "traddr": "10.0.0.2", 00:40:57.562 "adrfam": "ipv4", 00:40:57.562 "trsvcid": "4420", 00:40:57.562 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:57.562 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:57.562 "hdgst": false, 00:40:57.562 "ddgst": false 00:40:57.562 }, 00:40:57.562 "method": "bdev_nvme_attach_controller" 00:40:57.562 },{ 00:40:57.562 "params": { 00:40:57.562 "name": "Nvme1", 00:40:57.562 "trtype": "tcp", 00:40:57.562 "traddr": "10.0.0.2", 00:40:57.562 "adrfam": "ipv4", 00:40:57.562 "trsvcid": "4420", 00:40:57.562 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:57.562 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:57.562 "hdgst": false, 00:40:57.562 "ddgst": false 00:40:57.562 }, 00:40:57.562 "method": "bdev_nvme_attach_controller" 00:40:57.562 }' 00:40:57.562 11:52:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:40:57.562 11:52:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:40:57.562 11:52:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:40:57.562 11:52:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:57.562 11:52:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:40:57.562 11:52:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:40:57.562 11:52:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:40:57.562 11:52:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:40:57.562 11:52:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:57.562 11:52:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:57.562 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:40:57.562 ... 00:40:57.562 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:40:57.562 ... 
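The generated job file itself is handed to fio over /dev/fd/61, so its exact contents do not appear in the log; the sketch below reconstructs it from the parameters set above (randread, bs=8k,16k,128k, iodepth=8, numjobs=2, runtime=5) and from the filename0/filename1 job-description lines. The bdev names Nvme0n1/Nvme1n1 and the file paths are assumptions; the JSON config is the one printed just above, saved to an ordinary file:

cat > dif_rand.fio <<'EOF'
[global]
; spdk_bdev jobs must run in fio's thread mode
thread=1
rw=randread
; read,write,trim block sizes, matching bs=(R) 8k, (W) 16k, (T) 128k in the job lines above
bs=8k,16k,128k
iodepth=8
numjobs=2
time_based=1
runtime=5

[filename0]
; bdev name is an assumption: the Nvme0 controller from the JSON config exposing Nvme0n1
filename=Nvme0n1

[filename1]
filename=Nvme1n1
EOF

# run through the SPDK fio bdev plugin, as the traced command line does
# (the test passes the JSON and job file via /dev/fd; here they are plain files)
LD_PRELOAD=/path/to/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=./bdev.json dif_rand.fio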
00:40:57.562 fio-3.35 00:40:57.562 Starting 4 threads 00:41:02.830 00:41:02.830 filename0: (groupid=0, jobs=1): err= 0: pid=4047869: Sat Nov 2 11:53:02 2024 00:41:02.830 read: IOPS=1916, BW=15.0MiB/s (15.7MB/s)(74.9MiB/5004msec) 00:41:02.830 slat (nsec): min=4693, max=67616, avg=14085.98, stdev=7173.39 00:41:02.830 clat (usec): min=1001, max=10595, avg=4126.21, stdev=638.45 00:41:02.830 lat (usec): min=1010, max=10609, avg=4140.30, stdev=638.46 00:41:02.830 clat percentiles (usec): 00:41:02.830 | 1.00th=[ 2769], 5.00th=[ 3326], 10.00th=[ 3523], 20.00th=[ 3785], 00:41:02.830 | 30.00th=[ 3949], 40.00th=[ 4047], 50.00th=[ 4080], 60.00th=[ 4146], 00:41:02.830 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4621], 95.00th=[ 5538], 00:41:02.830 | 99.00th=[ 6456], 99.50th=[ 6783], 99.90th=[ 7767], 99.95th=[10421], 00:41:02.830 | 99.99th=[10552] 00:41:02.830 bw ( KiB/s): min=14800, max=15904, per=25.21%, avg=15344.10, stdev=292.99, samples=10 00:41:02.830 iops : min= 1850, max= 1988, avg=1918.00, stdev=36.64, samples=10 00:41:02.830 lat (msec) : 2=0.15%, 4=36.16%, 10=63.65%, 20=0.05% 00:41:02.830 cpu : usr=95.10%, sys=4.30%, ctx=20, majf=0, minf=9 00:41:02.830 IO depths : 1=0.1%, 2=8.5%, 4=64.0%, 8=27.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:02.830 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:02.830 complete : 0=0.0%, 4=92.2%, 8=7.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:02.830 issued rwts: total=9592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:02.830 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:02.830 filename0: (groupid=0, jobs=1): err= 0: pid=4047870: Sat Nov 2 11:53:02 2024 00:41:02.830 read: IOPS=1912, BW=14.9MiB/s (15.7MB/s)(74.8MiB/5004msec) 00:41:02.830 slat (nsec): min=3860, max=60387, avg=16586.71, stdev=7685.77 00:41:02.830 clat (usec): min=810, max=9158, avg=4127.42, stdev=563.37 00:41:02.830 lat (usec): min=830, max=9181, avg=4144.01, stdev=563.29 00:41:02.830 clat percentiles (usec): 00:41:02.830 | 1.00th=[ 2802], 5.00th=[ 3392], 10.00th=[ 3621], 20.00th=[ 3818], 00:41:02.830 | 30.00th=[ 3949], 40.00th=[ 4047], 50.00th=[ 4080], 60.00th=[ 4146], 00:41:02.830 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4621], 95.00th=[ 5276], 00:41:02.830 | 99.00th=[ 6128], 99.50th=[ 6456], 99.90th=[ 6915], 99.95th=[ 7111], 00:41:02.830 | 99.99th=[ 9110] 00:41:02.830 bw ( KiB/s): min=15040, max=15600, per=25.13%, avg=15296.00, stdev=176.08, samples=10 00:41:02.830 iops : min= 1880, max= 1950, avg=1912.00, stdev=22.01, samples=10 00:41:02.830 lat (usec) : 1000=0.03% 00:41:02.830 lat (msec) : 2=0.14%, 4=34.72%, 10=65.11% 00:41:02.830 cpu : usr=95.14%, sys=4.26%, ctx=34, majf=0, minf=9 00:41:02.830 IO depths : 1=0.1%, 2=11.9%, 4=60.7%, 8=27.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:02.830 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:02.830 complete : 0=0.0%, 4=92.1%, 8=7.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:02.830 issued rwts: total=9568,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:02.830 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:02.830 filename1: (groupid=0, jobs=1): err= 0: pid=4047871: Sat Nov 2 11:53:02 2024 00:41:02.830 read: IOPS=1898, BW=14.8MiB/s (15.6MB/s)(74.2MiB/5004msec) 00:41:02.830 slat (nsec): min=5270, max=98935, avg=17492.61, stdev=8754.81 00:41:02.830 clat (usec): min=802, max=9325, avg=4156.05, stdev=640.61 00:41:02.830 lat (usec): min=820, max=9354, avg=4173.54, stdev=640.60 00:41:02.830 clat percentiles (usec): 00:41:02.830 | 1.00th=[ 2802], 5.00th=[ 3326], 10.00th=[ 3556], 
20.00th=[ 3785], 00:41:02.830 | 30.00th=[ 3949], 40.00th=[ 4047], 50.00th=[ 4113], 60.00th=[ 4146], 00:41:02.830 | 70.00th=[ 4228], 80.00th=[ 4359], 90.00th=[ 4883], 95.00th=[ 5538], 00:41:02.830 | 99.00th=[ 6521], 99.50th=[ 6849], 99.90th=[ 7439], 99.95th=[ 7439], 00:41:02.830 | 99.99th=[ 9372] 00:41:02.830 bw ( KiB/s): min=14672, max=15776, per=24.96%, avg=15190.40, stdev=376.08, samples=10 00:41:02.830 iops : min= 1834, max= 1972, avg=1898.80, stdev=47.01, samples=10 00:41:02.830 lat (usec) : 1000=0.03% 00:41:02.830 lat (msec) : 2=0.19%, 4=35.48%, 10=64.30% 00:41:02.830 cpu : usr=94.60%, sys=4.84%, ctx=32, majf=0, minf=9 00:41:02.830 IO depths : 1=0.1%, 2=8.2%, 4=64.2%, 8=27.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:02.830 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:02.830 complete : 0=0.0%, 4=92.4%, 8=7.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:02.830 issued rwts: total=9502,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:02.830 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:02.830 filename1: (groupid=0, jobs=1): err= 0: pid=4047872: Sat Nov 2 11:53:02 2024 00:41:02.830 read: IOPS=1880, BW=14.7MiB/s (15.4MB/s)(73.5MiB/5003msec) 00:41:02.830 slat (nsec): min=4129, max=67627, avg=16342.37, stdev=9151.11 00:41:02.830 clat (usec): min=774, max=9976, avg=4199.15, stdev=653.80 00:41:02.830 lat (usec): min=788, max=9985, avg=4215.49, stdev=653.36 00:41:02.830 clat percentiles (usec): 00:41:02.830 | 1.00th=[ 2704], 5.00th=[ 3458], 10.00th=[ 3654], 20.00th=[ 3851], 00:41:02.830 | 30.00th=[ 3982], 40.00th=[ 4047], 50.00th=[ 4113], 60.00th=[ 4178], 00:41:02.830 | 70.00th=[ 4228], 80.00th=[ 4359], 90.00th=[ 4883], 95.00th=[ 5669], 00:41:02.830 | 99.00th=[ 6456], 99.50th=[ 6915], 99.90th=[ 7373], 99.95th=[ 8848], 00:41:02.830 | 99.99th=[10028] 00:41:02.830 bw ( KiB/s): min=14432, max=15440, per=24.72%, avg=15043.00, stdev=262.62, samples=10 00:41:02.830 iops : min= 1804, max= 1930, avg=1880.30, stdev=32.81, samples=10 00:41:02.830 lat (usec) : 1000=0.04% 00:41:02.830 lat (msec) : 2=0.20%, 4=31.25%, 10=68.51% 00:41:02.830 cpu : usr=95.60%, sys=3.88%, ctx=12, majf=0, minf=9 00:41:02.830 IO depths : 1=0.2%, 2=9.9%, 4=62.0%, 8=27.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:02.830 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:02.830 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:02.830 issued rwts: total=9408,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:02.830 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:02.830 00:41:02.830 Run status group 0 (all jobs): 00:41:02.830 READ: bw=59.4MiB/s (62.3MB/s), 14.7MiB/s-15.0MiB/s (15.4MB/s-15.7MB/s), io=297MiB (312MB), run=5003-5004msec 00:41:02.830 11:53:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:41:02.830 11:53:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:41:02.830 11:53:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:02.830 11:53:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:02.830 11:53:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:41:02.830 11:53:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:02.830 11:53:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:02.830 11:53:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:02.830 11:53:02 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:02.830 11:53:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:02.830 11:53:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:02.830 11:53:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:02.831 11:53:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:02.831 11:53:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:02.831 11:53:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:41:02.831 11:53:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:41:02.831 11:53:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:02.831 11:53:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:02.831 11:53:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:02.831 11:53:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:02.831 11:53:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:41:02.831 11:53:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:02.831 11:53:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:02.831 11:53:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:02.831 00:41:02.831 real 0m24.164s 00:41:02.831 user 4m28.107s 00:41:02.831 sys 0m8.306s 00:41:02.831 11:53:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1128 -- # xtrace_disable 00:41:02.831 11:53:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:02.831 ************************************ 00:41:02.831 END TEST fio_dif_rand_params 00:41:02.831 ************************************ 00:41:02.831 11:53:02 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:41:02.831 11:53:02 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:41:02.831 11:53:02 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:41:02.831 11:53:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:02.831 ************************************ 00:41:02.831 START TEST fio_dif_digest 00:41:02.831 ************************************ 00:41:02.831 11:53:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1127 -- # fio_dif_digest 00:41:02.831 11:53:02 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:41:02.831 11:53:02 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:41:02.831 11:53:02 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:41:02.831 11:53:02 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:41:02.831 11:53:02 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:41:02.831 11:53:02 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:41:02.831 11:53:02 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:41:02.831 11:53:02 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:41:02.831 11:53:02 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:41:02.831 11:53:02 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:41:02.831 11:53:02 
nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:41:02.831 11:53:02 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:41:02.831 11:53:02 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:41:02.831 11:53:02 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:41:02.831 11:53:02 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:41:02.831 11:53:02 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:41:02.831 11:53:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:02.831 11:53:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:02.831 bdev_null0 00:41:02.831 11:53:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:02.831 11:53:02 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:02.831 11:53:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:02.831 11:53:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:02.831 11:53:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:02.831 11:53:02 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:02.831 11:53:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:02.831 11:53:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:02.831 11:53:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:02.831 11:53:02 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:02.831 11:53:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:02.831 11:53:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:02.831 [2024-11-02 11:53:02.937496] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:02.831 11:53:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:02.831 11:53:02 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:41:02.831 11:53:02 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:41:02.831 11:53:02 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:41:02.831 11:53:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:41:02.831 11:53:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:41:02.831 11:53:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:02.831 11:53:02 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:02.831 11:53:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:02.831 { 00:41:02.831 "params": { 00:41:02.831 "name": "Nvme$subsystem", 00:41:02.831 "trtype": "$TEST_TRANSPORT", 00:41:02.831 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:02.831 "adrfam": "ipv4", 00:41:02.831 "trsvcid": "$NVMF_PORT", 00:41:02.831 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:02.831 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:02.831 "hdgst": ${hdgst:-false}, 00:41:02.831 "ddgst": 
${ddgst:-false} 00:41:02.831 }, 00:41:02.831 "method": "bdev_nvme_attach_controller" 00:41:02.831 } 00:41:02.831 EOF 00:41:02.831 )") 00:41:02.831 11:53:02 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:41:02.831 11:53:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:02.831 11:53:02 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:41:02.831 11:53:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:41:02.831 11:53:02 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:41:02.831 11:53:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:02.831 11:53:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local sanitizers 00:41:02.831 11:53:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:02.831 11:53:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # shift 00:41:02.831 11:53:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # local asan_lib= 00:41:02.831 11:53:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:41:02.831 11:53:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:41:02.831 11:53:02 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:41:02.831 11:53:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:02.831 11:53:02 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:41:02.831 11:53:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libasan 00:41:02.831 11:53:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:41:02.831 11:53:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:41:02.831 11:53:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:41:02.831 11:53:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:02.831 "params": { 00:41:02.831 "name": "Nvme0", 00:41:02.831 "trtype": "tcp", 00:41:02.831 "traddr": "10.0.0.2", 00:41:02.831 "adrfam": "ipv4", 00:41:02.831 "trsvcid": "4420", 00:41:02.831 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:02.831 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:02.831 "hdgst": true, 00:41:02.831 "ddgst": true 00:41:02.831 }, 00:41:02.831 "method": "bdev_nvme_attach_controller" 00:41:02.831 }' 00:41:02.831 11:53:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib= 00:41:02.831 11:53:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:41:02.831 11:53:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:41:02.831 11:53:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:02.831 11:53:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:41:02.831 11:53:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:41:02.831 11:53:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib= 00:41:02.831 11:53:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:41:02.831 11:53:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:02.831 11:53:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:02.831 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:41:02.831 ... 
00:41:02.831 fio-3.35 00:41:02.831 Starting 3 threads 00:41:15.041 00:41:15.041 filename0: (groupid=0, jobs=1): err= 0: pid=4048760: Sat Nov 2 11:53:13 2024 00:41:15.041 read: IOPS=191, BW=23.9MiB/s (25.1MB/s)(241MiB/10050msec) 00:41:15.041 slat (nsec): min=5901, max=77273, avg=15324.11, stdev=4601.65 00:41:15.041 clat (usec): min=7434, max=59700, avg=15628.33, stdev=2517.18 00:41:15.041 lat (usec): min=7440, max=59713, avg=15643.66, stdev=2517.49 00:41:15.041 clat percentiles (usec): 00:41:15.041 | 1.00th=[10683], 5.00th=[13304], 10.00th=[13960], 20.00th=[14484], 00:41:15.041 | 30.00th=[14877], 40.00th=[15270], 50.00th=[15533], 60.00th=[15926], 00:41:15.041 | 70.00th=[16188], 80.00th=[16712], 90.00th=[17171], 95.00th=[17695], 00:41:15.041 | 99.00th=[18482], 99.50th=[19006], 99.90th=[57934], 99.95th=[59507], 00:41:15.041 | 99.99th=[59507] 00:41:15.041 bw ( KiB/s): min=22016, max=26880, per=34.31%, avg=24588.80, stdev=1052.16, samples=20 00:41:15.041 iops : min= 172, max= 210, avg=192.10, stdev= 8.22, samples=20 00:41:15.041 lat (msec) : 10=0.26%, 20=99.48%, 100=0.26% 00:41:15.041 cpu : usr=93.27%, sys=6.25%, ctx=23, majf=0, minf=149 00:41:15.041 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:15.041 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:15.041 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:15.041 issued rwts: total=1924,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:15.041 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:15.041 filename0: (groupid=0, jobs=1): err= 0: pid=4048761: Sat Nov 2 11:53:13 2024 00:41:15.041 read: IOPS=185, BW=23.1MiB/s (24.2MB/s)(232MiB/10048msec) 00:41:15.041 slat (nsec): min=5040, max=81238, avg=18715.20, stdev=5987.18 00:41:15.041 clat (usec): min=9020, max=57661, avg=16167.84, stdev=3679.94 00:41:15.041 lat (usec): min=9042, max=57683, avg=16186.55, stdev=3680.17 00:41:15.041 clat percentiles (usec): 00:41:15.041 | 1.00th=[11731], 5.00th=[13698], 10.00th=[14222], 20.00th=[14877], 00:41:15.041 | 30.00th=[15270], 40.00th=[15533], 50.00th=[15795], 60.00th=[16188], 00:41:15.041 | 70.00th=[16581], 80.00th=[16909], 90.00th=[17695], 95.00th=[18220], 00:41:15.041 | 99.00th=[19792], 99.50th=[54789], 99.90th=[56886], 99.95th=[57410], 00:41:15.041 | 99.99th=[57410] 00:41:15.041 bw ( KiB/s): min=21504, max=25344, per=33.17%, avg=23769.60, stdev=1099.61, samples=20 00:41:15.041 iops : min= 168, max= 198, avg=185.70, stdev= 8.59, samples=20 00:41:15.041 lat (msec) : 10=0.11%, 20=98.92%, 50=0.27%, 100=0.70% 00:41:15.041 cpu : usr=94.14%, sys=5.35%, ctx=27, majf=0, minf=167 00:41:15.041 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:15.041 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:15.041 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:15.041 issued rwts: total=1859,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:15.041 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:15.041 filename0: (groupid=0, jobs=1): err= 0: pid=4048762: Sat Nov 2 11:53:13 2024 00:41:15.041 read: IOPS=183, BW=22.9MiB/s (24.1MB/s)(231MiB/10049msec) 00:41:15.041 slat (nsec): min=5728, max=80020, avg=14908.66, stdev=4281.43 00:41:15.041 clat (usec): min=9001, max=59324, avg=16306.87, stdev=2532.93 00:41:15.041 lat (usec): min=9015, max=59336, avg=16321.78, stdev=2532.97 00:41:15.041 clat percentiles (usec): 00:41:15.041 | 1.00th=[11076], 5.00th=[13960], 10.00th=[14615], 20.00th=[15270], 
00:41:15.041 | 30.00th=[15664], 40.00th=[15926], 50.00th=[16319], 60.00th=[16581], 00:41:15.042 | 70.00th=[16909], 80.00th=[17433], 90.00th=[17957], 95.00th=[18482], 00:41:15.042 | 99.00th=[19530], 99.50th=[20055], 99.90th=[58983], 99.95th=[59507], 00:41:15.042 | 99.99th=[59507] 00:41:15.042 bw ( KiB/s): min=21504, max=24832, per=32.88%, avg=23562.40, stdev=958.99, samples=20 00:41:15.042 iops : min= 168, max= 194, avg=184.05, stdev= 7.49, samples=20 00:41:15.042 lat (msec) : 10=0.27%, 20=99.13%, 50=0.38%, 100=0.22% 00:41:15.042 cpu : usr=93.66%, sys=5.86%, ctx=34, majf=0, minf=205 00:41:15.042 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:15.042 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:15.042 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:15.042 issued rwts: total=1844,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:15.042 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:15.042 00:41:15.042 Run status group 0 (all jobs): 00:41:15.042 READ: bw=70.0MiB/s (73.4MB/s), 22.9MiB/s-23.9MiB/s (24.1MB/s-25.1MB/s), io=703MiB (738MB), run=10048-10050msec 00:41:15.042 11:53:14 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:41:15.042 11:53:14 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:41:15.042 11:53:14 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:41:15.042 11:53:14 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:15.042 11:53:14 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:41:15.042 11:53:14 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:15.042 11:53:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:15.042 11:53:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:15.042 11:53:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:15.042 11:53:14 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:15.042 11:53:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:15.042 11:53:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:15.042 11:53:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:15.042 00:41:15.042 real 0m11.183s 00:41:15.042 user 0m29.469s 00:41:15.042 sys 0m2.042s 00:41:15.042 11:53:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:41:15.042 11:53:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:15.042 ************************************ 00:41:15.042 END TEST fio_dif_digest 00:41:15.042 ************************************ 00:41:15.042 11:53:14 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:41:15.042 11:53:14 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:41:15.042 11:53:14 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:15.042 11:53:14 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:41:15.042 11:53:14 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:15.042 11:53:14 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:41:15.042 11:53:14 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:15.042 11:53:14 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:15.042 rmmod nvme_tcp 00:41:15.042 rmmod nvme_fabrics 00:41:15.042 rmmod nvme_keyring 00:41:15.042 11:53:14 nvmf_dif -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:15.042 11:53:14 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:41:15.042 11:53:14 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:41:15.042 11:53:14 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 4042699 ']' 00:41:15.042 11:53:14 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 4042699 00:41:15.042 11:53:14 nvmf_dif -- common/autotest_common.sh@952 -- # '[' -z 4042699 ']' 00:41:15.042 11:53:14 nvmf_dif -- common/autotest_common.sh@956 -- # kill -0 4042699 00:41:15.042 11:53:14 nvmf_dif -- common/autotest_common.sh@957 -- # uname 00:41:15.042 11:53:14 nvmf_dif -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:41:15.042 11:53:14 nvmf_dif -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4042699 00:41:15.042 11:53:14 nvmf_dif -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:41:15.042 11:53:14 nvmf_dif -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:41:15.042 11:53:14 nvmf_dif -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4042699' 00:41:15.042 killing process with pid 4042699 00:41:15.042 11:53:14 nvmf_dif -- common/autotest_common.sh@971 -- # kill 4042699 00:41:15.042 11:53:14 nvmf_dif -- common/autotest_common.sh@976 -- # wait 4042699 00:41:15.042 11:53:14 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:41:15.042 11:53:14 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:41:15.303 Waiting for block devices as requested 00:41:15.303 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:41:15.303 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:41:15.562 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:41:15.562 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:41:15.562 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:41:15.562 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:41:15.820 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:41:15.820 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:41:15.820 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:41:15.820 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:41:16.078 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:41:16.078 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:41:16.078 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:41:16.078 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:41:16.336 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:41:16.336 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:41:16.336 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:41:16.336 11:53:16 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:16.336 11:53:16 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:16.336 11:53:16 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:41:16.336 11:53:16 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:41:16.336 11:53:16 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:16.336 11:53:16 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:41:16.336 11:53:16 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:16.336 11:53:16 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:16.336 11:53:16 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:16.336 11:53:16 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:16.336 11:53:16 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:18.874 11:53:18 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:18.874 
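For reference, the fio_dif_digest run traced above reduces to the short RPC/fio sequence below. This is a minimal sketch, assuming a running nvmf_tgt, the rpc_cmd and fio_bdev helpers used throughout this log (fio_bdev wraps /usr/src/fio/fio with the build/fio/spdk_bdev plugin via LD_PRELOAD), and the same 10.0.0.2:4420 listener; it recaps the traced commands rather than replacing target/dif.sh.

  # Null bdev: 64 MB, 512-byte blocks, 16-byte metadata, DIF type 3 (as in the trace above)
  rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
  # Export it over NVMe/TCP
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # Drive it with fio through the spdk_bdev ioengine; /dev/fd/62 carries the generated JSON
  # (bdev_nvme_attach_controller with "hdgst": true, "ddgst": true), /dev/fd/61 the fio job file
  fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
  # Teardown, as logged at the end of the test
  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  rpc_cmd bdev_null_delete bdev_null0
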
00:41:18.874 real 1m6.663s 00:41:18.874 user 6m25.111s 00:41:18.874 sys 0m19.388s 00:41:18.874 11:53:18 nvmf_dif -- common/autotest_common.sh@1128 -- # xtrace_disable 00:41:18.874 11:53:18 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:18.874 ************************************ 00:41:18.874 END TEST nvmf_dif 00:41:18.874 ************************************ 00:41:18.874 11:53:18 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:41:18.874 11:53:18 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:41:18.874 11:53:18 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:41:18.874 11:53:18 -- common/autotest_common.sh@10 -- # set +x 00:41:18.874 ************************************ 00:41:18.874 START TEST nvmf_abort_qd_sizes 00:41:18.874 ************************************ 00:41:18.874 11:53:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:41:18.874 * Looking for test storage... 00:41:18.874 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:18.874 11:53:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:41:18.874 11:53:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lcov --version 00:41:18.874 11:53:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:41:18.874 11:53:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:41:18.874 11:53:18 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:18.874 11:53:18 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:18.874 11:53:18 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:18.874 11:53:18 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:41:18.874 11:53:18 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:41:18.874 11:53:18 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:41:18.874 11:53:18 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:41:18.874 11:53:18 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:41:18.874 11:53:18 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:41:18.874 11:53:18 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:41:18.874 11:53:18 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:18.874 11:53:18 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:41:18.874 11:53:18 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:41:18.874 11:53:18 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:18.874 11:53:18 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:18.874 11:53:18 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:41:18.874 11:53:18 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:41:18.874 11:53:18 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:18.874 11:53:18 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:41:18.874 11:53:18 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:41:18.874 11:53:18 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:41:18.874 11:53:18 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:41:18.874 11:53:18 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:18.874 11:53:18 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:41:18.874 11:53:18 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:41:18.874 11:53:18 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:18.874 11:53:18 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:18.874 11:53:18 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:41:18.874 11:53:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:18.874 11:53:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:41:18.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:18.874 --rc genhtml_branch_coverage=1 00:41:18.874 --rc genhtml_function_coverage=1 00:41:18.874 --rc genhtml_legend=1 00:41:18.874 --rc geninfo_all_blocks=1 00:41:18.874 --rc geninfo_unexecuted_blocks=1 00:41:18.874 00:41:18.874 ' 00:41:18.874 11:53:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:41:18.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:18.874 --rc genhtml_branch_coverage=1 00:41:18.874 --rc genhtml_function_coverage=1 00:41:18.874 --rc genhtml_legend=1 00:41:18.874 --rc geninfo_all_blocks=1 00:41:18.874 --rc geninfo_unexecuted_blocks=1 00:41:18.874 00:41:18.874 ' 00:41:18.874 11:53:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:41:18.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:18.874 --rc genhtml_branch_coverage=1 00:41:18.874 --rc genhtml_function_coverage=1 00:41:18.874 --rc genhtml_legend=1 00:41:18.874 --rc geninfo_all_blocks=1 00:41:18.874 --rc geninfo_unexecuted_blocks=1 00:41:18.874 00:41:18.874 ' 00:41:18.874 11:53:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:41:18.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:18.874 --rc genhtml_branch_coverage=1 00:41:18.874 --rc genhtml_function_coverage=1 00:41:18.874 --rc genhtml_legend=1 00:41:18.874 --rc geninfo_all_blocks=1 00:41:18.874 --rc geninfo_unexecuted_blocks=1 00:41:18.875 00:41:18.875 ' 00:41:18.875 11:53:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:18.875 11:53:18 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:41:18.875 11:53:18 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:18.875 11:53:18 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:18.875 11:53:18 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:18.875 11:53:18 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:18.875 11:53:18 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:41:18.875 11:53:18 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:18.875 11:53:18 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:18.875 11:53:18 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:18.875 11:53:18 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:18.875 11:53:18 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:18.875 11:53:18 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:41:18.875 11:53:18 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:41:18.875 11:53:18 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:18.875 11:53:18 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:18.875 11:53:18 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:18.875 11:53:18 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:18.875 11:53:18 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:18.875 11:53:18 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:41:18.875 11:53:18 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:18.875 11:53:18 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:18.875 11:53:18 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:18.875 11:53:18 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:18.875 11:53:18 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:18.875 11:53:18 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:18.875 11:53:18 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:41:18.875 11:53:18 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:18.875 11:53:18 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:41:18.875 11:53:18 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:18.875 11:53:18 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:18.875 11:53:18 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:18.875 11:53:18 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:18.875 11:53:18 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:18.875 11:53:18 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:18.875 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:18.875 11:53:18 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:18.875 11:53:18 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:18.875 11:53:18 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:18.875 11:53:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:41:18.875 11:53:18 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:18.875 11:53:18 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:18.875 11:53:18 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:18.875 11:53:18 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:18.875 11:53:18 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:18.875 11:53:18 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:18.875 11:53:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:18.875 11:53:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:18.875 11:53:18 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:18.875 11:53:18 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:18.875 11:53:18 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:41:18.875 11:53:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:20.779 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:20.779 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:41:20.779 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:20.779 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:20.779 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:20.779 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:20.779 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:20.779 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:41:20.779 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:20.779 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:41:20.779 11:53:21 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:41:20.779 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:41:20.779 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:41:20.779 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:41:20.779 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:41:20.779 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:20.779 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:20.779 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:20.779 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:20.779 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:20.779 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:20.779 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:20.779 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:20.779 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:20.779 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:20.779 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:20.779 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:20.779 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:20.779 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:20.779 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:20.779 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:20.779 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:20.780 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:20.780 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:20.780 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:41:20.780 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:41:20.780 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:20.780 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:20.780 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:20.780 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:20.780 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:20.780 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:20.780 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:41:20.780 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:41:20.780 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:20.780 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:20.780 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:20.780 11:53:21 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:20.780 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:20.780 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:20.780 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:20.780 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:20.780 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:20.780 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:20.780 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:20.780 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:20.780 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:20.780 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:20.780 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:20.780 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:41:20.780 Found net devices under 0000:0a:00.0: cvl_0_0 00:41:20.780 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:20.780 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:20.780 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:20.780 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:20.780 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:20.780 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:20.780 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:20.780 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:20.780 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:41:20.780 Found net devices under 0000:0a:00.1: cvl_0_1 00:41:20.780 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:20.780 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:20.780 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:41:20.780 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:20.780 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:20.780 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:20.780 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:20.780 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:20.780 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:20.780 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:20.780 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:20.780 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:20.780 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:20.780 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:20.780 11:53:21 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:20.780 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:20.780 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:20.780 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:20.780 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:20.780 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:20.780 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:20.780 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:20.780 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:20.780 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:20.780 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:20.780 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:20.780 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:20.780 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:20.780 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:20.780 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:20.780 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.351 ms 00:41:20.780 00:41:20.780 --- 10.0.0.2 ping statistics --- 00:41:20.780 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:20.780 rtt min/avg/max/mdev = 0.351/0.351/0.351/0.000 ms 00:41:20.780 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:20.780 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:20.780 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:41:20.780 00:41:20.780 --- 10.0.0.1 ping statistics --- 00:41:20.780 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:20.780 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:41:20.780 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:20.780 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:41:20.780 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:41:20.780 11:53:21 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:41:22.159 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:41:22.159 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:41:22.159 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:41:22.159 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:41:22.159 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:41:22.159 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:41:22.159 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:41:22.159 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:41:22.159 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:41:22.159 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:41:22.159 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:41:22.159 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:41:22.159 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:41:22.159 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:41:22.159 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:41:22.159 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:41:23.095 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:41:23.095 11:53:23 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:23.095 11:53:23 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:23.095 11:53:23 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:23.095 11:53:23 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:23.095 11:53:23 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:23.095 11:53:23 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:23.095 11:53:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:41:23.095 11:53:23 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:23.095 11:53:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:23.095 11:53:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:23.095 11:53:23 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=4053552 00:41:23.095 11:53:23 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:41:23.095 11:53:23 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 4053552 00:41:23.095 11:53:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # '[' -z 4053552 ']' 00:41:23.095 11:53:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:23.095 11:53:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # local max_retries=100 00:41:23.095 11:53:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:41:23.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:23.095 11:53:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # xtrace_disable 00:41:23.095 11:53:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:23.095 [2024-11-02 11:53:23.467710] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:41:23.095 [2024-11-02 11:53:23.467782] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:23.353 [2024-11-02 11:53:23.550647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:23.353 [2024-11-02 11:53:23.602943] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:23.353 [2024-11-02 11:53:23.603005] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:23.353 [2024-11-02 11:53:23.603031] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:23.353 [2024-11-02 11:53:23.603052] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:23.353 [2024-11-02 11:53:23.603069] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:23.353 [2024-11-02 11:53:23.604809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:23.353 [2024-11-02 11:53:23.604859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:41:23.353 [2024-11-02 11:53:23.604975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:41:23.353 [2024-11-02 11:53:23.604979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:23.353 11:53:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:41:23.353 11:53:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@866 -- # return 0 00:41:23.353 11:53:23 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:23.353 11:53:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:23.353 11:53:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:23.354 11:53:23 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:23.354 11:53:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:41:23.354 11:53:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:41:23.354 11:53:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:41:23.354 11:53:23 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:41:23.354 11:53:23 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:41:23.354 11:53:23 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:88:00.0 ]] 00:41:23.354 11:53:23 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:41:23.354 11:53:23 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:41:23.354 11:53:23 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:41:23.354 11:53:23 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:41:23.354 
11:53:23 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:41:23.354 11:53:23 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:41:23.354 11:53:23 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:41:23.354 11:53:23 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:88:00.0 00:41:23.354 11:53:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:41:23.354 11:53:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:41:23.354 11:53:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:41:23.354 11:53:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:41:23.354 11:53:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:41:23.354 11:53:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:23.612 ************************************ 00:41:23.612 START TEST spdk_target_abort 00:41:23.612 ************************************ 00:41:23.612 11:53:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1127 -- # spdk_target 00:41:23.612 11:53:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:41:23.612 11:53:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:41:23.612 11:53:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:23.612 11:53:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:26.901 spdk_targetn1 00:41:26.901 11:53:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:26.901 11:53:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:26.901 11:53:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:26.901 11:53:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:26.901 [2024-11-02 11:53:26.609287] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:26.901 11:53:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:26.901 11:53:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:41:26.901 11:53:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:26.901 11:53:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:26.901 11:53:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:26.901 11:53:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:41:26.901 11:53:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:26.901 11:53:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:26.901 11:53:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:26.902 11:53:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:41:26.902 11:53:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:26.902 11:53:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:26.902 [2024-11-02 11:53:26.655665] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:26.902 11:53:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:26.902 11:53:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:41:26.902 11:53:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:41:26.902 11:53:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:41:26.902 11:53:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:41:26.902 11:53:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:41:26.902 11:53:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:41:26.902 11:53:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:41:26.902 11:53:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:41:26.902 11:53:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:41:26.902 11:53:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:26.902 11:53:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:41:26.902 11:53:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:26.902 11:53:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:41:26.902 11:53:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:26.902 11:53:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:41:26.902 11:53:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:26.902 11:53:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:41:26.902 11:53:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:26.902 11:53:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:26.902 11:53:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:41:26.902 11:53:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:30.183 Initializing NVMe Controllers 00:41:30.183 Attached to NVMe over Fabrics controller at 
10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:41:30.183 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:41:30.183 Initialization complete. Launching workers. 00:41:30.183 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12067, failed: 0 00:41:30.183 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1405, failed to submit 10662 00:41:30.183 success 857, unsuccessful 548, failed 0 00:41:30.183 11:53:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:41:30.183 11:53:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:33.500 Initializing NVMe Controllers 00:41:33.500 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:41:33.500 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:41:33.500 Initialization complete. Launching workers. 00:41:33.500 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8538, failed: 0 00:41:33.500 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1237, failed to submit 7301 00:41:33.500 success 308, unsuccessful 929, failed 0 00:41:33.500 11:53:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:41:33.500 11:53:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:36.786 Initializing NVMe Controllers 00:41:36.786 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:41:36.786 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:41:36.786 Initialization complete. Launching workers. 
00:41:36.786 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31276, failed: 0 00:41:36.786 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2933, failed to submit 28343 00:41:36.786 success 545, unsuccessful 2388, failed 0 00:41:36.786 11:53:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:41:36.786 11:53:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:36.786 11:53:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:36.786 11:53:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:36.786 11:53:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:41:36.786 11:53:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:36.786 11:53:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:37.718 11:53:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:37.718 11:53:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 4053552 00:41:37.718 11:53:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # '[' -z 4053552 ']' 00:41:37.718 11:53:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # kill -0 4053552 00:41:37.718 11:53:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # uname 00:41:37.718 11:53:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:41:37.718 11:53:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4053552 00:41:37.718 11:53:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:41:37.718 11:53:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:41:37.718 11:53:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4053552' 00:41:37.718 killing process with pid 4053552 00:41:37.718 11:53:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@971 -- # kill 4053552 00:41:37.718 11:53:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@976 -- # wait 4053552 00:41:37.976 00:41:37.976 real 0m14.375s 00:41:37.976 user 0m53.752s 00:41:37.976 sys 0m2.823s 00:41:37.976 11:53:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:41:37.976 11:53:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:37.976 ************************************ 00:41:37.976 END TEST spdk_target_abort 00:41:37.976 ************************************ 00:41:37.976 11:53:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:41:37.976 11:53:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:41:37.976 11:53:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:41:37.976 11:53:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:37.976 ************************************ 00:41:37.976 START TEST kernel_target_abort 00:41:37.976 
************************************ 00:41:37.976 11:53:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1127 -- # kernel_target 00:41:37.976 11:53:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:41:37.976 11:53:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:41:37.976 11:53:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:37.976 11:53:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:37.976 11:53:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:37.976 11:53:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:37.976 11:53:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:37.976 11:53:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:37.976 11:53:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:37.976 11:53:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:37.976 11:53:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:37.976 11:53:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:41:37.976 11:53:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:41:37.976 11:53:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:41:37.976 11:53:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:41:37.976 11:53:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:41:37.976 11:53:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:41:37.976 11:53:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:41:37.976 11:53:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:41:37.976 11:53:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:41:37.976 11:53:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:41:37.976 11:53:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:41:39.351 Waiting for block devices as requested 00:41:39.351 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:41:39.351 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:41:39.351 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:41:39.351 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:41:39.608 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:41:39.608 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:41:39.608 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:41:39.608 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:41:39.608 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:41:39.868 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:41:39.868 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:41:39.868 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:41:39.868 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:41:40.128 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:41:40.128 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:41:40.128 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:41:40.128 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:41:40.388 11:53:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:41:40.388 11:53:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:41:40.388 11:53:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:41:40.388 11:53:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:41:40.388 11:53:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:41:40.388 11:53:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:41:40.388 11:53:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:41:40.388 11:53:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:41:40.388 11:53:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:41:40.388 No valid GPT data, bailing 00:41:40.388 11:53:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:41:40.388 11:53:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:41:40.388 11:53:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:41:40.388 11:53:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:41:40.388 11:53:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:41:40.388 11:53:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:41:40.388 11:53:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:41:40.388 11:53:40 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:41:40.388 11:53:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:41:40.388 11:53:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:41:40.388 11:53:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:41:40.388 11:53:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:41:40.388 11:53:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:41:40.388 11:53:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:41:40.388 11:53:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:41:40.388 11:53:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:41:40.388 11:53:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:41:40.388 11:53:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:41:40.648 00:41:40.648 Discovery Log Number of Records 2, Generation counter 2 00:41:40.648 =====Discovery Log Entry 0====== 00:41:40.648 trtype: tcp 00:41:40.648 adrfam: ipv4 00:41:40.648 subtype: current discovery subsystem 00:41:40.648 treq: not specified, sq flow control disable supported 00:41:40.648 portid: 1 00:41:40.648 trsvcid: 4420 00:41:40.648 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:41:40.648 traddr: 10.0.0.1 00:41:40.648 eflags: none 00:41:40.648 sectype: none 00:41:40.648 =====Discovery Log Entry 1====== 00:41:40.648 trtype: tcp 00:41:40.648 adrfam: ipv4 00:41:40.648 subtype: nvme subsystem 00:41:40.648 treq: not specified, sq flow control disable supported 00:41:40.648 portid: 1 00:41:40.648 trsvcid: 4420 00:41:40.648 subnqn: nqn.2016-06.io.spdk:testnqn 00:41:40.648 traddr: 10.0.0.1 00:41:40.648 eflags: none 00:41:40.648 sectype: none 00:41:40.648 11:53:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:41:40.648 11:53:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:41:40.648 11:53:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:41:40.648 11:53:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:41:40.648 11:53:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:41:40.648 11:53:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:41:40.648 11:53:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:41:40.648 11:53:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:41:40.648 11:53:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:41:40.648 11:53:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:40.648 11:53:40 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:41:40.648 11:53:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:40.648 11:53:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:41:40.648 11:53:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:40.648 11:53:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:41:40.648 11:53:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:40.648 11:53:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:41:40.648 11:53:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:40.648 11:53:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:40.648 11:53:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:41:40.648 11:53:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:43.938 Initializing NVMe Controllers 00:41:43.938 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:41:43.938 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:41:43.938 Initialization complete. Launching workers. 00:41:43.938 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 36386, failed: 0 00:41:43.938 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36386, failed to submit 0 00:41:43.938 success 0, unsuccessful 36386, failed 0 00:41:43.938 11:53:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:41:43.938 11:53:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:47.232 Initializing NVMe Controllers 00:41:47.232 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:41:47.232 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:41:47.232 Initialization complete. Launching workers. 
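Note: the rabort helper traced just above assembles the connection string field by field and then reruns SPDK's bundled abort example once per queue depth (4, 24, 64). A condensed sketch of that sweep, using only values visible in the trace (not the script verbatim):

    # queue-depth sweep against the in-kernel NVMe/TCP target
    ABORT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort
    TARGET='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    for qd in 4 24 64; do
        # 50% read / 50% write, 4 KiB I/O, aborting in-flight commands at each depth
        "$ABORT" -q "$qd" -w rw -M 50 -o 4096 -r "$TARGET"
    done

Each run then prints the completed I/O count and the submitted versus failed-to-submit abort counts seen in the log lines that follow.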
00:41:47.232 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 73103, failed: 0 00:41:47.232 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 18430, failed to submit 54673 00:41:47.232 success 0, unsuccessful 18430, failed 0 00:41:47.232 11:53:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:41:47.232 11:53:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:50.522 Initializing NVMe Controllers 00:41:50.522 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:41:50.522 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:41:50.522 Initialization complete. Launching workers. 00:41:50.522 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 63512, failed: 0 00:41:50.522 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 15854, failed to submit 47658 00:41:50.522 success 0, unsuccessful 15854, failed 0 00:41:50.522 11:53:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:41:50.522 11:53:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:41:50.522 11:53:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:41:50.522 11:53:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:41:50.522 11:53:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:41:50.522 11:53:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:41:50.522 11:53:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:41:50.522 11:53:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:41:50.522 11:53:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:41:50.523 11:53:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:41:51.089 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:41:51.089 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:41:51.089 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:41:51.089 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:41:51.089 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:41:51.089 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:41:51.089 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:41:51.089 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:41:51.347 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:41:51.347 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:41:51.347 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:41:51.347 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:41:51.347 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:41:51.347 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:41:51.347 0000:80:04.1 (8086 0e21): ioatdma -> 
vfio-pci 00:41:51.347 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:41:52.286 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:41:52.286 00:41:52.286 real 0m14.418s 00:41:52.286 user 0m5.506s 00:41:52.286 sys 0m3.466s 00:41:52.286 11:53:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:41:52.286 11:53:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:52.286 ************************************ 00:41:52.286 END TEST kernel_target_abort 00:41:52.286 ************************************ 00:41:52.286 11:53:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:41:52.286 11:53:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:41:52.286 11:53:52 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:52.286 11:53:52 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:41:52.286 11:53:52 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:52.286 11:53:52 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:41:52.286 11:53:52 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:52.286 11:53:52 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:52.286 rmmod nvme_tcp 00:41:52.286 rmmod nvme_fabrics 00:41:52.286 rmmod nvme_keyring 00:41:52.286 11:53:52 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:52.286 11:53:52 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:41:52.286 11:53:52 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:41:52.286 11:53:52 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 4053552 ']' 00:41:52.286 11:53:52 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 4053552 00:41:52.286 11:53:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # '[' -z 4053552 ']' 00:41:52.286 11:53:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@956 -- # kill -0 4053552 00:41:52.286 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (4053552) - No such process 00:41:52.286 11:53:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@979 -- # echo 'Process with pid 4053552 is not found' 00:41:52.286 Process with pid 4053552 is not found 00:41:52.286 11:53:52 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:41:52.286 11:53:52 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:41:53.665 Waiting for block devices as requested 00:41:53.665 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:41:53.665 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:41:53.665 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:41:53.924 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:41:53.924 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:41:53.924 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:41:53.924 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:41:54.183 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:41:54.183 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:41:54.183 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:41:54.183 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:41:54.442 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:41:54.442 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:41:54.442 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:41:54.442 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:41:54.702 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:41:54.702 0000:80:04.0 
(8086 0e20): vfio-pci -> ioatdma 00:41:54.702 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:54.702 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:54.702 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:41:54.702 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:41:54.702 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:54.702 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:41:54.702 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:54.702 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:54.702 11:53:55 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:54.702 11:53:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:54.702 11:53:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:57.246 11:53:57 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:57.246 00:41:57.246 real 0m38.311s 00:41:57.246 user 1m1.481s 00:41:57.246 sys 0m9.830s 00:41:57.246 11:53:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:41:57.246 11:53:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:57.246 ************************************ 00:41:57.246 END TEST nvmf_abort_qd_sizes 00:41:57.246 ************************************ 00:41:57.246 11:53:57 -- spdk/autotest.sh@288 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:41:57.246 11:53:57 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:41:57.246 11:53:57 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:41:57.246 11:53:57 -- common/autotest_common.sh@10 -- # set +x 00:41:57.246 ************************************ 00:41:57.246 START TEST keyring_file 00:41:57.246 ************************************ 00:41:57.247 11:53:57 keyring_file -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:41:57.247 * Looking for test storage... 
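Note: before keyring_file starts, the preceding trace unwinds everything the abort suite configured: the configfs tree for nqn.2016-06.io.spdk:testnqn, the nvmet kernel modules, and the initiator-side nvme-tcp/nvme-fabrics modules plus the SPDK iptables rules. A sketch of that teardown assembled from the trace; xtrace does not show redirection targets, so the destination of the bare 'echo 0' is an assumption (most likely the namespace enable attribute):

    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    echo 0 > "$subsys/namespaces/1/enable"   # assumed target of the 'echo 0' seen in the trace
    rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
    rmdir "$subsys/namespaces/1" /sys/kernel/config/nvmet/ports/1 "$subsys"
    modprobe -r nvmet_tcp nvmet
    modprobe -r nvme-tcp nvme-fabrics                      # initiator side, done by nvmftestfini
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the SPDK-added rules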
00:41:57.247 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:41:57.247 11:53:57 keyring_file -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:41:57.247 11:53:57 keyring_file -- common/autotest_common.sh@1691 -- # lcov --version 00:41:57.247 11:53:57 keyring_file -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:41:57.247 11:53:57 keyring_file -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:41:57.247 11:53:57 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:57.247 11:53:57 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:57.247 11:53:57 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:57.247 11:53:57 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:41:57.247 11:53:57 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:41:57.247 11:53:57 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:41:57.247 11:53:57 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:41:57.247 11:53:57 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:41:57.247 11:53:57 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:41:57.247 11:53:57 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:41:57.247 11:53:57 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:57.247 11:53:57 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:41:57.247 11:53:57 keyring_file -- scripts/common.sh@345 -- # : 1 00:41:57.247 11:53:57 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:57.247 11:53:57 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:41:57.247 11:53:57 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:41:57.247 11:53:57 keyring_file -- scripts/common.sh@353 -- # local d=1 00:41:57.247 11:53:57 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:57.247 11:53:57 keyring_file -- scripts/common.sh@355 -- # echo 1 00:41:57.247 11:53:57 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:41:57.247 11:53:57 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:41:57.247 11:53:57 keyring_file -- scripts/common.sh@353 -- # local d=2 00:41:57.247 11:53:57 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:57.247 11:53:57 keyring_file -- scripts/common.sh@355 -- # echo 2 00:41:57.247 11:53:57 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:41:57.247 11:53:57 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:57.247 11:53:57 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:57.247 11:53:57 keyring_file -- scripts/common.sh@368 -- # return 0 00:41:57.247 11:53:57 keyring_file -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:57.247 11:53:57 keyring_file -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:41:57.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:57.247 --rc genhtml_branch_coverage=1 00:41:57.247 --rc genhtml_function_coverage=1 00:41:57.247 --rc genhtml_legend=1 00:41:57.247 --rc geninfo_all_blocks=1 00:41:57.247 --rc geninfo_unexecuted_blocks=1 00:41:57.247 00:41:57.247 ' 00:41:57.247 11:53:57 keyring_file -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:41:57.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:57.247 --rc genhtml_branch_coverage=1 00:41:57.247 --rc genhtml_function_coverage=1 00:41:57.247 --rc genhtml_legend=1 00:41:57.247 --rc geninfo_all_blocks=1 
00:41:57.247 --rc geninfo_unexecuted_blocks=1 00:41:57.247 00:41:57.247 ' 00:41:57.247 11:53:57 keyring_file -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:41:57.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:57.247 --rc genhtml_branch_coverage=1 00:41:57.247 --rc genhtml_function_coverage=1 00:41:57.247 --rc genhtml_legend=1 00:41:57.247 --rc geninfo_all_blocks=1 00:41:57.247 --rc geninfo_unexecuted_blocks=1 00:41:57.247 00:41:57.247 ' 00:41:57.247 11:53:57 keyring_file -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:41:57.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:57.247 --rc genhtml_branch_coverage=1 00:41:57.247 --rc genhtml_function_coverage=1 00:41:57.247 --rc genhtml_legend=1 00:41:57.247 --rc geninfo_all_blocks=1 00:41:57.247 --rc geninfo_unexecuted_blocks=1 00:41:57.247 00:41:57.247 ' 00:41:57.247 11:53:57 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:41:57.247 11:53:57 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:57.247 11:53:57 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:41:57.247 11:53:57 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:57.247 11:53:57 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:57.247 11:53:57 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:57.247 11:53:57 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:57.247 11:53:57 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:57.247 11:53:57 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:57.247 11:53:57 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:57.247 11:53:57 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:57.247 11:53:57 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:57.247 11:53:57 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:57.247 11:53:57 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:41:57.247 11:53:57 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:41:57.247 11:53:57 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:57.247 11:53:57 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:57.247 11:53:57 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:57.247 11:53:57 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:57.247 11:53:57 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:57.247 11:53:57 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:41:57.247 11:53:57 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:57.247 11:53:57 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:57.247 11:53:57 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:57.247 11:53:57 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:57.247 11:53:57 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:57.247 11:53:57 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:57.247 11:53:57 keyring_file -- paths/export.sh@5 -- # export PATH 00:41:57.247 11:53:57 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:57.247 11:53:57 keyring_file -- nvmf/common.sh@51 -- # : 0 00:41:57.247 11:53:57 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:57.247 11:53:57 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:57.247 11:53:57 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:57.247 11:53:57 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:57.247 11:53:57 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:57.247 11:53:57 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:57.247 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:57.247 11:53:57 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:57.247 11:53:57 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:57.247 11:53:57 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:57.247 11:53:57 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:41:57.247 11:53:57 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:41:57.247 11:53:57 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:41:57.247 11:53:57 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:41:57.247 11:53:57 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:41:57.247 11:53:57 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:41:57.247 11:53:57 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:41:57.247 11:53:57 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
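Note: the prep_key calls traced below turn a raw hex key into an NVMe TLS PSK interchange file that the keyring tests can load. A sketch of the same steps; the inline 'python -' body that performs the actual NVMeTLSkey-1 encoding is not visible in the trace and is treated here as an opaque helper:

    # prep_key <name> <hex-key> <digest>, as reconstructed from the xtrace that follows
    key=00112233445566778899aabbccddeeff
    path=$(mktemp)                              # /tmp/tmp.CcCClJsoPs in this run
    format_interchange_psk "$key" 0 > "$path"   # wraps the key as NVMeTLSkey-1:... via an inline python helper (not shown)
    chmod 0600 "$path"                          # keyring_file later rejects the file unless it is 0600

The same sequence is repeated for key1 (112233445566778899aabbccddeeff00), producing the second temp file used by the negative tests further down.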
00:41:57.248 11:53:57 keyring_file -- keyring/common.sh@17 -- # name=key0 00:41:57.248 11:53:57 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:41:57.248 11:53:57 keyring_file -- keyring/common.sh@17 -- # digest=0 00:41:57.248 11:53:57 keyring_file -- keyring/common.sh@18 -- # mktemp 00:41:57.248 11:53:57 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.CcCClJsoPs 00:41:57.248 11:53:57 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:41:57.248 11:53:57 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:41:57.248 11:53:57 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:41:57.248 11:53:57 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:41:57.248 11:53:57 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:41:57.248 11:53:57 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:41:57.248 11:53:57 keyring_file -- nvmf/common.sh@733 -- # python - 00:41:57.248 11:53:57 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.CcCClJsoPs 00:41:57.248 11:53:57 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.CcCClJsoPs 00:41:57.248 11:53:57 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.CcCClJsoPs 00:41:57.248 11:53:57 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:41:57.248 11:53:57 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:41:57.248 11:53:57 keyring_file -- keyring/common.sh@17 -- # name=key1 00:41:57.248 11:53:57 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:41:57.248 11:53:57 keyring_file -- keyring/common.sh@17 -- # digest=0 00:41:57.248 11:53:57 keyring_file -- keyring/common.sh@18 -- # mktemp 00:41:57.248 11:53:57 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.d95B5Q97nc 00:41:57.248 11:53:57 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:41:57.248 11:53:57 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:41:57.248 11:53:57 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:41:57.248 11:53:57 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:41:57.248 11:53:57 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:41:57.248 11:53:57 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:41:57.248 11:53:57 keyring_file -- nvmf/common.sh@733 -- # python - 00:41:57.248 11:53:57 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.d95B5Q97nc 00:41:57.248 11:53:57 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.d95B5Q97nc 00:41:57.248 11:53:57 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.d95B5Q97nc 00:41:57.248 11:53:57 keyring_file -- keyring/file.sh@30 -- # tgtpid=4059313 00:41:57.248 11:53:57 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:41:57.248 11:53:57 keyring_file -- keyring/file.sh@32 -- # waitforlisten 4059313 00:41:57.248 11:53:57 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 4059313 ']' 00:41:57.248 11:53:57 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:57.248 11:53:57 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:41:57.248 11:53:57 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:57.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:57.248 11:53:57 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:41:57.248 11:53:57 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:41:57.248 [2024-11-02 11:53:57.501738] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:41:57.248 [2024-11-02 11:53:57.501825] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4059313 ] 00:41:57.248 [2024-11-02 11:53:57.573661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:57.248 [2024-11-02 11:53:57.627073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:57.507 11:53:57 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:41:57.507 11:53:57 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:41:57.507 11:53:57 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:41:57.507 11:53:57 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:57.507 11:53:57 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:41:57.507 [2024-11-02 11:53:57.898384] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:57.766 null0 00:41:57.766 [2024-11-02 11:53:57.930421] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:41:57.766 [2024-11-02 11:53:57.930891] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:41:57.766 11:53:57 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:57.766 11:53:57 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:41:57.766 11:53:57 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:41:57.766 11:53:57 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:41:57.766 11:53:57 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:41:57.766 11:53:57 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:57.766 11:53:57 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:41:57.766 11:53:57 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:57.766 11:53:57 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:41:57.766 11:53:57 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:57.766 11:53:57 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:41:57.766 [2024-11-02 11:53:57.954466] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:41:57.766 request: 00:41:57.766 { 00:41:57.766 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:41:57.766 "secure_channel": false, 00:41:57.766 "listen_address": { 00:41:57.766 "trtype": "tcp", 00:41:57.766 "traddr": "127.0.0.1", 00:41:57.766 "trsvcid": "4420" 00:41:57.766 }, 00:41:57.766 "method": "nvmf_subsystem_add_listener", 00:41:57.766 "req_id": 1 00:41:57.766 } 00:41:57.766 Got JSON-RPC error response 00:41:57.766 response: 00:41:57.766 { 00:41:57.766 
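Note: bperf_cmd in this block is simply rpc.py pointed at the bdevperf application's RPC socket (/var/tmp/bperf.sock) instead of the default spdk.sock. The key registration and verification the trace performs reduces to:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    SOCK=/var/tmp/bperf.sock
    "$RPC" -s "$SOCK" keyring_file_add_key key0 /tmp/tmp.CcCClJsoPs
    "$RPC" -s "$SOCK" keyring_file_add_key key1 /tmp/tmp.d95B5Q97nc
    # read back the registered path and reference count of one key
    "$RPC" -s "$SOCK" keyring_get_keys | jq '.[] | select(.name == "key0")'

The refcnt checks that follow expect 1 for an idle key and 2 once a controller has been attached with --psk key0.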
"code": -32602, 00:41:57.766 "message": "Invalid parameters" 00:41:57.766 } 00:41:57.766 11:53:57 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:41:57.766 11:53:57 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:41:57.766 11:53:57 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:41:57.766 11:53:57 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:41:57.766 11:53:57 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:41:57.766 11:53:57 keyring_file -- keyring/file.sh@47 -- # bperfpid=4059328 00:41:57.766 11:53:57 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:41:57.766 11:53:57 keyring_file -- keyring/file.sh@49 -- # waitforlisten 4059328 /var/tmp/bperf.sock 00:41:57.766 11:53:57 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 4059328 ']' 00:41:57.766 11:53:57 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:41:57.766 11:53:57 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:41:57.766 11:53:57 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:41:57.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:41:57.766 11:53:57 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:41:57.766 11:53:57 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:41:57.767 [2024-11-02 11:53:58.004981] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:41:57.767 [2024-11-02 11:53:58.005056] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4059328 ] 00:41:57.767 [2024-11-02 11:53:58.077918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:57.767 [2024-11-02 11:53:58.127329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:58.025 11:53:58 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:41:58.025 11:53:58 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:41:58.025 11:53:58 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.CcCClJsoPs 00:41:58.025 11:53:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.CcCClJsoPs 00:41:58.284 11:53:58 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.d95B5Q97nc 00:41:58.284 11:53:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.d95B5Q97nc 00:41:58.542 11:53:58 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:41:58.542 11:53:58 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:41:58.542 11:53:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:58.542 11:53:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:58.542 11:53:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
00:41:58.801 11:53:59 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.CcCClJsoPs == \/\t\m\p\/\t\m\p\.\C\c\C\C\l\J\s\o\P\s ]] 00:41:58.801 11:53:59 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:41:58.801 11:53:59 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:41:58.801 11:53:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:58.801 11:53:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:41:58.801 11:53:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:59.059 11:53:59 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.d95B5Q97nc == \/\t\m\p\/\t\m\p\.\d\9\5\B\5\Q\9\7\n\c ]] 00:41:59.059 11:53:59 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:41:59.059 11:53:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:41:59.059 11:53:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:59.059 11:53:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:59.059 11:53:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:59.059 11:53:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:59.318 11:53:59 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:41:59.318 11:53:59 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:41:59.318 11:53:59 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:41:59.318 11:53:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:59.318 11:53:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:59.318 11:53:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:41:59.318 11:53:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:59.575 11:53:59 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:41:59.575 11:53:59 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:59.575 11:53:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:59.832 [2024-11-02 11:54:00.156066] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:41:59.833 nvme0n1 00:42:00.091 11:54:00 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:42:00.091 11:54:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:00.091 11:54:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:00.091 11:54:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:00.091 11:54:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:00.091 11:54:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:00.350 11:54:00 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:42:00.350 11:54:00 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:42:00.350 11:54:00 keyring_file 
-- keyring/common.sh@12 -- # get_key key1 00:42:00.350 11:54:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:00.350 11:54:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:00.350 11:54:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:00.350 11:54:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:00.608 11:54:00 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:42:00.608 11:54:00 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:42:00.608 Running I/O for 1 seconds... 00:42:02.251 4945.00 IOPS, 19.32 MiB/s 00:42:02.251 Latency(us) 00:42:02.251 [2024-11-02T10:54:02.653Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:02.251 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:42:02.251 nvme0n1 : 1.02 4956.53 19.36 0.00 0.00 25536.86 4393.34 29127.11 00:42:02.251 [2024-11-02T10:54:02.653Z] =================================================================================================================== 00:42:02.251 [2024-11-02T10:54:02.653Z] Total : 4956.53 19.36 0.00 0.00 25536.86 4393.34 29127.11 00:42:02.251 { 00:42:02.251 "results": [ 00:42:02.251 { 00:42:02.251 "job": "nvme0n1", 00:42:02.251 "core_mask": "0x2", 00:42:02.251 "workload": "randrw", 00:42:02.251 "percentage": 50, 00:42:02.251 "status": "finished", 00:42:02.251 "queue_depth": 128, 00:42:02.251 "io_size": 4096, 00:42:02.251 "runtime": 1.023701, 00:42:02.251 "iops": 4956.525391691519, 00:42:02.251 "mibps": 19.361427311294996, 00:42:02.251 "io_failed": 0, 00:42:02.251 "io_timeout": 0, 00:42:02.251 "avg_latency_us": 25536.86028467569, 00:42:02.251 "min_latency_us": 4393.339259259259, 00:42:02.251 "max_latency_us": 29127.11111111111 00:42:02.251 } 00:42:02.251 ], 00:42:02.251 "core_count": 1 00:42:02.251 } 00:42:02.251 11:54:01 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:42:02.251 11:54:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:42:02.251 11:54:02 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:42:02.251 11:54:02 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:02.251 11:54:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:02.251 11:54:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:02.251 11:54:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:02.251 11:54:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:02.251 11:54:02 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:42:02.251 11:54:02 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:42:02.251 11:54:02 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:02.251 11:54:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:02.251 11:54:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:02.251 11:54:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:02.251 11:54:02 
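Note: the connection errors above are the expected outcome of the negative test at file.sh@70: the controller is attached with --psk key1 under the NOT wrapper, the connect fails, and bdev_nvme_attach_controller returns Input/output error, which is exactly what the test wants. The positive path exercised a little earlier is essentially:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    "$RPC" -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
    # drive the 1-second randrw job through the attached bdev
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests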
keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:02.564 11:54:02 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:42:02.564 11:54:02 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:02.564 11:54:02 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:42:02.564 11:54:02 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:02.564 11:54:02 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:42:02.564 11:54:02 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:02.564 11:54:02 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:42:02.564 11:54:02 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:02.564 11:54:02 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:02.564 11:54:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:02.838 [2024-11-02 11:54:03.137162] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:42:02.838 [2024-11-02 11:54:03.138042] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x730ac0 (107): Transport endpoint is not connected 00:42:02.838 [2024-11-02 11:54:03.139019] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x730ac0 (9): Bad file descriptor 00:42:02.838 [2024-11-02 11:54:03.140017] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:42:02.838 [2024-11-02 11:54:03.140045] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:42:02.838 [2024-11-02 11:54:03.140071] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:42:02.838 [2024-11-02 11:54:03.140100] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:42:02.838 request: 00:42:02.838 { 00:42:02.838 "name": "nvme0", 00:42:02.838 "trtype": "tcp", 00:42:02.838 "traddr": "127.0.0.1", 00:42:02.838 "adrfam": "ipv4", 00:42:02.838 "trsvcid": "4420", 00:42:02.838 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:02.838 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:02.838 "prchk_reftag": false, 00:42:02.838 "prchk_guard": false, 00:42:02.838 "hdgst": false, 00:42:02.838 "ddgst": false, 00:42:02.838 "psk": "key1", 00:42:02.838 "allow_unrecognized_csi": false, 00:42:02.838 "method": "bdev_nvme_attach_controller", 00:42:02.838 "req_id": 1 00:42:02.838 } 00:42:02.838 Got JSON-RPC error response 00:42:02.838 response: 00:42:02.838 { 00:42:02.838 "code": -5, 00:42:02.838 "message": "Input/output error" 00:42:02.838 } 00:42:02.838 11:54:03 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:42:02.838 11:54:03 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:42:02.838 11:54:03 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:42:02.838 11:54:03 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:42:02.838 11:54:03 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:42:02.838 11:54:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:02.838 11:54:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:02.838 11:54:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:02.838 11:54:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:02.838 11:54:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:03.098 11:54:03 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:42:03.098 11:54:03 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:42:03.098 11:54:03 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:03.098 11:54:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:03.098 11:54:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:03.098 11:54:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:03.098 11:54:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:03.356 11:54:03 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:42:03.356 11:54:03 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:42:03.356 11:54:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:42:03.614 11:54:04 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:42:03.614 11:54:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:42:04.183 11:54:04 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:42:04.183 11:54:04 keyring_file -- keyring/file.sh@78 -- # jq length 00:42:04.183 11:54:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:04.183 11:54:04 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:42:04.183 11:54:04 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.CcCClJsoPs 00:42:04.183 11:54:04 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.CcCClJsoPs 00:42:04.183 11:54:04 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:42:04.183 11:54:04 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.CcCClJsoPs 00:42:04.183 11:54:04 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:42:04.183 11:54:04 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:04.183 11:54:04 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:42:04.183 11:54:04 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:04.183 11:54:04 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.CcCClJsoPs 00:42:04.183 11:54:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.CcCClJsoPs 00:42:04.441 [2024-11-02 11:54:04.826046] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.CcCClJsoPs': 0100660 00:42:04.441 [2024-11-02 11:54:04.826084] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:42:04.441 request: 00:42:04.441 { 00:42:04.441 "name": "key0", 00:42:04.441 "path": "/tmp/tmp.CcCClJsoPs", 00:42:04.441 "method": "keyring_file_add_key", 00:42:04.441 "req_id": 1 00:42:04.441 } 00:42:04.441 Got JSON-RPC error response 00:42:04.441 response: 00:42:04.441 { 00:42:04.441 "code": -1, 00:42:04.441 "message": "Operation not permitted" 00:42:04.441 } 00:42:04.699 11:54:04 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:42:04.699 11:54:04 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:42:04.699 11:54:04 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:42:04.699 11:54:04 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:42:04.699 11:54:04 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.CcCClJsoPs 00:42:04.699 11:54:04 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.CcCClJsoPs 00:42:04.699 11:54:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.CcCClJsoPs 00:42:04.957 11:54:05 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.CcCClJsoPs 00:42:04.957 11:54:05 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:42:04.957 11:54:05 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:04.957 11:54:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:04.957 11:54:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:04.957 11:54:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:04.957 11:54:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:05.216 11:54:05 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:42:05.216 11:54:05 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:05.216 11:54:05 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:42:05.216 11:54:05 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:05.216 11:54:05 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:42:05.216 11:54:05 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:05.216 11:54:05 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:42:05.216 11:54:05 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:05.216 11:54:05 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:05.216 11:54:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:05.474 [2024-11-02 11:54:05.680392] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.CcCClJsoPs': No such file or directory 00:42:05.474 [2024-11-02 11:54:05.680426] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:42:05.474 [2024-11-02 11:54:05.680459] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:42:05.474 [2024-11-02 11:54:05.680481] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:42:05.474 [2024-11-02 11:54:05.680502] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:42:05.474 [2024-11-02 11:54:05.680519] bdev_nvme.c:6576:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:42:05.474 request: 00:42:05.474 { 00:42:05.474 "name": "nvme0", 00:42:05.474 "trtype": "tcp", 00:42:05.474 "traddr": "127.0.0.1", 00:42:05.474 "adrfam": "ipv4", 00:42:05.474 "trsvcid": "4420", 00:42:05.474 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:05.474 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:05.474 "prchk_reftag": false, 00:42:05.474 "prchk_guard": false, 00:42:05.474 "hdgst": false, 00:42:05.474 "ddgst": false, 00:42:05.474 "psk": "key0", 00:42:05.474 "allow_unrecognized_csi": false, 00:42:05.474 "method": "bdev_nvme_attach_controller", 00:42:05.474 "req_id": 1 00:42:05.474 } 00:42:05.474 Got JSON-RPC error response 00:42:05.474 response: 00:42:05.474 { 00:42:05.474 "code": -19, 00:42:05.474 "message": "No such device" 00:42:05.474 } 00:42:05.474 11:54:05 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:42:05.474 11:54:05 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:42:05.474 11:54:05 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:42:05.474 11:54:05 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:42:05.474 11:54:05 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:42:05.474 11:54:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:42:05.732 11:54:05 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:42:05.732 11:54:05 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:42:05.732 11:54:05 keyring_file -- keyring/common.sh@17 -- # name=key0 00:42:05.732 11:54:05 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:42:05.732 11:54:05 keyring_file -- keyring/common.sh@17 -- # digest=0 00:42:05.732 11:54:05 keyring_file -- keyring/common.sh@18 -- # mktemp 00:42:05.732 11:54:05 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.GH5EXF00qh 00:42:05.732 11:54:05 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:42:05.732 11:54:05 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:42:05.733 11:54:05 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:42:05.733 11:54:05 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:42:05.733 11:54:05 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:42:05.733 11:54:05 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:42:05.733 11:54:05 keyring_file -- nvmf/common.sh@733 -- # python - 00:42:05.733 11:54:06 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.GH5EXF00qh 00:42:05.733 11:54:06 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.GH5EXF00qh 00:42:05.733 11:54:06 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.GH5EXF00qh 00:42:05.733 11:54:06 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.GH5EXF00qh 00:42:05.733 11:54:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.GH5EXF00qh 00:42:05.990 11:54:06 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:05.990 11:54:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:06.248 nvme0n1 00:42:06.248 11:54:06 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:42:06.248 11:54:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:06.507 11:54:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:06.507 11:54:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:06.507 11:54:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:06.507 11:54:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:06.766 11:54:06 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:42:06.766 11:54:06 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:42:06.766 11:54:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:42:07.025 11:54:07 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:42:07.025 11:54:07 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:42:07.025 11:54:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:07.025 11:54:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:42:07.025 11:54:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:07.283 11:54:07 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:42:07.283 11:54:07 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:42:07.283 11:54:07 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:07.283 11:54:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:07.283 11:54:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:07.283 11:54:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:07.283 11:54:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:07.542 11:54:07 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:42:07.542 11:54:07 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:42:07.542 11:54:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:42:07.800 11:54:08 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:42:07.800 11:54:08 keyring_file -- keyring/file.sh@105 -- # jq length 00:42:07.800 11:54:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:08.059 11:54:08 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:42:08.059 11:54:08 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.GH5EXF00qh 00:42:08.059 11:54:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.GH5EXF00qh 00:42:08.317 11:54:08 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.d95B5Q97nc 00:42:08.317 11:54:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.d95B5Q97nc 00:42:08.576 11:54:08 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:08.576 11:54:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:09.145 nvme0n1 00:42:09.145 11:54:09 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:42:09.145 11:54:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:42:09.410 11:54:09 keyring_file -- keyring/file.sh@113 -- # config='{ 00:42:09.410 "subsystems": [ 00:42:09.410 { 00:42:09.410 "subsystem": "keyring", 00:42:09.410 "config": [ 00:42:09.410 { 00:42:09.410 "method": "keyring_file_add_key", 00:42:09.410 "params": { 00:42:09.410 "name": "key0", 00:42:09.410 "path": "/tmp/tmp.GH5EXF00qh" 00:42:09.410 } 00:42:09.410 }, 00:42:09.410 { 00:42:09.410 "method": "keyring_file_add_key", 00:42:09.410 "params": { 00:42:09.410 "name": "key1", 00:42:09.410 "path": "/tmp/tmp.d95B5Q97nc" 00:42:09.410 } 00:42:09.410 } 00:42:09.410 ] 
00:42:09.410 }, 00:42:09.410 { 00:42:09.410 "subsystem": "iobuf", 00:42:09.410 "config": [ 00:42:09.410 { 00:42:09.410 "method": "iobuf_set_options", 00:42:09.410 "params": { 00:42:09.410 "small_pool_count": 8192, 00:42:09.410 "large_pool_count": 1024, 00:42:09.410 "small_bufsize": 8192, 00:42:09.410 "large_bufsize": 135168, 00:42:09.410 "enable_numa": false 00:42:09.410 } 00:42:09.410 } 00:42:09.410 ] 00:42:09.410 }, 00:42:09.410 { 00:42:09.410 "subsystem": "sock", 00:42:09.410 "config": [ 00:42:09.410 { 00:42:09.410 "method": "sock_set_default_impl", 00:42:09.410 "params": { 00:42:09.410 "impl_name": "posix" 00:42:09.410 } 00:42:09.410 }, 00:42:09.410 { 00:42:09.410 "method": "sock_impl_set_options", 00:42:09.410 "params": { 00:42:09.410 "impl_name": "ssl", 00:42:09.410 "recv_buf_size": 4096, 00:42:09.410 "send_buf_size": 4096, 00:42:09.410 "enable_recv_pipe": true, 00:42:09.410 "enable_quickack": false, 00:42:09.410 "enable_placement_id": 0, 00:42:09.410 "enable_zerocopy_send_server": true, 00:42:09.410 "enable_zerocopy_send_client": false, 00:42:09.410 "zerocopy_threshold": 0, 00:42:09.410 "tls_version": 0, 00:42:09.410 "enable_ktls": false 00:42:09.410 } 00:42:09.410 }, 00:42:09.410 { 00:42:09.410 "method": "sock_impl_set_options", 00:42:09.410 "params": { 00:42:09.410 "impl_name": "posix", 00:42:09.410 "recv_buf_size": 2097152, 00:42:09.410 "send_buf_size": 2097152, 00:42:09.410 "enable_recv_pipe": true, 00:42:09.411 "enable_quickack": false, 00:42:09.411 "enable_placement_id": 0, 00:42:09.411 "enable_zerocopy_send_server": true, 00:42:09.411 "enable_zerocopy_send_client": false, 00:42:09.411 "zerocopy_threshold": 0, 00:42:09.411 "tls_version": 0, 00:42:09.411 "enable_ktls": false 00:42:09.411 } 00:42:09.411 } 00:42:09.411 ] 00:42:09.411 }, 00:42:09.411 { 00:42:09.411 "subsystem": "vmd", 00:42:09.411 "config": [] 00:42:09.411 }, 00:42:09.411 { 00:42:09.411 "subsystem": "accel", 00:42:09.411 "config": [ 00:42:09.411 { 00:42:09.411 "method": "accel_set_options", 00:42:09.411 "params": { 00:42:09.411 "small_cache_size": 128, 00:42:09.411 "large_cache_size": 16, 00:42:09.411 "task_count": 2048, 00:42:09.411 "sequence_count": 2048, 00:42:09.411 "buf_count": 2048 00:42:09.411 } 00:42:09.411 } 00:42:09.411 ] 00:42:09.411 }, 00:42:09.411 { 00:42:09.411 "subsystem": "bdev", 00:42:09.411 "config": [ 00:42:09.411 { 00:42:09.411 "method": "bdev_set_options", 00:42:09.411 "params": { 00:42:09.411 "bdev_io_pool_size": 65535, 00:42:09.411 "bdev_io_cache_size": 256, 00:42:09.411 "bdev_auto_examine": true, 00:42:09.411 "iobuf_small_cache_size": 128, 00:42:09.411 "iobuf_large_cache_size": 16 00:42:09.411 } 00:42:09.411 }, 00:42:09.411 { 00:42:09.411 "method": "bdev_raid_set_options", 00:42:09.411 "params": { 00:42:09.411 "process_window_size_kb": 1024, 00:42:09.411 "process_max_bandwidth_mb_sec": 0 00:42:09.411 } 00:42:09.411 }, 00:42:09.411 { 00:42:09.411 "method": "bdev_iscsi_set_options", 00:42:09.411 "params": { 00:42:09.411 "timeout_sec": 30 00:42:09.411 } 00:42:09.411 }, 00:42:09.411 { 00:42:09.411 "method": "bdev_nvme_set_options", 00:42:09.411 "params": { 00:42:09.411 "action_on_timeout": "none", 00:42:09.411 "timeout_us": 0, 00:42:09.411 "timeout_admin_us": 0, 00:42:09.411 "keep_alive_timeout_ms": 10000, 00:42:09.411 "arbitration_burst": 0, 00:42:09.411 "low_priority_weight": 0, 00:42:09.411 "medium_priority_weight": 0, 00:42:09.411 "high_priority_weight": 0, 00:42:09.411 "nvme_adminq_poll_period_us": 10000, 00:42:09.411 "nvme_ioq_poll_period_us": 0, 00:42:09.411 "io_queue_requests": 512, 
00:42:09.411 "delay_cmd_submit": true, 00:42:09.411 "transport_retry_count": 4, 00:42:09.411 "bdev_retry_count": 3, 00:42:09.411 "transport_ack_timeout": 0, 00:42:09.411 "ctrlr_loss_timeout_sec": 0, 00:42:09.411 "reconnect_delay_sec": 0, 00:42:09.411 "fast_io_fail_timeout_sec": 0, 00:42:09.411 "disable_auto_failback": false, 00:42:09.411 "generate_uuids": false, 00:42:09.411 "transport_tos": 0, 00:42:09.411 "nvme_error_stat": false, 00:42:09.411 "rdma_srq_size": 0, 00:42:09.411 "io_path_stat": false, 00:42:09.411 "allow_accel_sequence": false, 00:42:09.411 "rdma_max_cq_size": 0, 00:42:09.411 "rdma_cm_event_timeout_ms": 0, 00:42:09.411 "dhchap_digests": [ 00:42:09.411 "sha256", 00:42:09.411 "sha384", 00:42:09.411 "sha512" 00:42:09.411 ], 00:42:09.411 "dhchap_dhgroups": [ 00:42:09.411 "null", 00:42:09.411 "ffdhe2048", 00:42:09.411 "ffdhe3072", 00:42:09.411 "ffdhe4096", 00:42:09.411 "ffdhe6144", 00:42:09.411 "ffdhe8192" 00:42:09.411 ] 00:42:09.411 } 00:42:09.411 }, 00:42:09.411 { 00:42:09.411 "method": "bdev_nvme_attach_controller", 00:42:09.411 "params": { 00:42:09.412 "name": "nvme0", 00:42:09.412 "trtype": "TCP", 00:42:09.412 "adrfam": "IPv4", 00:42:09.412 "traddr": "127.0.0.1", 00:42:09.412 "trsvcid": "4420", 00:42:09.412 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:09.412 "prchk_reftag": false, 00:42:09.412 "prchk_guard": false, 00:42:09.412 "ctrlr_loss_timeout_sec": 0, 00:42:09.412 "reconnect_delay_sec": 0, 00:42:09.412 "fast_io_fail_timeout_sec": 0, 00:42:09.412 "psk": "key0", 00:42:09.412 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:09.412 "hdgst": false, 00:42:09.412 "ddgst": false, 00:42:09.412 "multipath": "multipath" 00:42:09.412 } 00:42:09.412 }, 00:42:09.412 { 00:42:09.412 "method": "bdev_nvme_set_hotplug", 00:42:09.412 "params": { 00:42:09.412 "period_us": 100000, 00:42:09.412 "enable": false 00:42:09.412 } 00:42:09.412 }, 00:42:09.412 { 00:42:09.412 "method": "bdev_wait_for_examine" 00:42:09.412 } 00:42:09.412 ] 00:42:09.412 }, 00:42:09.412 { 00:42:09.412 "subsystem": "nbd", 00:42:09.412 "config": [] 00:42:09.412 } 00:42:09.412 ] 00:42:09.412 }' 00:42:09.412 11:54:09 keyring_file -- keyring/file.sh@115 -- # killprocess 4059328 00:42:09.412 11:54:09 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 4059328 ']' 00:42:09.412 11:54:09 keyring_file -- common/autotest_common.sh@956 -- # kill -0 4059328 00:42:09.412 11:54:09 keyring_file -- common/autotest_common.sh@957 -- # uname 00:42:09.412 11:54:09 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:42:09.412 11:54:09 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4059328 00:42:09.412 11:54:09 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:42:09.412 11:54:09 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:42:09.412 11:54:09 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4059328' 00:42:09.412 killing process with pid 4059328 00:42:09.412 11:54:09 keyring_file -- common/autotest_common.sh@971 -- # kill 4059328 00:42:09.412 Received shutdown signal, test time was about 1.000000 seconds 00:42:09.412 00:42:09.412 Latency(us) 00:42:09.412 [2024-11-02T10:54:09.814Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:09.412 [2024-11-02T10:54:09.814Z] =================================================================================================================== 00:42:09.412 [2024-11-02T10:54:09.814Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 
00:42:09.412 11:54:09 keyring_file -- common/autotest_common.sh@976 -- # wait 4059328 00:42:09.676 11:54:09 keyring_file -- keyring/file.sh@118 -- # bperfpid=4061537 00:42:09.676 11:54:09 keyring_file -- keyring/file.sh@120 -- # waitforlisten 4061537 /var/tmp/bperf.sock 00:42:09.676 11:54:09 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 4061537 ']' 00:42:09.676 11:54:09 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:09.676 11:54:09 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:42:09.676 11:54:09 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:42:09.676 11:54:09 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:09.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:42:09.676 11:54:09 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:42:09.676 11:54:09 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:09.676 11:54:09 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:42:09.676 "subsystems": [ 00:42:09.676 { 00:42:09.676 "subsystem": "keyring", 00:42:09.676 "config": [ 00:42:09.676 { 00:42:09.676 "method": "keyring_file_add_key", 00:42:09.676 "params": { 00:42:09.676 "name": "key0", 00:42:09.676 "path": "/tmp/tmp.GH5EXF00qh" 00:42:09.676 } 00:42:09.676 }, 00:42:09.676 { 00:42:09.676 "method": "keyring_file_add_key", 00:42:09.676 "params": { 00:42:09.676 "name": "key1", 00:42:09.676 "path": "/tmp/tmp.d95B5Q97nc" 00:42:09.676 } 00:42:09.676 } 00:42:09.676 ] 00:42:09.676 }, 00:42:09.676 { 00:42:09.676 "subsystem": "iobuf", 00:42:09.676 "config": [ 00:42:09.676 { 00:42:09.676 "method": "iobuf_set_options", 00:42:09.676 "params": { 00:42:09.676 "small_pool_count": 8192, 00:42:09.676 "large_pool_count": 1024, 00:42:09.676 "small_bufsize": 8192, 00:42:09.676 "large_bufsize": 135168, 00:42:09.676 "enable_numa": false 00:42:09.676 } 00:42:09.676 } 00:42:09.676 ] 00:42:09.676 }, 00:42:09.676 { 00:42:09.676 "subsystem": "sock", 00:42:09.676 "config": [ 00:42:09.676 { 00:42:09.676 "method": "sock_set_default_impl", 00:42:09.676 "params": { 00:42:09.676 "impl_name": "posix" 00:42:09.676 } 00:42:09.676 }, 00:42:09.676 { 00:42:09.676 "method": "sock_impl_set_options", 00:42:09.676 "params": { 00:42:09.676 "impl_name": "ssl", 00:42:09.676 "recv_buf_size": 4096, 00:42:09.676 "send_buf_size": 4096, 00:42:09.676 "enable_recv_pipe": true, 00:42:09.676 "enable_quickack": false, 00:42:09.676 "enable_placement_id": 0, 00:42:09.676 "enable_zerocopy_send_server": true, 00:42:09.676 "enable_zerocopy_send_client": false, 00:42:09.676 "zerocopy_threshold": 0, 00:42:09.676 "tls_version": 0, 00:42:09.676 "enable_ktls": false 00:42:09.676 } 00:42:09.676 }, 00:42:09.676 { 00:42:09.676 "method": "sock_impl_set_options", 00:42:09.676 "params": { 00:42:09.676 "impl_name": "posix", 00:42:09.676 "recv_buf_size": 2097152, 00:42:09.676 "send_buf_size": 2097152, 00:42:09.676 "enable_recv_pipe": true, 00:42:09.676 "enable_quickack": false, 00:42:09.676 "enable_placement_id": 0, 00:42:09.676 "enable_zerocopy_send_server": true, 00:42:09.676 "enable_zerocopy_send_client": false, 00:42:09.676 "zerocopy_threshold": 0, 00:42:09.676 "tls_version": 0, 00:42:09.676 "enable_ktls": false 00:42:09.676 } 00:42:09.676 } 00:42:09.676 ] 
00:42:09.676 }, 00:42:09.676 { 00:42:09.676 "subsystem": "vmd", 00:42:09.676 "config": [] 00:42:09.677 }, 00:42:09.677 { 00:42:09.677 "subsystem": "accel", 00:42:09.677 "config": [ 00:42:09.677 { 00:42:09.677 "method": "accel_set_options", 00:42:09.677 "params": { 00:42:09.677 "small_cache_size": 128, 00:42:09.677 "large_cache_size": 16, 00:42:09.677 "task_count": 2048, 00:42:09.677 "sequence_count": 2048, 00:42:09.677 "buf_count": 2048 00:42:09.677 } 00:42:09.677 } 00:42:09.677 ] 00:42:09.677 }, 00:42:09.677 { 00:42:09.677 "subsystem": "bdev", 00:42:09.677 "config": [ 00:42:09.677 { 00:42:09.677 "method": "bdev_set_options", 00:42:09.677 "params": { 00:42:09.677 "bdev_io_pool_size": 65535, 00:42:09.677 "bdev_io_cache_size": 256, 00:42:09.677 "bdev_auto_examine": true, 00:42:09.677 "iobuf_small_cache_size": 128, 00:42:09.677 "iobuf_large_cache_size": 16 00:42:09.677 } 00:42:09.677 }, 00:42:09.677 { 00:42:09.677 "method": "bdev_raid_set_options", 00:42:09.677 "params": { 00:42:09.677 "process_window_size_kb": 1024, 00:42:09.677 "process_max_bandwidth_mb_sec": 0 00:42:09.677 } 00:42:09.677 }, 00:42:09.677 { 00:42:09.677 "method": "bdev_iscsi_set_options", 00:42:09.677 "params": { 00:42:09.677 "timeout_sec": 30 00:42:09.677 } 00:42:09.677 }, 00:42:09.677 { 00:42:09.677 "method": "bdev_nvme_set_options", 00:42:09.677 "params": { 00:42:09.677 "action_on_timeout": "none", 00:42:09.677 "timeout_us": 0, 00:42:09.677 "timeout_admin_us": 0, 00:42:09.677 "keep_alive_timeout_ms": 10000, 00:42:09.677 "arbitration_burst": 0, 00:42:09.677 "low_priority_weight": 0, 00:42:09.677 "medium_priority_weight": 0, 00:42:09.677 "high_priority_weight": 0, 00:42:09.677 "nvme_adminq_poll_period_us": 10000, 00:42:09.677 "nvme_ioq_poll_period_us": 0, 00:42:09.677 "io_queue_requests": 512, 00:42:09.677 "delay_cmd_submit": true, 00:42:09.677 "transport_retry_count": 4, 00:42:09.677 "bdev_retry_count": 3, 00:42:09.677 "transport_ack_timeout": 0, 00:42:09.677 "ctrlr_loss_timeout_sec": 0, 00:42:09.677 "reconnect_delay_sec": 0, 00:42:09.677 "fast_io_fail_timeout_sec": 0, 00:42:09.677 "disable_auto_failback": false, 00:42:09.677 "generate_uuids": false, 00:42:09.677 "transport_tos": 0, 00:42:09.677 "nvme_error_stat": false, 00:42:09.677 "rdma_srq_size": 0, 00:42:09.677 "io_path_stat": false, 00:42:09.677 "allow_accel_sequence": false, 00:42:09.677 "rdma_max_cq_size": 0, 00:42:09.677 "rdma_cm_event_timeout_ms": 0, 00:42:09.677 "dhchap_digests": [ 00:42:09.677 "sha256", 00:42:09.677 "sha384", 00:42:09.677 "sha512" 00:42:09.677 ], 00:42:09.677 "dhchap_dhgroups": [ 00:42:09.677 "null", 00:42:09.677 "ffdhe2048", 00:42:09.677 "ffdhe3072", 00:42:09.677 "ffdhe4096", 00:42:09.677 "ffdhe6144", 00:42:09.677 "ffdhe8192" 00:42:09.677 ] 00:42:09.677 } 00:42:09.677 }, 00:42:09.677 { 00:42:09.677 "method": "bdev_nvme_attach_controller", 00:42:09.677 "params": { 00:42:09.677 "name": "nvme0", 00:42:09.677 "trtype": "TCP", 00:42:09.677 "adrfam": "IPv4", 00:42:09.677 "traddr": "127.0.0.1", 00:42:09.677 "trsvcid": "4420", 00:42:09.677 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:09.677 "prchk_reftag": false, 00:42:09.677 "prchk_guard": false, 00:42:09.677 "ctrlr_loss_timeout_sec": 0, 00:42:09.677 "reconnect_delay_sec": 0, 00:42:09.677 "fast_io_fail_timeout_sec": 0, 00:42:09.677 "psk": "key0", 00:42:09.677 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:09.677 "hdgst": false, 00:42:09.677 "ddgst": false, 00:42:09.677 "multipath": "multipath" 00:42:09.677 } 00:42:09.677 }, 00:42:09.677 { 00:42:09.677 "method": "bdev_nvme_set_hotplug", 00:42:09.677 
"params": { 00:42:09.677 "period_us": 100000, 00:42:09.677 "enable": false 00:42:09.677 } 00:42:09.677 }, 00:42:09.677 { 00:42:09.677 "method": "bdev_wait_for_examine" 00:42:09.677 } 00:42:09.677 ] 00:42:09.677 }, 00:42:09.677 { 00:42:09.677 "subsystem": "nbd", 00:42:09.677 "config": [] 00:42:09.677 } 00:42:09.677 ] 00:42:09.677 }' 00:42:09.677 [2024-11-02 11:54:09.906007] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 00:42:09.677 [2024-11-02 11:54:09.906098] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4061537 ] 00:42:09.677 [2024-11-02 11:54:09.972497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:09.677 [2024-11-02 11:54:10.022247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:09.935 [2024-11-02 11:54:10.202979] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:42:09.935 11:54:10 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:42:09.935 11:54:10 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:42:09.935 11:54:10 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:42:09.935 11:54:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:09.935 11:54:10 keyring_file -- keyring/file.sh@121 -- # jq length 00:42:10.193 11:54:10 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:42:10.193 11:54:10 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:42:10.193 11:54:10 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:10.193 11:54:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:10.451 11:54:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:10.451 11:54:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:10.451 11:54:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:10.709 11:54:10 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:42:10.709 11:54:10 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:42:10.709 11:54:10 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:10.709 11:54:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:10.709 11:54:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:10.709 11:54:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:10.709 11:54:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:10.968 11:54:11 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:42:10.968 11:54:11 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:42:10.968 11:54:11 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:42:10.968 11:54:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:42:11.226 11:54:11 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:42:11.226 11:54:11 keyring_file -- keyring/file.sh@1 -- # cleanup 00:42:11.226 11:54:11 
keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.GH5EXF00qh /tmp/tmp.d95B5Q97nc 00:42:11.226 11:54:11 keyring_file -- keyring/file.sh@20 -- # killprocess 4061537 00:42:11.226 11:54:11 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 4061537 ']' 00:42:11.226 11:54:11 keyring_file -- common/autotest_common.sh@956 -- # kill -0 4061537 00:42:11.226 11:54:11 keyring_file -- common/autotest_common.sh@957 -- # uname 00:42:11.226 11:54:11 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:42:11.226 11:54:11 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4061537 00:42:11.226 11:54:11 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:42:11.226 11:54:11 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:42:11.226 11:54:11 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4061537' 00:42:11.226 killing process with pid 4061537 00:42:11.226 11:54:11 keyring_file -- common/autotest_common.sh@971 -- # kill 4061537 00:42:11.226 Received shutdown signal, test time was about 1.000000 seconds 00:42:11.226 00:42:11.226 Latency(us) 00:42:11.226 [2024-11-02T10:54:11.628Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:11.226 [2024-11-02T10:54:11.628Z] =================================================================================================================== 00:42:11.226 [2024-11-02T10:54:11.628Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:42:11.226 11:54:11 keyring_file -- common/autotest_common.sh@976 -- # wait 4061537 00:42:11.486 11:54:11 keyring_file -- keyring/file.sh@21 -- # killprocess 4059313 00:42:11.486 11:54:11 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 4059313 ']' 00:42:11.486 11:54:11 keyring_file -- common/autotest_common.sh@956 -- # kill -0 4059313 00:42:11.486 11:54:11 keyring_file -- common/autotest_common.sh@957 -- # uname 00:42:11.486 11:54:11 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:42:11.486 11:54:11 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4059313 00:42:11.486 11:54:11 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:42:11.486 11:54:11 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:42:11.486 11:54:11 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4059313' 00:42:11.486 killing process with pid 4059313 00:42:11.486 11:54:11 keyring_file -- common/autotest_common.sh@971 -- # kill 4059313 00:42:11.486 11:54:11 keyring_file -- common/autotest_common.sh@976 -- # wait 4059313 00:42:11.745 00:42:11.745 real 0m14.935s 00:42:11.745 user 0m37.780s 00:42:11.745 sys 0m3.242s 00:42:11.745 11:54:12 keyring_file -- common/autotest_common.sh@1128 -- # xtrace_disable 00:42:11.745 11:54:12 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:11.745 ************************************ 00:42:11.745 END TEST keyring_file 00:42:11.745 ************************************ 00:42:11.745 11:54:12 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:42:11.745 11:54:12 -- spdk/autotest.sh@290 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:42:11.745 11:54:12 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:42:11.745 11:54:12 -- common/autotest_common.sh@1109 -- # xtrace_disable 
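The keyring_linux test launched here exercises the same attach path, but with the PSK held in the kernel session keyring instead of a temp file. Condensed sketch of the commands visible in the trace that follows (key name, NQNs and socket path are this run's; bdevperf is started with --wait-for-rpc, hence the explicit framework_start_init):

  # place the interchange-format PSK in the session keyring
  keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s
  # enable the linux keyring backend, finish init, then attach by key name
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
  # cleanup resolves the key's serial number and unlinks it, as linux.sh does
  keyctl unlink "$(keyctl search @s user :spdk-test:key0)"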
00:42:11.745 11:54:12 -- common/autotest_common.sh@10 -- # set +x 00:42:12.005 ************************************ 00:42:12.005 START TEST keyring_linux 00:42:12.005 ************************************ 00:42:12.005 11:54:12 keyring_linux -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:42:12.005 Joined session keyring: 287080514 00:42:12.005 * Looking for test storage... 00:42:12.005 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:42:12.005 11:54:12 keyring_linux -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:42:12.005 11:54:12 keyring_linux -- common/autotest_common.sh@1691 -- # lcov --version 00:42:12.005 11:54:12 keyring_linux -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:42:12.005 11:54:12 keyring_linux -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:42:12.005 11:54:12 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:12.005 11:54:12 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:12.005 11:54:12 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:12.005 11:54:12 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:42:12.005 11:54:12 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:42:12.005 11:54:12 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:42:12.005 11:54:12 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:42:12.005 11:54:12 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:42:12.005 11:54:12 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:42:12.005 11:54:12 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:42:12.005 11:54:12 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:12.005 11:54:12 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:42:12.005 11:54:12 keyring_linux -- scripts/common.sh@345 -- # : 1 00:42:12.005 11:54:12 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:12.005 11:54:12 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:12.005 11:54:12 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:42:12.005 11:54:12 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:42:12.005 11:54:12 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:12.005 11:54:12 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:42:12.005 11:54:12 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:42:12.005 11:54:12 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:42:12.005 11:54:12 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:42:12.005 11:54:12 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:12.005 11:54:12 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:42:12.005 11:54:12 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:42:12.005 11:54:12 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:12.005 11:54:12 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:12.005 11:54:12 keyring_linux -- scripts/common.sh@368 -- # return 0 00:42:12.005 11:54:12 keyring_linux -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:12.005 11:54:12 keyring_linux -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:42:12.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:12.005 --rc genhtml_branch_coverage=1 00:42:12.005 --rc genhtml_function_coverage=1 00:42:12.005 --rc genhtml_legend=1 00:42:12.005 --rc geninfo_all_blocks=1 00:42:12.005 --rc geninfo_unexecuted_blocks=1 00:42:12.005 00:42:12.005 ' 00:42:12.005 11:54:12 keyring_linux -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:42:12.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:12.005 --rc genhtml_branch_coverage=1 00:42:12.005 --rc genhtml_function_coverage=1 00:42:12.005 --rc genhtml_legend=1 00:42:12.005 --rc geninfo_all_blocks=1 00:42:12.005 --rc geninfo_unexecuted_blocks=1 00:42:12.005 00:42:12.005 ' 00:42:12.005 11:54:12 keyring_linux -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:42:12.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:12.005 --rc genhtml_branch_coverage=1 00:42:12.005 --rc genhtml_function_coverage=1 00:42:12.005 --rc genhtml_legend=1 00:42:12.005 --rc geninfo_all_blocks=1 00:42:12.005 --rc geninfo_unexecuted_blocks=1 00:42:12.005 00:42:12.005 ' 00:42:12.005 11:54:12 keyring_linux -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:42:12.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:12.005 --rc genhtml_branch_coverage=1 00:42:12.005 --rc genhtml_function_coverage=1 00:42:12.005 --rc genhtml_legend=1 00:42:12.005 --rc geninfo_all_blocks=1 00:42:12.005 --rc geninfo_unexecuted_blocks=1 00:42:12.005 00:42:12.005 ' 00:42:12.005 11:54:12 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:42:12.005 11:54:12 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:12.005 11:54:12 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:42:12.005 11:54:12 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:12.005 11:54:12 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:12.005 11:54:12 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:12.005 11:54:12 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:12.005 11:54:12 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:42:12.005 11:54:12 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:12.005 11:54:12 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:12.005 11:54:12 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:12.005 11:54:12 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:12.005 11:54:12 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:12.005 11:54:12 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:12.005 11:54:12 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:42:12.005 11:54:12 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:12.005 11:54:12 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:12.005 11:54:12 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:12.005 11:54:12 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:12.005 11:54:12 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:12.005 11:54:12 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:42:12.005 11:54:12 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:12.005 11:54:12 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:12.005 11:54:12 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:12.005 11:54:12 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:12.005 11:54:12 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:12.006 11:54:12 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:12.006 11:54:12 keyring_linux -- paths/export.sh@5 -- # export PATH 00:42:12.006 11:54:12 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:42:12.006 11:54:12 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:42:12.006 11:54:12 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:12.006 11:54:12 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:12.006 11:54:12 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:12.006 11:54:12 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:12.006 11:54:12 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:12.006 11:54:12 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:12.006 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:12.006 11:54:12 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:12.006 11:54:12 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:12.006 11:54:12 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:12.006 11:54:12 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:42:12.006 11:54:12 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:42:12.006 11:54:12 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:42:12.006 11:54:12 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:42:12.006 11:54:12 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:42:12.006 11:54:12 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:42:12.006 11:54:12 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:42:12.006 11:54:12 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:42:12.006 11:54:12 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:42:12.006 11:54:12 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:42:12.006 11:54:12 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:42:12.006 11:54:12 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:42:12.006 11:54:12 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:42:12.006 11:54:12 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:42:12.006 11:54:12 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:42:12.006 11:54:12 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:42:12.006 11:54:12 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:42:12.006 11:54:12 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:42:12.006 11:54:12 keyring_linux -- nvmf/common.sh@733 -- # python - 00:42:12.006 11:54:12 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:42:12.006 11:54:12 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:42:12.006 /tmp/:spdk-test:key0 00:42:12.006 11:54:12 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:42:12.006 11:54:12 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:42:12.006 11:54:12 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:42:12.006 11:54:12 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:42:12.006 11:54:12 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:42:12.006 11:54:12 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:42:12.006 
11:54:12 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:42:12.006 11:54:12 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:42:12.006 11:54:12 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:42:12.006 11:54:12 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:42:12.006 11:54:12 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:42:12.006 11:54:12 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:42:12.006 11:54:12 keyring_linux -- nvmf/common.sh@733 -- # python - 00:42:12.006 11:54:12 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:42:12.006 11:54:12 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:42:12.006 /tmp/:spdk-test:key1 00:42:12.006 11:54:12 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=4061894 00:42:12.006 11:54:12 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:42:12.006 11:54:12 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 4061894 00:42:12.006 11:54:12 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 4061894 ']' 00:42:12.006 11:54:12 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:12.006 11:54:12 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:42:12.006 11:54:12 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:12.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:12.006 11:54:12 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:42:12.006 11:54:12 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:12.266 [2024-11-02 11:54:12.453029] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 
00:42:12.266 [2024-11-02 11:54:12.453111] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4061894 ] 00:42:12.266 [2024-11-02 11:54:12.519338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:12.267 [2024-11-02 11:54:12.569968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:12.526 11:54:12 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:42:12.526 11:54:12 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:42:12.526 11:54:12 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:42:12.526 11:54:12 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:12.526 11:54:12 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:12.526 [2024-11-02 11:54:12.846941] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:12.526 null0 00:42:12.526 [2024-11-02 11:54:12.878999] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:42:12.526 [2024-11-02 11:54:12.879548] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:42:12.526 11:54:12 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:12.526 11:54:12 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:42:12.526 303792330 00:42:12.526 11:54:12 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:42:12.526 85427548 00:42:12.526 11:54:12 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=4061913 00:42:12.526 11:54:12 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 4061913 /var/tmp/bperf.sock 00:42:12.526 11:54:12 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:42:12.526 11:54:12 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 4061913 ']' 00:42:12.526 11:54:12 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:12.526 11:54:12 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:42:12.526 11:54:12 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:12.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:42:12.526 11:54:12 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:42:12.526 11:54:12 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:12.785 [2024-11-02 11:54:12.949241] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 23.11.0 initialization... 
00:42:12.785 [2024-11-02 11:54:12.949325] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4061913 ] 00:42:12.785 [2024-11-02 11:54:13.020403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:12.785 [2024-11-02 11:54:13.069541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:13.043 11:54:13 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:42:13.043 11:54:13 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:42:13.043 11:54:13 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:42:13.043 11:54:13 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:42:13.301 11:54:13 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:42:13.301 11:54:13 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:42:13.560 11:54:13 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:42:13.560 11:54:13 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:42:13.818 [2024-11-02 11:54:14.086301] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:42:13.818 nvme0n1 00:42:13.818 11:54:14 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:42:13.818 11:54:14 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:42:13.818 11:54:14 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:42:13.818 11:54:14 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:42:13.818 11:54:14 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:42:13.818 11:54:14 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:14.076 11:54:14 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:42:14.076 11:54:14 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:42:14.076 11:54:14 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:42:14.076 11:54:14 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:42:14.076 11:54:14 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:14.076 11:54:14 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:14.076 11:54:14 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:42:14.644 11:54:14 keyring_linux -- keyring/linux.sh@25 -- # sn=303792330 00:42:14.644 11:54:14 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:42:14.644 11:54:14 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:42:14.644 11:54:14 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 303792330 == \3\0\3\7\9\2\3\3\0 ]] 00:42:14.644 11:54:14 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 303792330 00:42:14.644 11:54:14 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:42:14.644 11:54:14 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:42:14.644 Running I/O for 1 seconds... 00:42:15.579 5258.00 IOPS, 20.54 MiB/s 00:42:15.579 Latency(us) 00:42:15.579 [2024-11-02T10:54:15.981Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:15.579 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:42:15.579 nvme0n1 : 1.02 5287.56 20.65 0.00 0.00 24009.70 7815.77 31845.64 00:42:15.579 [2024-11-02T10:54:15.981Z] =================================================================================================================== 00:42:15.579 [2024-11-02T10:54:15.981Z] Total : 5287.56 20.65 0.00 0.00 24009.70 7815.77 31845.64 00:42:15.579 { 00:42:15.579 "results": [ 00:42:15.579 { 00:42:15.579 "job": "nvme0n1", 00:42:15.579 "core_mask": "0x2", 00:42:15.579 "workload": "randread", 00:42:15.579 "status": "finished", 00:42:15.579 "queue_depth": 128, 00:42:15.579 "io_size": 4096, 00:42:15.579 "runtime": 1.018807, 00:42:15.579 "iops": 5287.556917060837, 00:42:15.579 "mibps": 20.654519207268894, 00:42:15.579 "io_failed": 0, 00:42:15.579 "io_timeout": 0, 00:42:15.579 "avg_latency_us": 24009.703337939758, 00:42:15.579 "min_latency_us": 7815.774814814815, 00:42:15.579 "max_latency_us": 31845.64148148148 00:42:15.579 } 00:42:15.579 ], 00:42:15.579 "core_count": 1 00:42:15.579 } 00:42:15.579 11:54:15 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:42:15.579 11:54:15 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:42:15.837 11:54:16 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:42:15.837 11:54:16 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:42:15.837 11:54:16 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:42:15.838 11:54:16 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:42:15.838 11:54:16 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:42:15.838 11:54:16 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:16.096 11:54:16 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:42:16.096 11:54:16 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:42:16.096 11:54:16 keyring_linux -- keyring/linux.sh@23 -- # return 00:42:16.097 11:54:16 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:42:16.097 11:54:16 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:42:16.097 11:54:16 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:42:16.097 11:54:16 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:42:16.097 11:54:16 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:16.097 11:54:16 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:42:16.097 11:54:16 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:16.097 11:54:16 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:42:16.097 11:54:16 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:42:16.356 [2024-11-02 11:54:16.717036] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:42:16.356 [2024-11-02 11:54:16.717624] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b7870 (107): Transport endpoint is not connected 00:42:16.356 [2024-11-02 11:54:16.718609] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b7870 (9): Bad file descriptor 00:42:16.356 [2024-11-02 11:54:16.719607] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:42:16.356 [2024-11-02 11:54:16.719635] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:42:16.356 [2024-11-02 11:54:16.719661] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:42:16.356 [2024-11-02 11:54:16.719692] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
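The attach attempt above, made with --psk :spdk-test:key1, is the negative path of the keyring_linux test: the target drops the TCP connection and the RPC surfaces an Input/output error, dumped in the request/response pair below. The earlier, successful path can be approximated by hand with the same tools this run drives. The following is a minimal sketch, not the test script itself: it assumes a bdevperf instance is already listening on /var/tmp/bperf.sock, reuses the sample TLS PSK that keyctl print showed above, and adds a keyctl add step that is not visible in this section (an assumption about how the harness seeded the key).

  # Assumption: bdevperf is already running with its RPC socket at /var/tmp/bperf.sock.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  SOCK=/var/tmp/bperf.sock

  # Seed a TLS PSK in the kernel session keyring (assumed step; the payload is the
  # sample key printed by 'keyctl print' earlier in the log).
  keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s

  # Let the app resolve PSK names through the Linux keyring, then finish init.
  $RPC -s $SOCK keyring_linux_set_options --enable
  $RPC -s $SOCK framework_start_init

  # Attach over NVMe/TCP, naming the keyring entry as the PSK.
  $RPC -s $SOCK bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
      --psk :spdk-test:key0

  # The app should now report one key, and keyctl should resolve the same name.
  $RPC -s $SOCK keyring_get_keys
  keyctl search @s user :spdk-test:key0

  # Cleanup mirrors the unlink_key calls in the log's cleanup section.
  keyctl unlink "$(keyctl search @s user :spdk-test:key0)"

With the controller attached, running examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests drives the randread workload whose summary appears earlier in this section.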
00:42:16.356 request: 00:42:16.356 { 00:42:16.356 "name": "nvme0", 00:42:16.356 "trtype": "tcp", 00:42:16.356 "traddr": "127.0.0.1", 00:42:16.356 "adrfam": "ipv4", 00:42:16.356 "trsvcid": "4420", 00:42:16.356 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:16.356 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:16.356 "prchk_reftag": false, 00:42:16.356 "prchk_guard": false, 00:42:16.356 "hdgst": false, 00:42:16.356 "ddgst": false, 00:42:16.356 "psk": ":spdk-test:key1", 00:42:16.356 "allow_unrecognized_csi": false, 00:42:16.356 "method": "bdev_nvme_attach_controller", 00:42:16.356 "req_id": 1 00:42:16.356 } 00:42:16.356 Got JSON-RPC error response 00:42:16.356 response: 00:42:16.356 { 00:42:16.356 "code": -5, 00:42:16.356 "message": "Input/output error" 00:42:16.356 } 00:42:16.356 11:54:16 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:42:16.356 11:54:16 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:42:16.356 11:54:16 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:42:16.356 11:54:16 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:42:16.356 11:54:16 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:42:16.356 11:54:16 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:42:16.356 11:54:16 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:42:16.356 11:54:16 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:42:16.356 11:54:16 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:42:16.356 11:54:16 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:42:16.356 11:54:16 keyring_linux -- keyring/linux.sh@33 -- # sn=303792330 00:42:16.356 11:54:16 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 303792330 00:42:16.356 1 links removed 00:42:16.356 11:54:16 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:42:16.356 11:54:16 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:42:16.356 11:54:16 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:42:16.356 11:54:16 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:42:16.356 11:54:16 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:42:16.356 11:54:16 keyring_linux -- keyring/linux.sh@33 -- # sn=85427548 00:42:16.356 11:54:16 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 85427548 00:42:16.356 1 links removed 00:42:16.356 11:54:16 keyring_linux -- keyring/linux.sh@41 -- # killprocess 4061913 00:42:16.356 11:54:16 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 4061913 ']' 00:42:16.356 11:54:16 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 4061913 00:42:16.356 11:54:16 keyring_linux -- common/autotest_common.sh@957 -- # uname 00:42:16.356 11:54:16 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:42:16.615 11:54:16 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4061913 00:42:16.615 11:54:16 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:42:16.615 11:54:16 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:42:16.615 11:54:16 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4061913' 00:42:16.615 killing process with pid 4061913 00:42:16.615 11:54:16 keyring_linux -- common/autotest_common.sh@971 -- # kill 4061913 00:42:16.615 Received shutdown signal, test time was about 1.000000 seconds 00:42:16.615 00:42:16.615 
Latency(us) 00:42:16.615 [2024-11-02T10:54:17.017Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:16.615 [2024-11-02T10:54:17.017Z] =================================================================================================================== 00:42:16.615 [2024-11-02T10:54:17.017Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:16.615 11:54:16 keyring_linux -- common/autotest_common.sh@976 -- # wait 4061913 00:42:16.615 11:54:16 keyring_linux -- keyring/linux.sh@42 -- # killprocess 4061894 00:42:16.615 11:54:16 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 4061894 ']' 00:42:16.615 11:54:16 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 4061894 00:42:16.615 11:54:16 keyring_linux -- common/autotest_common.sh@957 -- # uname 00:42:16.615 11:54:16 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:42:16.615 11:54:16 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4061894 00:42:16.615 11:54:16 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:42:16.615 11:54:16 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:42:16.615 11:54:16 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4061894' 00:42:16.615 killing process with pid 4061894 00:42:16.615 11:54:16 keyring_linux -- common/autotest_common.sh@971 -- # kill 4061894 00:42:16.615 11:54:16 keyring_linux -- common/autotest_common.sh@976 -- # wait 4061894 00:42:17.182 00:42:17.182 real 0m5.204s 00:42:17.182 user 0m10.060s 00:42:17.182 sys 0m1.589s 00:42:17.182 11:54:17 keyring_linux -- common/autotest_common.sh@1128 -- # xtrace_disable 00:42:17.182 11:54:17 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:17.182 ************************************ 00:42:17.182 END TEST keyring_linux 00:42:17.182 ************************************ 00:42:17.182 11:54:17 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:42:17.182 11:54:17 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:42:17.182 11:54:17 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:42:17.182 11:54:17 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:42:17.182 11:54:17 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:42:17.182 11:54:17 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:42:17.182 11:54:17 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:42:17.182 11:54:17 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:42:17.182 11:54:17 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:42:17.182 11:54:17 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:42:17.182 11:54:17 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:42:17.182 11:54:17 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:42:17.182 11:54:17 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:42:17.182 11:54:17 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:42:17.182 11:54:17 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:42:17.182 11:54:17 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:42:17.182 11:54:17 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:42:17.182 11:54:17 -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:17.182 11:54:17 -- common/autotest_common.sh@10 -- # set +x 00:42:17.182 11:54:17 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:42:17.182 11:54:17 -- common/autotest_common.sh@1394 -- # local autotest_es=0 00:42:17.182 11:54:17 -- common/autotest_common.sh@1395 -- # xtrace_disable 00:42:17.182 11:54:17 -- common/autotest_common.sh@10 -- # set +x 00:42:19.090 INFO: APP EXITING 
00:42:19.090 INFO: killing all VMs 00:42:19.090 INFO: killing vhost app 00:42:19.090 INFO: EXIT DONE 00:42:20.025 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:42:20.025 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:42:20.025 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:42:20.025 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:42:20.025 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:42:20.025 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:42:20.025 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:42:20.025 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:42:20.025 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:42:20.025 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:42:20.025 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:42:20.025 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:42:20.025 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:42:20.025 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:42:20.025 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:42:20.025 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:42:20.284 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:42:21.661 Cleaning 00:42:21.661 Removing: /var/run/dpdk/spdk0/config 00:42:21.661 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:42:21.661 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:42:21.661 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:42:21.661 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:42:21.661 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:42:21.661 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:42:21.661 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:42:21.661 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:42:21.661 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:42:21.661 Removing: /var/run/dpdk/spdk0/hugepage_info 00:42:21.661 Removing: /var/run/dpdk/spdk1/config 00:42:21.661 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:42:21.661 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:42:21.661 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:42:21.661 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:42:21.661 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:42:21.661 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:42:21.661 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:42:21.661 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:42:21.661 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:42:21.661 Removing: /var/run/dpdk/spdk1/hugepage_info 00:42:21.661 Removing: /var/run/dpdk/spdk2/config 00:42:21.661 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:42:21.661 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:42:21.661 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:42:21.661 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:42:21.661 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:42:21.661 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:42:21.661 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:42:21.661 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:42:21.661 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:42:21.661 Removing: /var/run/dpdk/spdk2/hugepage_info 00:42:21.661 Removing: /var/run/dpdk/spdk3/config 00:42:21.661 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:42:21.661 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:42:21.661 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:42:21.661 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:42:21.661 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:42:21.661 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:42:21.661 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:42:21.661 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:42:21.661 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:42:21.661 Removing: /var/run/dpdk/spdk3/hugepage_info 00:42:21.661 Removing: /var/run/dpdk/spdk4/config 00:42:21.661 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:42:21.661 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:42:21.661 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:42:21.661 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:42:21.661 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:42:21.661 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:42:21.661 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:42:21.661 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:42:21.661 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:42:21.661 Removing: /var/run/dpdk/spdk4/hugepage_info 00:42:21.661 Removing: /dev/shm/bdev_svc_trace.1 00:42:21.661 Removing: /dev/shm/nvmf_trace.0 00:42:21.661 Removing: /dev/shm/spdk_tgt_trace.pid3679752 00:42:21.661 Removing: /var/run/dpdk/spdk0 00:42:21.661 Removing: /var/run/dpdk/spdk1 00:42:21.661 Removing: /var/run/dpdk/spdk2 00:42:21.661 Removing: /var/run/dpdk/spdk3 00:42:21.661 Removing: /var/run/dpdk/spdk4 00:42:21.661 Removing: /var/run/dpdk/spdk_pid3678069 00:42:21.661 Removing: /var/run/dpdk/spdk_pid3678809 00:42:21.661 Removing: /var/run/dpdk/spdk_pid3679752 00:42:21.661 Removing: /var/run/dpdk/spdk_pid3680098 00:42:21.661 Removing: /var/run/dpdk/spdk_pid3680778 00:42:21.661 Removing: /var/run/dpdk/spdk_pid3680918 00:42:21.661 Removing: /var/run/dpdk/spdk_pid3681630 00:42:21.661 Removing: /var/run/dpdk/spdk_pid3681761 00:42:21.661 Removing: /var/run/dpdk/spdk_pid3682019 00:42:21.661 Removing: /var/run/dpdk/spdk_pid3683225 00:42:21.661 Removing: /var/run/dpdk/spdk_pid3684148 00:42:21.661 Removing: /var/run/dpdk/spdk_pid3684458 00:42:21.661 Removing: /var/run/dpdk/spdk_pid3684658 00:42:21.661 Removing: /var/run/dpdk/spdk_pid3684871 00:42:21.661 Removing: /var/run/dpdk/spdk_pid3685076 00:42:21.661 Removing: /var/run/dpdk/spdk_pid3685293 00:42:21.661 Removing: /var/run/dpdk/spdk_pid3685500 00:42:21.661 Removing: /var/run/dpdk/spdk_pid3685686 00:42:21.661 Removing: /var/run/dpdk/spdk_pid3685980 00:42:21.661 Removing: /var/run/dpdk/spdk_pid3688994 00:42:21.661 Removing: /var/run/dpdk/spdk_pid3689203 00:42:21.661 Removing: /var/run/dpdk/spdk_pid3689438 00:42:21.661 Removing: /var/run/dpdk/spdk_pid3689451 00:42:21.661 Removing: /var/run/dpdk/spdk_pid3689758 00:42:21.661 Removing: /var/run/dpdk/spdk_pid3689881 00:42:21.661 Removing: /var/run/dpdk/spdk_pid3690192 00:42:21.661 Removing: /var/run/dpdk/spdk_pid3690234 00:42:21.661 Removing: /var/run/dpdk/spdk_pid3690483 00:42:21.661 Removing: /var/run/dpdk/spdk_pid3690495 00:42:21.661 Removing: /var/run/dpdk/spdk_pid3690657 00:42:21.661 Removing: /var/run/dpdk/spdk_pid3690788 00:42:21.661 Removing: /var/run/dpdk/spdk_pid3691164 00:42:21.661 Removing: /var/run/dpdk/spdk_pid3691327 00:42:21.661 Removing: /var/run/dpdk/spdk_pid3691646 00:42:21.661 Removing: 
/var/run/dpdk/spdk_pid3693757 00:42:21.661 Removing: /var/run/dpdk/spdk_pid3696393 00:42:21.661 Removing: /var/run/dpdk/spdk_pid3703390 00:42:21.661 Removing: /var/run/dpdk/spdk_pid3703794 00:42:21.661 Removing: /var/run/dpdk/spdk_pid3706318 00:42:21.661 Removing: /var/run/dpdk/spdk_pid3706484 00:42:21.661 Removing: /var/run/dpdk/spdk_pid3709120 00:42:21.661 Removing: /var/run/dpdk/spdk_pid3712866 00:42:21.661 Removing: /var/run/dpdk/spdk_pid3715056 00:42:21.661 Removing: /var/run/dpdk/spdk_pid3721950 00:42:21.661 Removing: /var/run/dpdk/spdk_pid3727329 00:42:21.661 Removing: /var/run/dpdk/spdk_pid3728548 00:42:21.661 Removing: /var/run/dpdk/spdk_pid3729202 00:42:21.661 Removing: /var/run/dpdk/spdk_pid3739585 00:42:21.661 Removing: /var/run/dpdk/spdk_pid3742005 00:42:21.661 Removing: /var/run/dpdk/spdk_pid3797340 00:42:21.661 Removing: /var/run/dpdk/spdk_pid3800592 00:42:21.661 Removing: /var/run/dpdk/spdk_pid3804548 00:42:21.662 Removing: /var/run/dpdk/spdk_pid3808824 00:42:21.662 Removing: /var/run/dpdk/spdk_pid3808833 00:42:21.662 Removing: /var/run/dpdk/spdk_pid3809492 00:42:21.662 Removing: /var/run/dpdk/spdk_pid3810148 00:42:21.662 Removing: /var/run/dpdk/spdk_pid3810687 00:42:21.662 Removing: /var/run/dpdk/spdk_pid3811088 00:42:21.662 Removing: /var/run/dpdk/spdk_pid3811207 00:42:21.662 Removing: /var/run/dpdk/spdk_pid3811355 00:42:21.662 Removing: /var/run/dpdk/spdk_pid3811478 00:42:21.662 Removing: /var/run/dpdk/spdk_pid3811490 00:42:21.662 Removing: /var/run/dpdk/spdk_pid3812142 00:42:21.662 Removing: /var/run/dpdk/spdk_pid3812680 00:42:21.662 Removing: /var/run/dpdk/spdk_pid3813445 00:42:21.662 Removing: /var/run/dpdk/spdk_pid3813847 00:42:21.662 Removing: /var/run/dpdk/spdk_pid3813877 00:42:21.662 Removing: /var/run/dpdk/spdk_pid3814511 00:42:21.662 Removing: /var/run/dpdk/spdk_pid3815507 00:42:21.662 Removing: /var/run/dpdk/spdk_pid3816249 00:42:21.662 Removing: /var/run/dpdk/spdk_pid3821575 00:42:21.662 Removing: /var/run/dpdk/spdk_pid3850394 00:42:21.662 Removing: /var/run/dpdk/spdk_pid3853313 00:42:21.662 Removing: /var/run/dpdk/spdk_pid3854488 00:42:21.662 Removing: /var/run/dpdk/spdk_pid3855808 00:42:21.662 Removing: /var/run/dpdk/spdk_pid3855952 00:42:21.662 Removing: /var/run/dpdk/spdk_pid3856093 00:42:21.662 Removing: /var/run/dpdk/spdk_pid3856226 00:42:21.662 Removing: /var/run/dpdk/spdk_pid3856671 00:42:21.662 Removing: /var/run/dpdk/spdk_pid3858001 00:42:21.662 Removing: /var/run/dpdk/spdk_pid3858739 00:42:21.662 Removing: /var/run/dpdk/spdk_pid3859059 00:42:21.662 Removing: /var/run/dpdk/spdk_pid3860656 00:42:21.662 Removing: /var/run/dpdk/spdk_pid3861076 00:42:21.662 Removing: /var/run/dpdk/spdk_pid3861607 00:42:21.662 Removing: /var/run/dpdk/spdk_pid3864018 00:42:21.662 Removing: /var/run/dpdk/spdk_pid3867929 00:42:21.662 Removing: /var/run/dpdk/spdk_pid3867930 00:42:21.662 Removing: /var/run/dpdk/spdk_pid3867931 00:42:21.662 Removing: /var/run/dpdk/spdk_pid3870124 00:42:21.662 Removing: /var/run/dpdk/spdk_pid3872343 00:42:21.662 Removing: /var/run/dpdk/spdk_pid3875756 00:42:21.662 Removing: /var/run/dpdk/spdk_pid3898791 00:42:21.662 Removing: /var/run/dpdk/spdk_pid3901557 00:42:21.662 Removing: /var/run/dpdk/spdk_pid3905340 00:42:21.921 Removing: /var/run/dpdk/spdk_pid3906284 00:42:21.921 Removing: /var/run/dpdk/spdk_pid3907390 00:42:21.921 Removing: /var/run/dpdk/spdk_pid3908353 00:42:21.921 Removing: /var/run/dpdk/spdk_pid3911288 00:42:21.921 Removing: /var/run/dpdk/spdk_pid3913532 00:42:21.921 Removing: /var/run/dpdk/spdk_pid3917772 00:42:21.921 Removing: 
/var/run/dpdk/spdk_pid3917885 00:42:21.921 Removing: /var/run/dpdk/spdk_pid3920671 00:42:21.921 Removing: /var/run/dpdk/spdk_pid3920921 00:42:21.921 Removing: /var/run/dpdk/spdk_pid3921061 00:42:21.921 Removing: /var/run/dpdk/spdk_pid3921327 00:42:21.921 Removing: /var/run/dpdk/spdk_pid3921334 00:42:21.921 Removing: /var/run/dpdk/spdk_pid3922528 00:42:21.921 Removing: /var/run/dpdk/spdk_pid3923704 00:42:21.921 Removing: /var/run/dpdk/spdk_pid3924881 00:42:21.921 Removing: /var/run/dpdk/spdk_pid3926182 00:42:21.921 Removing: /var/run/dpdk/spdk_pid3927863 00:42:21.921 Removing: /var/run/dpdk/spdk_pid3929159 00:42:21.921 Removing: /var/run/dpdk/spdk_pid3932976 00:42:21.921 Removing: /var/run/dpdk/spdk_pid3933306 00:42:21.921 Removing: /var/run/dpdk/spdk_pid3934707 00:42:21.921 Removing: /var/run/dpdk/spdk_pid3935451 00:42:21.921 Removing: /var/run/dpdk/spdk_pid3939169 00:42:21.921 Removing: /var/run/dpdk/spdk_pid3941135 00:42:21.921 Removing: /var/run/dpdk/spdk_pid3944546 00:42:21.921 Removing: /var/run/dpdk/spdk_pid3947832 00:42:21.921 Removing: /var/run/dpdk/spdk_pid3954234 00:42:21.921 Removing: /var/run/dpdk/spdk_pid3959317 00:42:21.921 Removing: /var/run/dpdk/spdk_pid3959319 00:42:21.921 Removing: /var/run/dpdk/spdk_pid3971963 00:42:21.921 Removing: /var/run/dpdk/spdk_pid3972375 00:42:21.921 Removing: /var/run/dpdk/spdk_pid3972895 00:42:21.921 Removing: /var/run/dpdk/spdk_pid3973303 00:42:21.921 Removing: /var/run/dpdk/spdk_pid3973878 00:42:21.921 Removing: /var/run/dpdk/spdk_pid3974286 00:42:21.921 Removing: /var/run/dpdk/spdk_pid3974702 00:42:21.921 Removing: /var/run/dpdk/spdk_pid3975105 00:42:21.921 Removing: /var/run/dpdk/spdk_pid3977606 00:42:21.921 Removing: /var/run/dpdk/spdk_pid3977784 00:42:21.921 Removing: /var/run/dpdk/spdk_pid3981547 00:42:21.921 Removing: /var/run/dpdk/spdk_pid3981713 00:42:21.921 Removing: /var/run/dpdk/spdk_pid3985077 00:42:21.921 Removing: /var/run/dpdk/spdk_pid3987563 00:42:21.921 Removing: /var/run/dpdk/spdk_pid3995090 00:42:21.921 Removing: /var/run/dpdk/spdk_pid3995489 00:42:21.921 Removing: /var/run/dpdk/spdk_pid3997990 00:42:21.921 Removing: /var/run/dpdk/spdk_pid3998198 00:42:21.921 Removing: /var/run/dpdk/spdk_pid4000763 00:42:21.921 Removing: /var/run/dpdk/spdk_pid4004448 00:42:21.921 Removing: /var/run/dpdk/spdk_pid4006606 00:42:21.921 Removing: /var/run/dpdk/spdk_pid4012845 00:42:21.921 Removing: /var/run/dpdk/spdk_pid4018039 00:42:21.921 Removing: /var/run/dpdk/spdk_pid4019232 00:42:21.921 Removing: /var/run/dpdk/spdk_pid4019898 00:42:21.921 Removing: /var/run/dpdk/spdk_pid4030633 00:42:21.921 Removing: /var/run/dpdk/spdk_pid4032838 00:42:21.921 Removing: /var/run/dpdk/spdk_pid4034807 00:42:21.921 Removing: /var/run/dpdk/spdk_pid4039849 00:42:21.921 Removing: /var/run/dpdk/spdk_pid4039854 00:42:21.921 Removing: /var/run/dpdk/spdk_pid4042760 00:42:21.921 Removing: /var/run/dpdk/spdk_pid4044155 00:42:21.921 Removing: /var/run/dpdk/spdk_pid4045547 00:42:21.921 Removing: /var/run/dpdk/spdk_pid4046403 00:42:21.921 Removing: /var/run/dpdk/spdk_pid4047723 00:42:21.921 Removing: /var/run/dpdk/spdk_pid4048582 00:42:21.921 Removing: /var/run/dpdk/spdk_pid4053863 00:42:21.921 Removing: /var/run/dpdk/spdk_pid4054243 00:42:21.921 Removing: /var/run/dpdk/spdk_pid4054636 00:42:21.921 Removing: /var/run/dpdk/spdk_pid4056190 00:42:21.921 Removing: /var/run/dpdk/spdk_pid4056567 00:42:21.921 Removing: /var/run/dpdk/spdk_pid4056863 00:42:21.921 Removing: /var/run/dpdk/spdk_pid4059313 00:42:21.921 Removing: /var/run/dpdk/spdk_pid4059328 00:42:21.921 Removing: 
/var/run/dpdk/spdk_pid4061537 00:42:21.921 Removing: /var/run/dpdk/spdk_pid4061894 00:42:21.921 Removing: /var/run/dpdk/spdk_pid4061913 00:42:21.921 Clean 00:42:21.921 11:54:22 -- common/autotest_common.sh@1451 -- # return 0 00:42:21.921 11:54:22 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:42:21.921 11:54:22 -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:21.921 11:54:22 -- common/autotest_common.sh@10 -- # set +x 00:42:21.921 11:54:22 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:42:21.921 11:54:22 -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:21.921 11:54:22 -- common/autotest_common.sh@10 -- # set +x 00:42:22.180 11:54:22 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:42:22.181 11:54:22 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:42:22.181 11:54:22 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:42:22.181 11:54:22 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:42:22.181 11:54:22 -- spdk/autotest.sh@394 -- # hostname 00:42:22.181 11:54:22 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:42:22.181 geninfo: WARNING: invalid characters removed from testname! 00:42:54.289 11:54:53 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:42:57.599 11:54:57 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:00.893 11:55:00 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:03.433 11:55:03 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:06.728 11:55:06 -- spdk/autotest.sh@402 -- # lcov --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:09.270 11:55:09 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:12.567 11:55:12 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:43:12.567 11:55:12 -- spdk/autorun.sh@1 -- $ timing_finish 00:43:12.567 11:55:12 -- common/autotest_common.sh@736 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:43:12.567 11:55:12 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:43:12.567 11:55:12 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:43:12.567 11:55:12 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:43:12.567 + [[ -n 3585387 ]] 00:43:12.567 + sudo kill 3585387 00:43:12.578 [Pipeline] } 00:43:12.593 [Pipeline] // stage 00:43:12.598 [Pipeline] } 00:43:12.612 [Pipeline] // timeout 00:43:12.617 [Pipeline] } 00:43:12.631 [Pipeline] // catchError 00:43:12.636 [Pipeline] } 00:43:12.652 [Pipeline] // wrap 00:43:12.658 [Pipeline] } 00:43:12.671 [Pipeline] // catchError 00:43:12.680 [Pipeline] stage 00:43:12.682 [Pipeline] { (Epilogue) 00:43:12.696 [Pipeline] catchError 00:43:12.698 [Pipeline] { 00:43:12.711 [Pipeline] echo 00:43:12.713 Cleanup processes 00:43:12.719 [Pipeline] sh 00:43:13.006 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:43:13.006 4074247 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:43:13.020 [Pipeline] sh 00:43:13.307 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:43:13.307 ++ grep -v 'sudo pgrep' 00:43:13.307 ++ awk '{print $1}' 00:43:13.307 + sudo kill -9 00:43:13.307 + true 00:43:13.319 [Pipeline] sh 00:43:13.604 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:43:25.815 [Pipeline] sh 00:43:26.103 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:43:26.103 Artifacts sizes are good 00:43:26.118 [Pipeline] archiveArtifacts 00:43:26.125 Archiving artifacts 00:43:26.317 [Pipeline] sh 00:43:26.620 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:43:26.635 [Pipeline] cleanWs 00:43:26.645 [WS-CLEANUP] Deleting project workspace... 00:43:26.645 [WS-CLEANUP] Deferred wipeout is used... 00:43:26.652 [WS-CLEANUP] done 00:43:26.654 [Pipeline] } 00:43:26.671 [Pipeline] // catchError 00:43:26.683 [Pipeline] sh 00:43:26.965 + logger -p user.info -t JENKINS-CI 00:43:26.973 [Pipeline] } 00:43:26.986 [Pipeline] // stage 00:43:26.992 [Pipeline] } 00:43:27.006 [Pipeline] // node 00:43:27.011 [Pipeline] End of Pipeline 00:43:27.061 Finished: SUCCESS
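For reference, the coverage post-processing that closes the run above (autotest.sh@394 through @404) reduces to an lcov capture, a merge with the pre-test baseline, and a chain of -r filters. The sketch below restates that sequence with shortened placeholder paths (SPDK, OUT) and without the full list of --rc genhtml/geninfo switches; it is an approximation of the logged commands, not a verbatim copy.

  # Placeholders: SPDK is the checked-out tree, OUT the results directory; both
  # stand in for the long Jenkins workspace paths seen in the log.
  SPDK=/path/to/spdk
  OUT=$SPDK/../output
  LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"

  # Capture coverage accumulated during the tests, tagged with the hostname.
  lcov $LCOV_OPTS -q -c --no-external -d $SPDK -t "$(hostname)" -o $OUT/cov_test.info

  # Merge with the baseline captured before the tests ran.
  lcov $LCOV_OPTS -q -a $OUT/cov_base.info -a $OUT/cov_test.info -o $OUT/cov_total.info

  # Drop sources that are not SPDK's own code from the combined report.
  lcov $LCOV_OPTS -q -r $OUT/cov_total.info '*/dpdk/*' -o $OUT/cov_total.info
  lcov $LCOV_OPTS -q -r $OUT/cov_total.info --ignore-errors unused,unused '/usr/*' -o $OUT/cov_total.info
  lcov $LCOV_OPTS -q -r $OUT/cov_total.info '*/examples/vmd/*' -o $OUT/cov_total.info
  lcov $LCOV_OPTS -q -r $OUT/cov_total.info '*/app/spdk_lspci/*' -o $OUT/cov_total.info
  lcov $LCOV_OPTS -q -r $OUT/cov_total.info '*/app/spdk_top/*' -o $OUT/cov_total.info

  # Intermediate tracefiles are removed once cov_total.info exists.
  rm -f $OUT/cov_base.info $OUT/cov_test.info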